Friday, 7 October 2016

Performance: Metadata calls vs Exceptions

Today I was embarking on a mission to check the existence of a field on an entity. If it exists, do something; if not... well, do nothing! My first thoughts were "metadata... urgh!". As all of us in the CRM development world know, the metadata is notoriously slow. Reading Stack Overflow I found that the recommended answer was always to use the metadata. So I thought to myself, there's got to be a better way to do this. Which gave me an idea... which performs faster, exceptions or the metadata? To hit the metadata you need to run something like this:
RetrieveEntityRequest request = new RetrieveEntityRequest
{
    EntityFilters = EntityFilters.Attributes,
    LogicalName = "account"
};
var response = (RetrieveEntityResponse)service.Execute(request);
AttributeMetadata first = null;
foreach (var element in response.EntityMetadata.Attributes)
{
    if (element.LogicalName == "xyz_fieldname")
    {
        first = element;
        break;
    }
}
var fieldExists = first != null;
I chose not to use LINQ/expressions when checking for the field, just to try to keep it as performant as possible. This code is fine in general, but boy is it slow. So I came up with this as an alternative instead:
try
{
    var query = new QueryExpression("account");
    query.Criteria.AddCondition("accountid", ConditionOperator.Equal, "294450db-46c9-447e-a642-3babf913d800");
    query.NoLock = true;
    query.ColumnSet = new ColumnSet("xyz_fieldname");
    service.RetrieveMultiple(query);
}
catch
{
    // ignored
}
Using a query expression has two advantages. First, you're running the query against the primary key (accountid). You don't care about the id itself; the code will either throw an exception if the field doesn't exist, or return no records if it succeeds (one record if you are the unluckiest guy on the planet and get a matching guid... but even then it would be faster). The second advantage of a query expression is that you can run it using a nolock. You really don't care about the result set; the purpose isn't to find a record, it's to see whether including the column forces an exception. So how do you test the execution of this? I wrote a console app that used a stopwatch to wrap each call. The first time I ran the tests I ran each console app independently, so as not to skew the results by code optimisation depending on which call ran first, etc. The results I got for a single call were that the exception generally executed about 1.5 times faster. Sample code is this:
private static void RunExceptionTests(IOrganizationService service, int steps)
{
    Console.WriteLine("Testing exception with {0} steps", steps);
    var stopwatch = Stopwatch.StartNew();
    for (int i = 0; i < steps; i++)
    {
        try
        {
            var query = new QueryExpression("account");
            query.Criteria.AddCondition("accountid", ConditionOperator.Equal, "294450db-46c9-447e-a642-3babf913d800");
            query.NoLock = true;
            query.ColumnSet = new ColumnSet("xyz_fieldname");
            service.RetrieveMultiple(query);
        }
        catch
        {
            // ignored
        }
    }
    stopwatch.Stop();
    Console.WriteLine("Milliseconds taken: {0}", stopwatch.ElapsedMilliseconds);
}

private static void RunMetadataTest(IOrganizationService service, int steps)
{
    Console.WriteLine("Testing metadata with {0} steps", steps);
    var stopwatch = Stopwatch.StartNew();
    for (int i = 0; i < steps; i++)
    {
        RetrieveEntityRequest request = new RetrieveEntityRequest
        {
            EntityFilters = EntityFilters.Attributes,
            LogicalName = "account"
        };
        var response = (RetrieveEntityResponse)service.Execute(request);
        AttributeMetadata first = null;
        foreach (var element in response.EntityMetadata.Attributes)
        {
            if (element.LogicalName == "xyz_fieldname")
            {
                first = element;
                break;
            }
        }
        var fieldExists = first != null;
    }
    stopwatch.Stop();
    Console.WriteLine("Milliseconds taken: {0}", stopwatch.ElapsedMilliseconds);
}
I ran several tests on CRM Online, varying between multiple/single calls to the metadata vs multiple/single retrieve multiple calls throwing an exception. Exceptions always outperformed the metadata. If you're performing multiple calls within the same code it jumps to about 3 times faster (I'm guessing optimisations come into play). Because of how plugins are loaded into memory for faster execution, I would wonder whether it would regularly perform 2-3 times faster than a metadata call. Either way, the bottleneck is the call to the metadata service, which cannot be optimised unless you introduce caching and more code complexity. Also, if the field exists you won't get an exception, which means yet another performance bonus. The only scenario I haven't tested is running it against a massively heavily used entity... but if you're hitting your database so hard that a nolock retrieve cannot return in an acceptable time frame, you probably have bigger problems to worry about! To conclude, can I just say the following: Microsoft, will you fix your metadata service already! It's been slow for donkey's years and is a real annoyance when you have to use it.
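As an aside, the SDK also has a lighter-weight metadata message that I didn't include in the benchmark: RetrieveAttributeRequest retrieves the metadata for a single attribute rather than every attribute on the entity, and faults if the attribute doesn't exist. A sketch only (I haven't timed it against the numbers above, so take its relative performance with a pinch of salt):

```csharp
// Sketch: check field existence via a single-attribute metadata call.
// RetrieveAttributeRequest faults if the attribute does not exist.
bool fieldExists;
try
{
    var request = new RetrieveAttributeRequest
    {
        EntityLogicalName = "account",
        LogicalName = "xyz_fieldname"
    };
    service.Execute(request);
    fieldExists = true;
}
catch (FaultException<OrganizationServiceFault>)
{
    fieldExists = false;
}
```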

Thursday, 6 October 2016

Extended CrmSvcUtil - Exporting an attribute list

A little feature of the Extended CrmSvcUtil I neglected to mention in my previous post (it was a late feature!) is the ability to export a list of strongly typed attribute names. This helps remove "magic strings" from your source and introduce some strongly typed attribute checking. This is useful, even when using Early Bound, as you often need to check the existence of an attribute.

For example, let's say you are writing a plugin that fires on update of contact and you wish to include a pre image containing the parent account. What you will often see is code like this:

if (preImage.Contains("parentcustomerid") == false)
{
    //trace / throw an exception stating the parent account hasn't been provided...
}

Checking for null is not the same as checking for existence, because some contacts might not have a parent account set. So the check for existence is often quite important. Using an attribute list allows you to strongly type this instead as follows:

if (preImage.Contains(ContactAttributes.ParentCustomer) == false)
{
    //trace / throw an exception stating the parent account hasn't been provided...
}
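Under the hood, an exported attribute list is presumably just a static class of string constants, something along these lines (a sketch of the shape only — ParentCustomer comes from the example above, the other names are assumed, and the actual generated code may differ):

```csharp
// Illustrative shape of a generated attribute-name class (a sketch, not the tool's actual output).
public static class ContactAttributes
{
    public const string ParentCustomer = "parentcustomerid";
    public const string FirstName = "firstname";      // assumed example
    public const string LastName = "lastname";        // assumed example
}
```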

Another area where this is incredibly useful is when building queries using Query Expressions or Fetch Expressions. If you want to include a set of columns, or set a condition on an attribute you will end up with this type of code:

QueryExpression qe = new QueryExpression();
qe.EntityName = "contact";
qe.ColumnSet = new ColumnSet();
qe.ColumnSet.Columns.Add("parentcustomerid");

Being able to specify the attribute strongly, like the following, looks much better:

qe.ColumnSet.Columns.Add(ContactAttributes.ParentCustomer);

This helps work around many issues, like typo bugs or name changes, as in cases where somebody accidentally creates a field called "new_ProjjectType" and wishes to fix the name of the field. If 5 or 6 plugins already reference this field and perform some logic based on its value, you might end up with multiple "magic strings" to fix across your code. Using an attribute list is a one-fix solution to the problem.

The source for the Extended CrmSvcUtil can be downloaded from GitHub, with the latest release available to download from here

Wednesday, 28 September 2016

Extended CrmSvcUtil - A Neater Early Bound Generator

For quite some time I have persisted with late bound objects, because within plugins it makes life a bit simpler. The idea of Early Bound objects is good in theory, but the Microsoft-provided CrmSvcUtil just doesn't cut it in terms of how it gives you the code (one big SDK file is not my idea of fun). You used to be able to split different aspects of this out, but they have removed that functionality in recent versions. This has probably been the main reason I stuck with Late Bound until recently.

Issues with CrmSvcUtil

The biggest issue I have with the existing tool is the lack of naming ability. Say you have a custom entity called "new_project" with an OptionSet attribute on it called "new_projecttype". Ignoring all the other entities that will get exported regardless of whether you actually need them or not, and ignoring all the standard attributes that will be exported, the naming convention you'll end up with is this:

// Class...
public partial class new_project...

// Property...
public Microsoft.Xrm.Sdk.OptionSetValue new_projecttype

// Enum
public enum new_project_new_projecttype

I've chosen the type OptionSet in particular to convey an additional problem, but there are a host of issues with what is generated.
  1. We end up with 1 huge unmanageable file.
  2. A really bad default naming convention that is not easy to change. Sure, you could manually edit the names, but they will get overwritten with each regeneration of the metadata.
  3. OptionSet properties are created as the type OptionSetValue. Surely if we are using Early Bound then our option set properties should be the equivalent enum type?

So the tool is quite lazy in what it does. It's a bit of a "bare minimum" to get you over the fence, and then you're left to your own devices. Quite frankly, you'd be quicker just writing the classes yourself, and as long as you honour the correct attributes it would work perfectly fine.

I have investigated many tools, and all of them fell short. So this and all of the above issues caused the birth of my own pet project which has made my life a lot easier.

Extended Svc Util

It's named simply so, because all it does is extend and build on top of what the existing CrmSvcUtil does. Once the code has been generated it does not intercept the generation of the "monster file", but instead piggybacks on the code generated for that to produce its own files. Let's take a look at what you can do.

To fix the problems in the above files you could set up a configuration like this:

<configuration>
 <configSections>
  <section name="schemaDefinition" type="CodeGenerator.Config.SchemaDefinition, CodeGenerator" allowLocation="true" allowDefinition="Everywhere"/>
 </configSections>
 <schemaDefinition groupOptionSetsByEntity="true" exportAttributeNames="true" entitiesFolder="..\MyProject.DomainModels" enumsFolder="..\MyProject.DomainModels">
  <entities>
   <entity name="new_project" friendlyName="Project">
    <attributes>
     <attribute name="new_name"  friendlyName="Name"/>
     <attribute name="new_projecttype" friendlyName="ProjectType" />
    </attributes>
   </entity>
  </entities>

  <optionSets>
   <!-- Global OptionSets-->
   <optionSet name="new_someglobaloptionset" friendlyName="SomeGlobalOptionSet" entity="Global" />

   <!-- Project OptionSets-->
   <optionSet name="new_project_new_projecttype" friendlyName="Project_ProjectType" entity="new_project" />
  </optionSets>
 </schemaDefinition>
</configuration>

So what does this do? Firstly, you can add friendly names to your entities, attributes and option sets. Secondly, the export will use the correct enum for your option sets rather than using the out of the box OptionSetValue. So what you'll end up with instead is this:

// Class...
public partial class Project

// Property...
public Project_ProjectType? ProjectType

// Enum
public enum Project_ProjectType

The next configuration item I'd like to point out is that not only can you specify where the files are generated, you can also decide to group all of your option sets into 1 file per entity rather than separate files. These are defined at the top of the configuration under the schema definition. All of this causes 2 files to be generated, named:

  • Project.cs
  • Project.Enums.cs


Finally, only entities you have specified within the list will be exported to their corresponding files; all others will be ignored. All of the code will still be exported to the output file you specify, so if you want to double check that source against what this tool exports you can do so.

All of this makes life much easier and more readable in the Early Bound world, and makes it much quicker to generate the classes exactly as you want. I have included a global option set in there just as an example of how to deal with that. In effect, all of the global option sets in this example will be exported to a file called Global.Enums.cs. You can rename out of the box fields, status fields and their accompanying enums too, so you're not just stuck with your custom entities and fields.

Source

I have uploaded the source to GitHub (https://github.com/conorjgallagher/Dynamics.ExtendedSvcUtil). There are further instructions up there on how to utilise the DLL it builds with CrmSvcUtil. It's fully open source, so feel free to download, edit, and use to your heart's delight. In the root folder of the project I have included the latest built version of the DLL, so if you just want that feel free to download it.

If you find bugs please feel free to submit a comment. I have not fully decided on how best to manage contributions, so if you are interested please contact me and we can discuss.

Enjoy!

Wednesday, 14 September 2016

CRM Actions - the mysterious erroneous prefix

Actions were introduced in CRM 2013 and are a very powerful feature of Dynamics CRM. Quite frequently you might have had a particular piece of custom code you wanted to make available in different areas within your code base, such as both JavaScript and plugins. Previously you could not easily encapsulate this code without writing something quite custom and, quite frankly, a little hacky. For an example of how you could previously achieve this, see my blog post about executing stand alone C# code within CRM.

Actions have made this redundant as you can now very easily encapsulate a common piece of functionality that you can call in many different ways and places (See MSDN)

So that's all sweet and awesome, right?

Not quite. As with all new features of CRM you eventually weed out a few bugs, some a little more complex than others. And actions are unfortunately no different. Recently we ran into a mysterious issue on our dev organisation in that it started throwing errors any time we tried to execute an action. The issue did not surface on any of our other environments. On closer inspection I read the error message a little more closely and spotted something:
 Request not supported: new_CustomActionName
...Hang on there a second tiger, "new_" is not our default prefix! Where did this suddenly come from?

One thing dawned on us. We had recently recreated our dev from scratch due to some issues with the environment. One of the particular tasks we performed before we reset it was take a backup of the default solution. This was then imported directly after the reset to get us back to base. It all looked hunky dory until we noticed that actions had been impacted by this; they mysteriously got imported with the "new" prefix instead of honouring what our original default prefix was.

The fix?

Luckily this issue had not yet infected our test environment. And doubly luckily, the actions hadn't changed since we reset the environment. That meant we could delete the actions on dev, create a fresh solution on our test environment with the (still correct) actions in it, and import this back into dev.

I would hazard a guess that if we had changed the default prefix before we imported the backed up default solution we may have avoided the problem. But now I will never know! To be completely safe, my advice is to back up all actions in a solution with the correct publisher before you embark on an environment reset in which you intend to restore a default solution.

Friday, 17 June 2016

How to get the parent window's Xrm context in JavaScript

In the latest version (CRM 2016) the Xrm context seems to be buried in a frame. So to get it you need to do this:

window.top.opener.frames[0].Xrm.Page.data.entity.getEntityName()

Locating the parent context seems to change with different releases of CRM so I'd guess it's technically not supported. At least it has the potential of not surviving an upgrade so something to be aware of!

CRM Online - rsProcessingError

Some day in the future I intend to cover a more detailed post for CRM Online on diagnosing why you might get rsProcessingAborted or rsProcessingError messages on SSRS reports. It's a bit more involved than on premise due to the logging access limitations of CRM Online. In an on premise environment you can simply log on to your SSRS server and grab the log to get more detail. You don't get access to the same level of logging with CRM Online.

For now I will describe a very rudimentary/old school way to analyse a report I had deployed to CRM Online that was throwing the following error:



Not much to go on; you get a Retry and a Close, but unfortunately no log to download. Searching around CRM gave no hints as to what was causing this.

What was my particular issue?

Firstly, you might be wondering what the exact problem was that I ran into on my report. It was an issue with images which we attach to an entity. We use these to download and display images on the report. The SSRS report was set up to render a JPEG, but somebody uploaded a BMP file. Oddly, this was working fine from within Visual Studio, but the Report Viewer did not like it. I would have liked the image to simply not render instead of throwing an error like the above. But how did I discover the problem?

Step 1 - It works on my machine!

I started with the old developer trick to see if it ran on my machine. Open up Visual Studio and run the report for the same report/record combination. You can do this by modifying the parameter called CRM_Filterednew_entity, where obviously "new_entity" refers to the entity you run the report against. This parameter is simply some fetch xml that you can build using an advanced find to filter to the record in question.

Once you have performed the above, does the report run? If not, you should get an error a bit more substantial than what the Report Viewer was displaying.

If the report does indeed run, check instead for warnings. These can often surface as a processing error in Report Viewer. Nuke all of your warnings where possible!

None of the above worked for me.

Step 2 - Trim the hedges

It's working within Visual Studio, so what can you do? Create a copy of the report and start trimming pieces off it to find the root cause. Start with the more likely parts, for example:

  • Remove a complex table. Deploy the report. Does it run?
  • Remove some complex expressions. Deploy the report. Does it run?
  • Remove sorts. Deploy the report. Does it run?
  • Remove show/hide expressions etc. Does it run?
  • Remove images. Does it run?
  • ...

This is where I stopped, as it highlighted my issue, but you can see where I'm going with this. Keep stripping bits and pieces until you hit what's making the report fail. You can undertake this process of elimination in quite a binary fashion to speed things up. For example, remove half of the report; does the error go away? Remove the other half; does the error go away? Drill into the half with the error and repeat the process.




Thursday, 16 June 2016

Dynamics CRM - Attribute Mapping Issue

Dynamics CRM, like most software products, will have bugs. Most aren't world ending to be fair, but often those slightly less visible ones linger for quite some time. Take this issue I came across today.

We have a custom entity in our system which has 3 lookups to the account entity. Each lookup serves a different purpose:


  • Company
  • PR Company
  • Joint Broker
All of the above are a different type of account in our system, but are accounts all the same. You can see the problem that exists straight off if you take a look under the hood. Open up one of the relationships and check the mappings section. It contains all 3 of the above as part of the attribute mapping.



The issue with this is that if you open an account and create a new one of these custom records (from any relationship) it will populate ALL three of the lookups with the same account. You cannot delete these mappings, or disable them. Unfortunately you are stuck with them. This bug has existed for at least 2 years, going by an issue raised in the CRM community forum back in 2014.

So what is the workaround?

If you need this populated based on the relationship the record was created through, you might be in a bit of a pickle. I currently know of no supported way to do this without writing your own add buttons for the ribbons, and that is quite a lot of effort for very little reward.

On the other hand, if you only ever want one (or particular) lookups populated, there is a way to fix it. Write a JavaScript function a bit like the following:


function FixAttributeMappings() {
    // If we are in a create form blank the attribute mappings
    if (Xrm.Page.ui.getFormType() == 1) {
        Xrm.Page.getAttribute("new_jointbroker").setValue(null);
        Xrm.Page.getAttribute("new_prcompany").setValue(null);
    }
}

Call that function in the OnLoad of your entity and the lookups are blanked whenever you create a new record of that type.



Monday, 23 May 2016

Why you should use CRM recommended values for optionsets

Sometimes when I'm on a project I encounter CRM consultants determined to use less obscure values for their optionsets than the ones CRM recommends for your publisher, i.e. 1, 2, 3, 4... You might think this won't cause massive headaches, but I will give you 3 pretty big reasons why I recommend you shouldn't do this.

Conflicting solutions

The most obvious reason is another company also wants to use the same values as you. This will go unnoticed in custom fields, but as soon as you both customise the values of an out of the box optionset, like accountcategorycode, you'll quickly run into problems. Let me show you what happens. In the following example I set up a couple of new options on accountcategorycode and changed the value of "New Option 1" to 3. I have left "New Option 2" as its recommended value for the purpose of this sample:



Another publisher is also using these same values, rather than the recommended ones, in a managed solution. What you end up with is an overwrite of the options with the shared values:



As you might notice, our option "New Option 1" has been renamed to "Other 1". This might not seem detrimental in this particular example, but what if the change was from "Bad Debt Customer" to "Most Awesome Customer", or something to that effect? This should be enough to prevent people from doing this, but unfortunately it's usually swept under the carpet as a really unlikely scenario.

Status Codes

The next issue is the inconsistent usage introduced by fields like status code. You may not realise this when you embark on your adventure to ignore the CRM recommended values, but you cannot apply this standard to all option sets, in particular status codes:



CRM disables the value field, so you're stuck with what it recommends and you've got an inconsistent standard. So should you just make an exception and use recommended values for status codes, but continue with your own standard for other fields? Or do you extend this exception to all out of the box fields? There are few things I hate more than inconsistency in naming and value conventions, so I say just stick with the CRM recommended values for all fields.

Annoying message boxes

The next big reason you shouldn't do this is it becomes incredibly frustrating to add new options to an optionset if you want to maintain this standard, even more so when you're adding a lot of new options. You will get bombarded with a message like the following every time you change the number for an option:



Ultimately, why not just stick with the CRM recommended values? I know they are sometimes unreadable and a pain to copy & paste into JavaScript / code, but these values are not visible to end users, only to us developers/technical consultants. So the impact of leaving them as the recommended values is relatively minimal.

Friday, 20 May 2016

CRM 2016 - Synchronous Workflow bug

Today I spent quite some time working on a bug in one of our workflows which ended up being a bug within CRM. The error I was getting was:
The given key was not present in the dictionary.
Let me show you how to recreate it, and how to spot that this is in fact an issue within CRM as opposed to your code.

In our CRM we have a concept of an Attendee, which is how we link people to a meeting. We also have the concept of a Target entity, which is a parent of an Attendee. The lookup to Target on Attendee is not mandatory, as some attendees are just regular people (i.e. not linked to a "Target"). When an Attendee is linked to a meeting I want to check a target has been set and, if it is, do some "stuff" based on Target information. So I create a workflow with just this check in it:



For now I am not going to add anything else as this is all I need to highlight the bug. Next, as with a lot of systems, we have bad data and not all Targets have a name. For example:



I run my workflow in its current state against this record and I get an error:

Downloading the log file gives you a trace like this:

[Microsoft.Crm.ObjectModel: Microsoft.Crm.Extensibility.InternalOperationPlugin]
[46f6cf4c-14ae-4f1e-98a1-eae99a37e95c: ExecuteWorkflowWithInputArguments]
Starting sync workflow 'CG - workflow with bug', Id: e717769d-8c1e-e611-80f4-5065f38aa981
Entering ConditionStep1_step: Check Attendee.Target contains data
Sync workflow 'CG - workflow with bug' terminated with error 'The given key was not present in the dictionary.'

I'm not sure what the CRM workflow code is doing under the hood here, but it must be trying to reference the name field in code. If you set a value for Name on the Target, the problem goes away.
Pro tip: Put comments in your workflows! They are included as part of the trace



Thursday, 19 May 2016

Automapper, Dynamics CRM and excluding fields - Part 2

In my previous post, Automapper, Dynamics CRM and excluding fields, I introduced the concept of an "Excludable property". This is just one side of the call - POSTing/PUTting records using a REST API. What about a GET? If you are using something like excludables you'll notice that the JSON returned does not look like the proposed JSON you POST or PUT. In fact, it looks something like this:

{
    Id:
    {
        Include: true,
        Value: "aef7b4c1-98f6-4f53-9be3-2fa72d1e319d"
    },
    Name:
    {
        Include: true,
        Value: "Hello"
    },
    Address1_Line1:
    {
        Include: true,
        Value: "Home!"
    }
}

Which is how our Excludables map to JSON. How do we stop this?

Extend the IExcludable interface

To convert our excludables correctly we need to intercept the conversion and handle these properties manually. The first problem we hit is that although we know it's an Excludable<>, we don't know what the raw type is. The best way around this is to expand the IExcludable interface to allow exposing a raw value, like this:

    public interface IExcludable
    {
        bool Include { get; set; }
        object RawValue { get; set; }
    }

The Excludable<> class just implements it on top of the existing value field, like this:

    public object RawValue
    {
        get { return value; }
        set { this.value = (T)value; }
    }
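Putting the two pieces together, a minimal Excludable<T> would look something like this (a sketch for context only — this is an assumed reconstruction, as the real class comes from the previous post and carries more behaviour than shown here):

```csharp
// Minimal sketch of Excludable<T> (assumed reconstruction, not the original class).
public interface IExcludable
{
    bool Include { get; set; }
    object RawValue { get; set; }
}

public class Excludable<T> : IExcludable
{
    private T value;

    public bool Include { get; set; }

    public T Value
    {
        get { return value; }
        set { this.value = value; }
    }

    // Untyped view of the value so converters don't need to know the generic type.
    public object RawValue
    {
        get { return value; }
        set { this.value = (T)value; }
    }
}
```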


View Model Json Converter

Now that we can find the raw value without needing to know the underlying generic type we can intercept any excludable and convert it. This is the full converter class that results:

public class ViewModelJsonConverter : JsonConverter
{
    public override bool CanRead => false;

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        JObject o = JObject.Parse(JsonConvert.SerializeObject(value, Formatting.Indented,
            new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore }));
        foreach (var propertyInfo in value.GetType().GetProperties())
        {
            if (propertyInfo.CanRead)
            {
                var currentValue = propertyInfo.GetValue(value);
                IExcludable excludable = currentValue as IExcludable;
                if (excludable != null)
                {
                    if (excludable.Include)
                    {
                        // Value types and strings become a JValue; other reference types a JObject.
                        if (excludable.RawValue == null || excludable.RawValue.GetType().IsValueType || excludable.RawValue is string)
                        {
                            o.Property(propertyInfo.Name).Value = new JValue(excludable.RawValue);
                        }
                        else
                        {
                            o.Property(propertyInfo.Name).Value = JObject.FromObject(excludable.RawValue);
                        }
                    }
                    else
                    {
                        o.Remove(propertyInfo.Name);
                    }
                }
            }
        }
        o.WriteTo(writer);
    }

    public override bool CanConvert(Type objectType)
    {
        return objectType.BaseType == typeof(ViewModel);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        throw new System.NotImplementedException();
    }
}

In the above code our view models had inherited a ViewModel class. Also worth pointing out is how you handle both value and reference types (with strings needing an extra helping hand, as they're a bit different!). For value types and strings you need to create a JValue, whereas for reference types you need to expose them as a JObject.

Now, just add this converter in your global.asax, exactly how you added your excludable converter from the previous post:

GlobalConfiguration.Configuration.AddJsonConverter(new ViewModelJsonConverter());


This code effectively flattens the excludable class into the generic types, and outside of the world of your REST API nobody is any the wiser.

Monday, 7 March 2016

Automapper, Dynamics CRM and excluding fields

In the past when I've built web based applications Automapper was always one of those libraries that I both loved and hated at the same time. In the Dynamics CRM world it can often be a bit of a dangerous tool to unleash on a website, especially when utilised by developers that don't know CRM very well. Let me explain why.

Attribute Collections

Most CRM developers will already know what I'm talking about here, but for those of you not privy to this: the properties of an entity from a CRM database table are not actually surfaced quite like EntityFramework or NHibernate; they are surfaced using a dictionary. This is subsequently wrapped in an attribute collection. One of the main advantages of this is you can control partial updates/retrieves without worrying about the state of the entire entity. On the negative side, this plays havoc when libraries like Automapper are used to populate the entities, especially when using early bound objects.

Here's a sample to give you an idea of what I'm getting at. Firstly, let's presume our domain models are Early Bound objects exported from Dynamics. (Self plug! Personally I use this open source tool: https://github.com/conorjgallagher/Dynamics.ExtendedSvcUtil).

Now, let's say we have a REST service and we want to utilise it to update an account. So we build a view model like this:

public class AccountViewModel
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Address1_Line1 { get; set; }
}

And we wire up automapper to map that across to our domain entity. In the latest version I believe that would look like this:

CreateMap<Account, AccountViewModel>().ReverseMap();

In this example we can create a new account via this REST service with the following JSON:

{
    Id: "aef7b4c1-98f6-4f53-9be3-2fa72d1e319d",
    Name: "Hello",
    Address1_Line1: "Home!"
}

Within our controller we will receive an AccountViewModel populated with all the relevant data. We can then use Automapper to push this view model into an early bound Account entity using the same mapping as above. As long as all the field names match we're good to go. Even though you are mapping to strongly typed fields, what you end up with under the hood is a dictionary like this:

"accountid"="aef7b4c1-98f6-4f53-9be3-2fa72d1e319d"
"nane"="Hello"
"address1_line1"="Home!"

Null vs Unset

A really nice feature of CRM is that it differentiates between null and unset. If you exclude a field from the attribute collection it will also be excluded from the update statement that hits SQL. This is very useful for performance and for limiting which plugins / workflows fire.
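In other words (a sketch - `service` and `accountId` are assumed to already exist):

```csharp
using Microsoft.Xrm.Sdk;

// Unset: "name" is not in the attribute collection, so the UPDATE that CRM
// generates doesn't touch the name column at all.
var partialUpdate = new Entity("account") { Id = accountId };
partialUpdate["address1_line1"] = "Home line 1!";
service.Update(partialUpdate);

// Null: "name" IS in the collection, with a null value, so CRM explicitly
// blanks the column (and fires any plugins/workflows registered on name).
var blankingUpdate = new Entity("account") { Id = accountId };
blankingUpdate["name"] = null;
service.Update(blankingUpdate);
```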

Back to Automapper and our view models above - a problem arises when you exclude fields from the JSON. For example, if you subsequently send the following after the previous update:

{
    Id: "aef7b4c1-98f6-4f53-9be3-2fa72d1e319d",
    Address1_Line1: "Home line 1!"
}

This will hit our view model as this:

{
    Id = "aef7b4c1-98f6-4f53-9be3-2fa72d1e319d",
    Name = null,
    Address1_Line1 = "Home line 1!"
}

Which I guess is expected, because how else can you represent an excluded value in the view model? If we don't deal with this we hit a more fundamental issue further down the chain in that our attribute collection will end up like this:

"accountid"="aef7b4c1-98f6-4f53-9be3-2fa72d1e319d"
"nane"=null
"address1_line1"="Home line 1!"

And we'll blank our account name in CRM. Not good!

Excludable

This takes me on to a pattern I would like to propose for this type of issue: the concept of an excludable property. Looking at how nullable value types work, surely we can come up with a similar concept! I won't take you through the entire evolution, but here is the interface and struct I now propose:

    public interface IExcludable
    {
        bool Include { get; set; }
    }

    public struct Excludable<T> : IExcludable
    {
        private bool hasValue;
        internal T value;
        private bool include;

        public Excludable(T value)
        {
            this.value = value;
            this.hasValue = value != null;
            this.include = true;
        }

        public bool HasValue
        {
            get { return hasValue; }
        }

        public bool Include
        {
            get { return include; }
            set { include = value; }
        }

        public T Value
        {
            get { return value; }
        }

        public T GetValueOrDefault()
        {
            return value;
        }

        public T GetValueOrDefault(T defaultValue)
        {
            return hasValue ? value : defaultValue;
        }

        public override bool Equals(object other)
        {
            if (!include || !hasValue) return other == null;
            if (other == null) return false;
            return value.Equals(other);
        }

        public override int GetHashCode()
        {
            return hasValue ? value.GetHashCode() : 0;
        }

        public override string ToString()
        {
            return hasValue ? value.ToString() : "";
        }

        public static implicit operator Excludable<T>(T value)
        {
            return new Excludable<T>(value);
        }

        public static explicit operator T(Excludable<T> value)
        {
            return value.Value;
        }

        public static bool operator ==(Excludable<T> x, T y)
        {
            return x.Equals(y);
        }

        public static bool operator !=(Excludable<T> x, T y)
        {
            return !x.Equals(y);
        }
    }

Overriding all the comparisons was required to get it to perform both value and null comparisons correctly. Finally, we can change our view model to use this instead:

    public class AccountViewModel
    {
        public Excludable<Guid> Id { get; set; }
        public Excludable<string> Name { get; set; }
        public Excludable<string> Address1_Line1 { get; set; }
    }

Json Converter gotcha

There's one final problem we need to solve that I'll highlight now. If we send our object without the address in it, all seems to work just fine: the property comes through as excluded, as expected. But if we instead set a property to null, it too ends up marked as Include=false! The reason is down to how the JSON formatter deserializes the data into the given object: it doesn't actually call your constructor, but does something dirty under the hood. How do we fix this? Create a custom JSON converter:

    public sealed class ExcludableConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return objectType.IsGenericType && objectType.GetGenericTypeDefinition() == typeof(Excludable<>);
        }
        public override bool CanWrite { get { return false; } }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            // Deserialize the inner value as the generic type argument (reader.Value alone
            // would hand a raw string to, say, Excludable<Guid>), then run it through the
            // constructor so Include gets explicitly set to true.
            var innerValue = serializer.Deserialize(reader, objectType.GetGenericArguments()[0]);
            return Activator.CreateInstance(objectType, innerValue);
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            throw new NotImplementedException();
        }
    }

And set this up on application start within your global.asax:

            var jsonFormatter = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
            jsonFormatter.SerializerSettings.Converters.Add(new ExcludableConverter());

Now, when a null comes in we explicitly tell the converter to create a new object for us.
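To see the two cases side by side, here's a sketch of deserializing with the converter registered (assuming Json.NET and the types above):

```csharp
using Newtonsoft.Json;

var settings = new JsonSerializerSettings();
settings.Converters.Add(new ExcludableConverter());

// "Name" present but null: the converter runs and calls the constructor,
// so Include == true and HasValue == false - an explicit blank.
var withNull = JsonConvert.DeserializeObject<AccountViewModel>(
    "{ \"Name\": null }", settings);

// "Name" absent entirely: the converter never runs, the struct keeps its
// default value, and default(Excludable<string>).Include == false - excluded.
var withoutName = JsonConvert.DeserializeObject<AccountViewModel>("{}", settings);
```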

Wiring up Automapper

The final step is getting Automapper to utilise this new way of excluding items. In Automapper you can set a condition when mapping so that you can do conditional ignores. I found the simplest way to add this rule was to write an extension method:

public static void AddExcludableRule<S, D>(this IMappingExpression<S, D> m)
{
    m.ForAllMembers(opt => opt.Condition(
        s => !(s.SourceValue is IExcludable) || ((IExcludable)s.SourceValue).Include));
}

Now, whenever I want to wire up an object that has excludables I just add that call to the end:

CreateMap<Account, AccountViewModel>().ReverseMap().AddExcludableRule();



Thursday, 25 February 2016

Dynamics CRM Web API - Pluralization bug

I started investigating the Web API while it was in preview mode so have noticed it move quite quickly over time. Mostly these are intentional changes, but sometimes they're bugs. For example, a change I spotted since my previous post (Web API Preview - Unrecognized 'Edm.String' literal...) is how it deals with lookup and regarding references. In the preview you performed the following:
/api/data/appointments?$select=subject&$filter=regardingobjectid eq a199a199-a199-a199-a199-a199a199a199

This has changed to the following in the v8.0 release:
/api/data/v8.0/appointments?$select=subject&$filter=_regardingobjectid_value eq a199a199-a199-a199-a199-a199a199a199

The second thing I spotted is in fact a bug. This may have existed in the preview, but I never spotted it and it had me scratching my head for quite some time. We have the concept of an "event day", which we have named "inv_eventday". When creating this entity CRM creates a collection schema to allow you to reference it via the Web API. For example, the appointment entity has an equivalent collection schema called "appointments"; notice the pluralisation.

So, on to my event days. What you would expect to find as a collection schema is this:
/api/data/v8.0/inv_eventdays

Unfortunately, this is not the case. You get an error "Resource not found for the segment 'inv_eventdays'." I wondered if I had a case sensitivity issue or something, but every combination of capitals / camel case / pascal case just didn't seem to work.

After some time I decided I should check out the metadata to see if I could uncover the correct casing. For reference the following is how you get a list of all entity metadata:
/api/data/v8.0/EntityDefinitions

This uncovered a definition with the following schema name for the entity:
"SchemaName":"inv_eventday"

Which seems to match my expectations on what it should be. So what's the problem? Reading further down I find the Collection Schema name:
"CollectionSchemaName":"inv_eventdaies"

That's some funky plural we've got there! Looks like the algorithm decides that anything ending in "y" is pluralised as "ies". Obviously this isn't correct for all words. E.g. the plural of "sky" is indeed "skies", but the plural of "day" is in fact "days".
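I have no visibility of the actual implementation, but a naive rule along these lines (purely hypothetical code, to illustrate the behaviour) would produce exactly this output:

```csharp
using System;

public static class NaivePluraliser
{
    // Hypothetical reconstruction: blindly treat any trailing "y" as "-ies",
    // without checking whether a consonant precedes the "y".
    public static string Pluralise(string name)
    {
        if (name.EndsWith("y", StringComparison.OrdinalIgnoreCase))
            return name.Substring(0, name.Length - 1) + "ies";
        return name + "s";
    }
}

// NaivePluraliser.Pluralise("inv_eventday") => "inv_eventdaies" (wrong)
// NaivePluraliser.Pluralise("sky")          => "skies" (right, by luck)
```

The correct English rule only applies "-ies" when a consonant precedes the "y", which is presumably the check that's missing.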

Now... where do you log CRM bugs...