Can I get single quotes with my FetchXML, please?

“I’d like my FetchXML with double quotes, please”, said no Dynamics developer, ever.

If you ever export FetchXML from the Advanced Find tool in Dynamics, you’re most likely doing so because you want to use that FetchXML in code somewhere. Isn’t it annoying, therefore, that FetchXML exported from the tool uses double quotes to denote XML attribute values, when the first thing you have to do every time is change the double quotes to single quotes before you can use the FetchXML in your .NET code?
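As a minimal illustration (assuming a connected IOrganizationService named service), single-quoted FetchXML drops straight into a verbatim C# string with no escaping of the embedded quotes:

// Single quotes inside the FetchXML mean the verbatim string needs no escaping
string fetchXml = @"
<fetch top='10'>
  <entity name='account'>
    <attribute name='name' />
  </entity>
</fetch>";

EntityCollection results = service.RetrieveMultiple(new FetchExpression(fetchXml));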

I don’t have a solution for that issue, per se, but many developers may not be aware that the FetchXML Builder tool in the XrmToolBox has an option to render all FetchXML using single quotes.

This means not only that FetchXML rendered by the FetchXML Builder (FXB) tool itself can be produced in a more developer-friendly way, but also that any of the other plugins in the toolbox that use FXB, such as the View Designer, can produce nicely formatted single-quoted XML.

You can even paste FetchXML exported from CRM into the builder, close and reopen it, and it will have replaced all the double quotes with single quotes…

Before formatting in FXB

After formatting in FXB


Programmatically removing components from a MS Dynamics form or view

The ‘additive’ nature of CRM deployments means that it can be easier to add components to an instance of CRM than it is to remove them. With the additive model, even if you delete a component from your development environment and then export and deploy the solution which contained that component elsewhere, the deleted component will still exist in the target environment. As a result, CRM deployments can become littered with redundant components, because the housekeeping required to remove them from production environments can be seen as quite costly.

As part of our Continuous Integration process, we have started to remove redundant artifacts from CRM. For example, attributes, entities and workflows which are no longer needed are removed to keep the solutions as clean as possible. Scripting the housekeeping tasks is actually quite straightforward, and once you have done this a couple of times you will see that it is easy to create a set of standard scripts to remove redundant components.

Removing entities and their relationships can all be scripted using standard calls to the metadata API such as DeleteRelationshipRequest and DeleteEntityRequest. However, these components cannot be deleted while they are referenced on a form. Therefore, if you want to automate the deletion process, you will need to script the removal of the items from the forms on which they appear.

Imagine the scenario where you have two entities in a one-to-many relationship, neither of which is required any more. Entity A is the ‘parent’ of Entity B. The Entity A form contains a section with an associated view of Entity B, and the Entity B form displays a lookup to Entity A.

To remove the entities manually, the process would be:

  1. Remove any references to either entity from either entity’s forms
  2. Delete the relationship between Entity A and Entity B
  3. Delete Entities A and B

The process has to be carried out in this order because:

  • You cannot delete a custom entity whilst it has a relationship to another custom entity
  • You cannot delete a relationship between entities whilst it is ‘in use’ on a form (either as a lookup on the child entity’s form or as an associated view on the parent entity’s form)

Due to the additive nature of Dynamics explained earlier, deleting the entities in our Dev environment and then exporting the solution which contained them and importing it into another environment will not result in the entities being deleted from the target organisation. So we will need to write a script to remove the components from our target organisations. Deleting relationships and entities is straightforward, and we can easily script it using the DeleteRelationshipRequest and DeleteEntityRequest messages. However, we have to deal with the forms first, and doing so in our development environment will not help in other environments, because once the entities are deleted from Dev their forms are no longer part of the solution we export.
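For the deletion steps themselves, a hedged sketch (the relationship schema name and entity logical names below are placeholders):

// Delete the relationship first, then the entities (names are hypothetical)
service.Execute(new DeleteRelationshipRequest { Name = "new_entitya_entityb" });
service.Execute(new DeleteEntityRequest { LogicalName = "new_entityb" });
service.Execute(new DeleteEntityRequest { LogicalName = "new_entitya" });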

Fortunately, scripting the removal of fields or sections from a form can be done quite easily. We can write a query which retrieves the form XML, identify the XML node which needs removing, update the form XML and publish the new version of the form.

The script will have three stages:

1 – Retrieve the Form XML for the entity in question

// Get the formxml for the entity
QueryExpression formQuery = new QueryExpression
{
    EntityName = "systemform",
    ColumnSet = new ColumnSet(true),
    Criteria = new FilterExpression
    {
        Conditions =
        {
            new ConditionExpression
            {
                AttributeName = "objecttypecode",
                Operator = ConditionOperator.Equal,
                Values = {entitytypeCodeOfTheEntity}
            }
        }
    }
};

// Execute the query to get the form records we may need to amend
EntityCollection forms = service.RetrieveMultiple(formQuery);

2 – Remove references to the entity from the form XML
For each form record returned by the above query, we can load the formxml attribute as XML, search for the node which references the item we want to remove, remove that node, write the amended XML back to the formxml attribute, and update the record.

We will use an XPath expression to search the XML for the relevant node. The XPath expression will depend on whether we are looking for an attribute (a lookup field) or a section (containing an associated view).

string xpath;
string searchString;
if (!string.IsNullOrEmpty(sectionName))
{
    xpath = "//section[@name='" + sectionName + "']";
    searchString = sectionName;
}
else if (!string.IsNullOrEmpty(attributeName))
{
    xpath = "//cell[control/@id='" + attributeName + "']";
    searchString = attributeName;
}

If the XPath expression returns a result, we can then remove the node and update the form:

string formXml = form["formxml"].ToString();

if (formXml.Contains(searchString))
{
    Console.WriteLine("Form contains " + searchString);

    // Remove the section from the document
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(formXml);

    XmlNode root = doc.DocumentElement;
    XmlNode node = root?.SelectSingleNode(xpath);

    if (node == null)
    {
        // Fall back to matching a whole tab by its label
        xpath = "//tab[labels/label/@description='" + searchString + "']";
        node = root?.SelectSingleNode(xpath);
    }

    if (node != null)
    {
        node.ParentNode?.RemoveChild(node);
        Console.WriteLine("Node has been removed");

        // Write the amended XML back and update the form record
        form["formxml"] = doc.InnerXml;
        service.Update(form);
    }
    else
    {
        Console.WriteLine("Node not found");
    }
}

3 – Publish the changes
Your changes to the form will not take effect unless you publish them (which means you can practise the process in Dev until you are sure it is right).
You can publish changes to an entity using a method like this:

        public void PublishChangesToEntity(string entityName, IOrganizationService service)
        {
            Console.WriteLine("Publishing entity " + entityName);
            PublishXmlRequest request = new PublishXmlRequest
            {
                ParameterXml = @"<importexportxml>
                                       <entities>
                                          <entity>" + entityName + @"</entity>
                                       </entities>
                                       <nodes/>
                                       <securityroles/>
                                       <settings/>
                                       <workflows/>
                                    </importexportxml>"
            };

            try
            {
                service.Execute(request);
                Console.WriteLine("Changes to entity " + entityName + " published");
            }
            catch (Exception e)
            {
                // If entity not found, most likely has been deleted in previous run - not an error
                if (e.Message.Contains("was not found in the MetadataCache"))
                {
                    Console.WriteLine(e);
                }
                else
                {
                    Console.WriteLine(e);
                    throw;
                }
            }
        }

We will need to do the same if any views of Entity A contain attributes from Entity B – we will not be able to delete the relationship between the entities whilst those attributes appear on a view. As before, we can query the view’s fetchxml and layoutxml attributes and update them. For example, we can use a FetchXML query like the one below to find all views based on a particular entity:

<fetch top='100' >
  <entity name='savedquery' >
    <all-attributes/>
    <filter type='and'>
       <condition attribute='returnedtypecode' operator='eq' value='{0}' />                  
    </filter>
  </entity>
</fetch>

and we can then modify the “layoutxml” and “fetchxml” attributes of the returned view definitions to remove any reference to the attributes which we want to remove.
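A minimal sketch of that clean-up, assuming the savedquery record has been retrieved as view and the column to drop is held in attributeName (both names are hypothetical):

// Remove the column from the view's layout
XmlDocument layout = new XmlDocument();
layout.LoadXml(view["layoutxml"].ToString());
XmlNode cell = layout.SelectSingleNode("//cell[@name='" + attributeName + "']");
cell?.ParentNode?.RemoveChild(cell);
view["layoutxml"] = layout.InnerXml;

// Remove the attribute from the view's query
XmlDocument fetch = new XmlDocument();
fetch.LoadXml(view["fetchxml"].ToString());
XmlNode attr = fetch.SelectSingleNode("//attribute[@name='" + attributeName + "']");
attr?.ParentNode?.RemoveChild(attr);
view["fetchxml"] = fetch.InnerXml;

service.Update(view);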

With this technique it is easy to automate the removal of sections or attributes from forms, and of attributes from views, which will then enable you to programmatically remove relationships between entities, and the entities themselves.


Formatting the output of LINQPad’s DumpContainer

LINQPad’s Dump Containers can be used to output data to a fixed place within the output pane. A good example of this would be displaying a timer or counter on the screen whilst a particular action is taking place.

A simple example of a Dump Container is shown below, where the content of the container is modified within a for loop.
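A minimal sketch of that pattern:

DumpContainer dc = new DumpContainer();
dc.Dump();
for (int i = 0; i < 100; i++)
{
    // Overwrite the container's content in place on each iteration
    dc.Content = i;
    Thread.Sleep(100);
}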

Like LINQPad’s progress bar, Dump Containers can be really useful for indicating the progress of a running script. What makes them even more useful is that you can format the content of a Dump Container, which can be used to draw the user’s attention to something such as an exception, or the completion of the script. You can easily amend the look of the container using CSS.

Here’s a trivial example of how the content and the style of the Dump Container can be changed on the fly:
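A minimal sketch, assuming your LINQPad version exposes the DumpContainer.Style property:

var dc = new DumpContainer();
dc.Dump();
for (int i = 1; i <= 50; i++)
{
    dc.Content = "Processed " + i + " records";
    Thread.Sleep(50);
}

// Restyle the container on the fly to flag completion (Style assumed available)
dc.Style = "color:green; font-weight:bold";
dc.Content = "Complete!";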

You can download the second script from Instant Share here.


System.IO.FileLoadException and other errors in plugins deployed by CRM developer toolkit to Dynamics 2015

On creating a new plugins project for our Dynamics 2015 organisation with the latest version of the Dynamics 365 Developer Toolkit, we experienced all sorts of errors, either on deployment or on execution.

On trying to register a plugin with IsolationMode=”None” we would experience this error:

System.IO.FileLoadException: Microsoft Dynamics CRM has experienced an error. Reference number for administrators or support: #156BBD18

Registering plugins in Sandbox isolation mode would work, but the plugins would fail on execution with this error:

SandboxAppDomainHelper.Execute: The plug-in type could not be found in the plug-in assembly

and

System.ArgumentNullException: Value cannot be null.

Trying to delete or re-register the step would give us this error:

Unable to load the plugin type

The issue turned out to be that, by default, the toolkit downloads the latest version (8.0.2.1) of the Microsoft.CrmSdk.CoreAssemblies package from NuGet, and this is a higher version than Dynamics CRM 2015 can use. To fix the issue, we just downgraded the package using the NuGet Package Manager:
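From the Package Manager Console, the downgrade looks something like this (the 7.x version number here is illustrative – pick the one matching your deployment):

PM> Install-Package Microsoft.CrmSdk.CoreAssemblies -Version 7.1.1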

This fixes the problem and plugins will now work in either Sandbox or ‘None’ isolation mode.


Displaying Progress using LINQPad’s Util.ProgressBar

LINQPad contains a number of very useful utilities, as documented here. One of these is Util.ProgressBar, which allows you to display the progress of your script within the results pane. This can be very useful when, say, you are updating a large number of records and want an indication of progress without having to write additional information out to the results pane for every record or group of records. Whilst the use of the ProgressBar is very straightforward, I was caught out by a very simple detail when I first started using it, so I thought it worth documenting here.

Here’s the most straightforward example of the ProgressBar:

var pb = new Util.ProgressBar("Analyzing data");
pb.Dump();
for (int index = 0; index <= 100; index++)
{
    pb.Percent = index;
    Thread.Sleep(10);
}

This will result in a progress bar being displayed in the results pane like this:

The general procedure for using the progress bar is as follows.

Instantiate the progress bar, with its optional title:

var pb = new Util.ProgressBar("Analyzing data");

Dump the progress bar to the results pane so it is visible:

pb.Dump();

Update the progress bar, using either its ‘Percent’ property (an integer) or its ‘Fraction’ property (a double).

var pb = new Util.ProgressBar("Progress");
pb.Dump();
float total = 90;
float complete = 32;
pb.Percent = (int)((complete/total)*100);

or

var pb = new Util.ProgressBar("Progress");
pb.Dump();
float total = 90;
float complete = 32;
pb.Fraction = complete/total;

Optionally, you can also update the ‘Caption’ property of the progress bar, so that the progress can be displayed both visually and as text:

The main ‘gotcha’ you are likely to encounter when using the progress bar is that you will often be dealing with integer values when calculating progress – for example, fifty-five records processed out of a hundred. Look what happens when we use these integers to calculate the progress:
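A sketch of the pitfall:

var pb = new Util.ProgressBar("Progress");
pb.Dump();
int total = 100;
int complete = 55;

// Integer division: 55 / 100 == 0, so the bar never moves
pb.Percent = (complete / total) * 100;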

Why doesn’t the progress bar display correctly, as it does here?

The reason for this has nothing to do with LINQPad or the ProgressBar utility – it is down to C# arithmetic. In C#, if you perform arithmetic on two integers, the result will be an integer, as we can see from the following LINQPad screenshots.

Dividing two integers gives an integer result:

Making one of the numbers a float produces output as a float:

For this reason, we must always cast at least one of the numbers on which we base our progress to a float or a double in order for our progress bar to render correctly.

Here is a script (also available via Instant Share) which demonstrates the use of both the Fraction and Percent progress measures, as well as dynamically updating the caption whilst the script is running. It will give you output like this:

void Main()
{
	DisplayProgress(32);
	DisplayProgressPercent(76);
	Console.WriteLine("Done");
}

private void DisplayProgress(float steps) 
{	
	var pb = new Util.ProgressBar("Initialising....");
	pb.Dump();
	Thread.Sleep(2000);
		
	for (int i = 1; i < steps+1; i++)
	{		
		pb.Caption = "Completed " + i + " out of " + steps;
		pb.Fraction = i / steps;
		Thread.Sleep(50);
	}
}


private void DisplayProgressPercent(float steps)
{
	var pb = new Util.ProgressBar("Initialising....");
	pb.Dump();
	Thread.Sleep(2000);

	for (int i = 1; i < steps + 1; i++)
	{
		double percent = (i / steps) * 100;
		pb.Caption = "Completed " + percent + "%";
		pb.Percent = (int)percent;
		Thread.Sleep(100);
	}
}

LINQPad also has a utility called Util.Progress. By setting this to a value between 0 and 100, you control the progress bar displayed at the bottom of the LINQPad IDE.
The script below demonstrates this functionality, as well as using a ‘DumpContainer’ which allows you to repeatedly dump output to a fixed point on the results pane:

DumpContainer dc = new DumpContainer();
dc.Dump();
for (int i = 0; i < 100; i++)
{	
	dc.Content = i;
	Util.Progress = i;
	Thread.Sleep(100);
}

Once you get the hang of it, using the progress utilities can be a very handy way to visualise the progress of long-running scripts.


State-based vs Interaction-based testing in Dynamics CRM

When writing unit tests one can focus on two different aspects of the code under test, giving two types of test: ‘state-based’ and ‘interaction-based’. The first concentrates on the end result achieved by the code under test, whereas the second focuses on how that result was achieved.

This distinction is sometimes called ‘white-box vs black-box’ testing. With white-box (interaction-based) testing we can ‘see inside’ the code as it runs and observe its internals. With black-box (state-based) testing, the workings of the machinery are hidden from us; all we can see are the inputs and outputs.

White box (interaction-based) testing vs black box (state-based) testing – images designed by Freepik

When writing unit tests for CRM plugins or workflows, until recently the only available method was to use a mocking framework, which restricts you to writing white-box tests. Mocking frameworks focus on interactions – they work by creating a mocked implementation of an interface (such as IOrganizationService), and as part of your test you have to specify what should happen when a particular call to the mocked interface takes place (i.e. “if the RetrieveMultiple method is called with these parameters, return this entity collection”). Your assertions take the form “the Update method should have been called, with these parameters, so many times”. From this we can infer that the code is working correctly, but we cannot actually verify the state of the CRM database at the end of the test.

FakeXrmEasy gives us the ability to do state-based testing for Dynamics CRM code, because it not only mocks the Organization Service, it also creates an in-memory CRM database with which you can interact during your tests. All calls to the Organization Service made during your tests (Creates, Updates, Retrieves, Associates, etc.) behave in exactly the same way as they would against a real instance of CRM, and you can verify your code by checking the state of the in-memory database using those same methods.

I’ve recently started using FakeXrmEasy for all my CRM unit testing because I believe that writing state-based tests results in tests which are more maintainable – a critical factor when building up a large library of tests for a project over a period of time.

State-based tests are more maintainable for the following reasons:

You can change the implementation of your code without invalidating your tests

Imagine that we have a plugin which is called when a record is updated, and which in turn updates a number of related child records of the record being updated. Your initial implementation might use the Update method of the Organization Service, and your test would assert that the Update method was called with certain parameters a certain number of times (once for each child record).

// Assert that an update was called n times on an entity whose name is "newChildEntityName"
Service.AssertWasCalled(
    s => s.Update(Arg<Entity>.Matches(p => p.LogicalName == "newChildEntityName")),
    o => o.Repeat.Times(numberOfChildRecords));

If you were later to decide that, for performance reasons, you wanted to change your code to use an ExecuteMultiple request instead of executing many update statements in a row, this would invalidate your test, even though your code would still be achieving exactly the functional result you required.

Using state-based testing instead, we don’t have to know the details of how the code works in order to write the test – i.e. we don’t have to know whether a query uses FetchXML or a QueryExpression, and we don’t have to specify that ‘this type of query executed with these parameters will return this result set’. We just set up the faked CRM context at the start of the test, and verify the state of the records we are interested in at the end, using standard CRM API calls.

Here is an example in LINQPad of the simplest test I could write, to demonstrate that you really only need to know the CRM API in order to use FakeXrmEasy:
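A minimal sketch of such a test (this assumes the v1.x-style FakeXrmEasy API):

// Arrange: a faked context exposing an in-memory organisation service
var context = new XrmFakedContext();
IOrganizationService service = context.GetOrganizationService();

// Act: use the ordinary CRM API against the in-memory database
var account = new Entity("account");
account["name"] = "Acme";
Guid id = service.Create(account);

// Assert: verify the resulting state with a plain Retrieve
Entity created = service.Retrieve("account", id, new ColumnSet("name"));
if ((string)created["name"] != "Acme")
    throw new Exception("State assertion failed");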

Black box tests are more readable

State-based tests are more easily understood and easier to write. You don’t have to master the sometimes convoluted syntax associated with mocking frameworks, and your tests will use the same methods as your production code. If you are a CRM developer, your tests will just use the already familiar methods of the CRM API (Create, Update, Retrieve, etc.).

Readability of tests is important because it makes it easier to understand what the test is actually testing, and so easier to maintain in the long run. It also makes it easier for a developer who is new to the project to understand the code and its tests.

Black box tests are more “BDD”

Black box testing lends itself to a more BDD (behaviour driven development) approach. Tests which make assertions about the expected state of the application are easier to turn into BDD style feature files than those which test the inner workings of the application, and are easier to discuss with business analysts, testers and end users.

No need to amend production code

With most mocking frameworks, when dealing with sealed classes such as RetrieveAttributeResponse, it is not possible to set the returned AttributeMetadata property as it is read-only, so it is necessary to write a wrapper class and amend production code to use it, as described in this article by Lucas Alexander. Whilst this approach does work, it is preferable not to have to amend production code just to make it testable, and it adds another source of possible confusion for developers coming across the code for the first time. With FakeXrmEasy this is not necessary: calls to RetrieveAttributeRequest, RetrieveEntityRequest and other similar messages work without any modification to production code.

When should we carry out White Box testing?

It is still possible to perform interaction-based testing with FakeXrmEasy, and there are times when you might still want to do this. Interaction-based tests are useful if you want to test that your implementation is working as expected. A couple of examples of this are:

If you have conditional logic in your code which (say) switches between using a Retrieve and a RetrieveMultiple under certain conditions, your test could assert that the correct method had been called.

If you notice a bug such that a call to the API is being called more times than expected, you could introduce an assertion to check that (say) a query is only being carried out once.

Here’s the same test from earlier, this time written as an interaction test:
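A sketch of that style of assertion, assuming (as in FakeXrmEasy v1.x) that the faked service is a FakeItEasy fake, so FakeItEasy’s A.CallTo assertions apply:

var context = new XrmFakedContext();
IOrganizationService service = context.GetOrganizationService();

var account = new Entity("account");
account["name"] = "Acme";
service.Create(account);

// Assert on the interaction rather than on the resulting state
A.CallTo(() => service.Create(A<Entity>.That.Matches(e => e.LogicalName == "account")))
 .MustHaveHappened();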

This LINQPad script contains the above tests, with both the state-based and interaction-based assertions present.

This article by Jordi Montaña, the creator of FakeXrmEasy, goes into the differences between using a normal mocking framework and FakeXrmEasy in much more depth.

I hope I’ve managed to convey some of the many advantages that I feel FakeXrmEasy brings to unit testing in CRM.


The case of the missing xml file

Our automated deployment of a CRM Solution file started failing recently and it was quite a journey to find out what was causing the error.

We automate the deployment of our CRM solutions using the Build/Release processes in Visual Studio Team Services (VSTS). In short the process looks like this:

  1. Using the Dynamics 365 Developer Toolkit, we extract and unpack a solution into a customisations project in Visual Studio.
  2. We check these customisations into VSTS.
  3. This triggers a build of the solution, and we use Wael Hamze’s VSTS Extensions to pack the solution back into a deployable zip file (see the sketch after this list).
  4. The successful build triggers a release to CRM, using the PowerShell utilities from the ALM toolkit from ADXStudio. (We can’t use Wael’s extension for this because it doesn’t work with our version of CRM 2015.)
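Under the covers, the pack step is equivalent to running the SDK’s SolutionPackager; a hedged example (the paths are placeholders):

SolutionPackager.exe /action:Pack /zipfile:C:\build\OurSolution.zip /folder:C:\src\OurSolution\Customisations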

The first we knew of the problem was our automated release failing in VSTS with the error:

“importjob With Id = feb7b87f-1ff9-4914-ba14-0116ca959584 Does Not Exist”

As you can see from the logs, the import gets kicked off and then fails within a couple of minutes with the ‘import job does not exist’ error.

A few searches around the topic suggested that the issue could be caused by timeouts when uploading large solution files, but this did not seem to apply to us: we would not expect timeouts to suddenly start occurring persistently after one fairly minor check-in, and deployments of this large solution normally take 5-10 minutes, whereas the error was occurring in under two minutes.

The first thing we tried was to see if we could import the solution file as exported directly from the CRM front-end, using the front-end solution import process.

This worked OK, so we then tried importing the solution file that is created as part of the automated build in step 3 above.

So we copied the solution file from the server where the VSTS agent runs and tried to import this through the CRM front-end. This failed almost immediately with the following error:

So now we’ve got something to work with – there’s something wrong with the file that the solution packager is creating.

Switching on verbose tracing on the server to which we were deploying enabled us to find a more helpful error message in the w3wp-CRMWeb trace file:

Crm Exception: Message: The import file is invalid. XSD validation failed with the following error: ‘The element ‘EntityRelationship’ has incomplete content. List of possible elements expected: ‘EntityRelationshipType’.’. The validation failed at:…

Googling the error message took us to this very useful article, which suggested that a relationship mentioned in the Relationships.xml file in the ‘Other’ folder of the unpacked solution might not have a corresponding file in the Relationships folder. Sure enough, Relationships.xml contained a reference to a newly created relationship, and the Visual Studio project referenced the definition file for this relationship – but that file had not been checked into source control:

Checking that missing file into source control fixed the problem.

Lessons learned:

1.) Those error messages are not always what they seem – sometimes the only place you’ll get really useful information is the CRM trace logs.

2.) Make sure you add any new files which are part of your customisation project into source control.


Error while loading code module: ‘Microsoft.Crm.Reporting.RdlHelper’

When trying to compile a FetchXML-based report in Visual Studio 2015 which had been developed in an earlier version of Visual Studio, we were getting the following error:

Building the report, ReportName.rdl, for SQL Server 2008 R2, 2012 or 2014 Reporting Services.
[rsErrorLoadingCodeModule] Error while loading code module: ‘Microsoft.Crm.Reporting.RdlHelper, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’.
Details: Could not load file or assembly ‘Microsoft.Crm.Reporting.RdlHelper, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. The system cannot find the file specified.
[rsCompilerErrorInExpression] The Language expression for the textrun ‘Table0_Details0.Paragraphs[0].TextRuns[0]’ contains an error: [BC30456] ‘Crm’ is not a member of ‘Microsoft’.

We had already installed the CRM Report Authoring Extension for CRM 2015, but this helper dll was not being picked up by Visual Studio.

We were able to locate the dll on the SSRS Server of our CRM Development environment – in our case it was located here:

C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin

but it may be elsewhere for different versions of SQL or CRM.

We copied the dll to the developer PCs and then registered it in the GAC (Global Assembly Cache) on those machines by running the gacutil.exe utility. In some cases this might be available from a normal command prompt, but we found we had to run it from the Visual Studio 2015 ‘Developer Command Prompt for VS2015’.

On Windows 10, you can access this from the Start menu as shown below. You need to run the program as an Administrator, so find it and then right-click and choose Run as administrator:

(You will have to get to this in a slightly different way if you are using Windows 7, but it is still accessible from the Start button).

With the new window open, navigate to the folder where the dll is and run the command gacutil -i Microsoft.Crm.Reporting.RdlHelper.dll
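In a session that might look like this (the folder path is a placeholder):

cd "C:\temp\RdlHelper"
gacutil -i Microsoft.Crm.Reporting.RdlHelper.dll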

If you are compiling a report in Visual Studio and it is giving the error above, you might need to point the report to this version of the RdlHelper.dll

Open the report in Visual Studio and select Report -> Report Properties from the menu. (NB This option is only available if the report is open and you have selected an element in the report designer/previewer.)

In the Report Properties dialogue box, select ‘References’.

Even if the RdlHelper is listed, it may be that this is the wrong version. Click on ‘Add’ and then browse to the newly registered dll in the GAC. This will be at a location like this:

C:\Windows\assembly\GAC_MSIL\Microsoft.Crm.Reporting.RdlHelper\7.0.0.0__31bf3856ad364e35\Microsoft.Crm.Reporting.RdlHelper.dll although the exact location will depend on which version of the helper you have installed.

Remove any pre-existing references (which are most likely to a different version of the dll).


Using Impersonation in Dynamics CRM

While you can’t log in as another user through the Dynamics CRM front end, as a System Administrator it is possible to impersonate another user when making calls to the CRM API.

This is very useful if you want to be able to do the following:

  • Test the correct implementation of security roles.
  • Check which records can be seen by which users if sharing is being used.
  • Search for charts or views belonging to different users.

For example, if you have complex security rules involving automatic sharing of records with certain privileges, these can be very time-consuming to test through the CRM front end. Writing an integration test where a record is created by one user and the permissions on this record are immediately checked for another user is very simple using impersonation.

As another example, we recently had a support issue where a team were all using a personal view created by one of the team. A few of the team were on holiday and no-one knew who had originally created the view. It was straightforward to cycle through a list of users in a particular team and list the personal views to which each had access and who created them. This information is not readily accessible through the front end, nor through the API without being able to switch users, but using impersonation it was very easy to retrieve.
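A sketch of that scenario – the variable names are hypothetical, and it assumes the team’s users (with their fullname column) have already been retrieved into teamMembers:

foreach (Entity member in teamMembers.Entities)
{
    // Impersonate each team member in turn
    orgService.CallerId = member.Id;

    // Personal views are stored in the userquery entity
    EntityCollection views = orgService.RetrieveMultiple(new QueryExpression("userquery")
    {
        ColumnSet = new ColumnSet("name", "createdby")
    });

    foreach (Entity view in views.Entities)
        Console.WriteLine(member["fullname"] + " can see '" + view["name"]
            + "' created by " + ((EntityReference)view["createdby"]).Name);
}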

Here is an example which works in LINQPad but which you can easily amend to work in a Visual Studio project. The line in which we switch the caller id is indicated by the comment in the code.

/*
This sample shows how you can change the callerid of the organisation service proxy
allowing you to impersonate different users
*/
void Main()
{
	// Log in with system administrator account
	var orgService = ConnectToCrm(Util.GetPassword("dev org service url"));

	// Display name of currently logged in user
	WhoAmIResponse whoami = (WhoAmIResponse)orgService.Execute(new WhoAmIRequest());
	Entity u = orgService.Retrieve("systemuser",whoami.UserId, new ColumnSet("fullname"));
	Console.WriteLine("Current user is " + u["fullname"]);
	
	// Create an account and display the user who created it
	CreateAccountAndShowWhoCreatedIt(orgService);

	// Get the user details of a different user
	string username = Util.GetPassword("myusername");
	Entity user = GetUser(orgService, username);
	
	// Switch the Caller ID to that user
	Console.WriteLine("Switching caller id to " + username);
	orgService.CallerId = user.Id;

	// Create an account and display the user who created it
	CreateAccountAndShowWhoCreatedIt(orgService);
}

private void CreateAccountAndShowWhoCreatedIt(OrganizationServiceProxy orgService)
{
	Entity account = new Entity("account");
	Guid accountId = orgService.Create(account);
	account = orgService.Retrieve("account", accountId, new ColumnSet("createdby"));
	Console.WriteLine("Account record was created by " + ((EntityReference)account["createdby"]).Name);	
}

private static OrganizationServiceProxy ConnectToCrm(string uri)
{
	string UserName = Util.GetPassword("sysopsuser");
	string Password = Util.GetPassword("sysopspassword");

	AuthenticationCredentials clientCredentials = new AuthenticationCredentials();
	clientCredentials.ClientCredentials.UserName.UserName = UserName;
	clientCredentials.ClientCredentials.UserName.Password = Password;

	OrganizationServiceProxy serviceProxy = new OrganizationServiceProxy(new Uri(uri), null, clientCredentials.ClientCredentials, null);

	Console.WriteLine("Connected to " + serviceProxy.ServiceConfiguration.ServiceEndpoints.First().Value.Address.Uri);

	return serviceProxy;
}

public static Entity GetUser(OrganizationServiceProxy orgService, string domainName)
{
	QueryExpression queryUser = new QueryExpression
	{
		EntityName = "systemuser",
		ColumnSet = new ColumnSet("systemuserid", "businessunitid", "domainname"),
		Criteria = new FilterExpression
		{
			Conditions =
					{
						new ConditionExpression
						{
							AttributeName = "domainname",
							Operator = ConditionOperator.Equal,
							Values = {domainName}
						}
					}
		}
	};

	EntityCollection results = orgService.RetrieveMultiple(queryUser);
	return results.Entities[0];
}

Here is a LINQPad script of the above code.

For fairly obvious reasons, it is not possible to change the caller id to that of a user who is disabled, or to a user who has no security roles. Such a user would not be able to make any API calls, so there would be no point in doing so anyway.


Integrating LINQPad scripts into Source Control

Whilst there is no built-in source control integration in LINQPad, it is still easy to keep your LINQPad scripts under source control. This means that scripts can easily be shared amongst your team and you can take full advantage of the benefits of source control, such as versioning and safe file storage in an external repository.

As an example I will show you how we have integrated LINQPad scripts into our Visual Studio Team Services code repository – the same pattern should work equally well for other source control systems.

First of all, create a folder within your VCS which will hold your LINQPad scripts. We have a folder called ‘LINQPad queries’, with sub-folders for different projects and another folder for useful samples. Once you have created that folder, you can link its location on your computer to the ‘My Queries’ folder in LINQPad by clicking on the ‘Set Folder…’ link…

… and then selecting this as your custom location for queries:

Now that your LINQPad queries folder is under source control, any changes you make will be flagged as Pending Changes in Visual Studio, and you can add new files to the repository using the Source Control Explorer in Visual Studio.

If your team all map their ‘My Queries’ folder to the same location, then all files in the repository become shared with the team.

You can avoid uploading passwords and other sensitive information in your scripts by using the Util.GetPassword method, as explained in this blog post.

PS If you just want to share LINQPad scripts with other users without putting them into source control, LINQPad has a feature called Instant Share which allows you to upload a file to a repository on the LINQPad server – you will then be given a URL for the uploaded file, which you can share with colleagues or post on your blog, like this script which shows how to generate a PDF in LINQPad.
