Darren Liu's Blog

Archive for the ‘Performance’ Category

CRM 2013 Maintenance Jobs


The maintenance jobs in CRM 2013 are similar, if not identical, to the CRM 2011 jobs. I had to dig into the details of each job for my project; these are my findings, and I hope they are useful for everyone. Please remember to reschedule these maintenance jobs after the CRM installation is complete.

For each maintenance job below I list its name, its purpose, and its default frequency along with my scheduling recommendation.

Deletion Service

The deletion service maintenance operation now cleans up subscription tracking records for deleted metadata objects as they expire.

  • If Outlook has not synced for 90 days, the job removes its subscription from CRM.
  • Cleans up POA records. Un-sharing a record does not remove its rows from the PrincipalObjectAccess (POA) table; they stay there for 90 days and are then cleaned up by this job.
  • Cleans up failed/stuck workflow records for records synchronized with Outlook.

By default, the job executes every 24 hours. Reschedule it to run at off-peak hours, e.g. 11:00 PM.

Indexing Management

Validates that system-managed indexes exist for all entities and recreates any missing indexes.

This matters mostly for CRM Online and only affects us during configuration of the solution. It automatically creates an index for each column used in a Quick Find search if the index does not already exist in the CRM database.

By default, the job executes every 24 hours. Reschedule it to run at off-peak hours, e.g. 11:00 PM.

Reindex All

Reorganizes/rebuilds fragmented indexes depending on the amount of fragmentation.

It executes the p_ReindexAll stored procedure, which selects every index in the database with greater than 30% fragmentation and rebuilds each one in online mode. Online mode means the indexes are not taken offline while the rebuild happens; they remain active and can still be used.

Large enterprise CRM implementations with high data volumes should disable this reindex job if a custom job already recreates the indexes and updates statistics.

By default, the job executes every 24 hours. Disable it by setting the Job Next Run value to a date far in the future, e.g. 12/31/2999.
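Before disabling the job, it can help to see which indexes it would currently pick up. The check below mirrors the selection logic of p_ReindexAll; a sketch to run against the organization database, with 30% fragmentation and 25 pages matching the procedure's defaults:

```sql
-- List the indexes p_ReindexAll would rebuild: more than 30% fragmented,
-- at least 25 pages, clustered (type 1) or nonclustered (type 2)
SELECT schema_name(o.schema_id) AS SchemaName,
       object_name(s.object_id) AS TableName,
       i.name AS IndexName,
       s.avg_fragmentation_in_percent AS FragPct,
       s.page_count AS Pages
FROM sys.dm_db_index_physical_stats(db_id(), NULL, NULL, NULL, NULL) s
JOIN sys.objects o ON s.object_id = o.object_id
JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE s.avg_fragmentation_in_percent > 30
  AND i.type IN (1, 2)
  AND s.page_count >= 25
ORDER BY s.avg_fragmentation_in_percent DESC;
```

If this list is routinely large during business hours, that is a good sign the rebuild work should move into your own maintenance window.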

Cleanup Inactive Workflow Assemblies

Seeks custom workflow assemblies that are no longer referenced in workflow rules or in-process jobs. Those unreferenced assemblies are then deleted. Consider the scenario where you register version 2.0 of a custom workflow assembly.  You may update your rules to reference the new version, but some in-progress jobs may still be referencing version 1.0.  Once those jobs have completed, this maintenance job will clean up the version 1.0 assembly that is no longer referenced by rules/jobs.

It executes the p_CleanupInactiveWorkflowAssemblies stored procedure.

By default, the job executes every 24 hours. Reschedule it to run at off-peak hours, e.g. 11:00 PM.

Create Audit Partition

Alters the partitioning scheme for the auditbase table (SQL Enterprise only).

By default, the job executes every month. Reschedule it to run at off-peak hours, e.g. 11:00 PM.

Check for MUI Updates

Detects upgrades to language (MUI) packs and schedules additional async operations to perform individual language provisioning.

By default, the job executes every 24 hours. If you don’t have a language pack installed, you can disable the job by setting the Job Next Run value to a date far in the future, e.g. 12/31/2999.

Refresh Entity Row Count

Refreshes the record count snapshot statistics leveraged by enhanced query plans.

This job is important because it influences how Quick Find works: the counts are used to come up with the correct query plan.

By default, the job executes every 24 hours. Reschedule it to run at off-peak hours, e.g. 11:00 PM.

Refresh Sharing Count

Refreshes the POA read snapshot statistics leveraged in enhanced query plans.

This job is important because it influences how Quick Find works: the counts are used to come up with the correct query plan.

By default, the job executes every 24 hours. Reschedule it to run at off-peak hours, e.g. 11:00 PM.
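For the curious, both refresh jobs maintain snapshot tables in the organization database. In my environment these appear as RecordCountSnapshot and PrincipalObjectAccessReadSnapshot; treat the names as an assumption and verify them against your own database before relying on them:

```sql
-- Inspect the snapshot data the two refresh jobs maintain
-- (table names observed in one environment; verify in yours)
SELECT TOP 10 * FROM RecordCountSnapshot;               -- per-entity record counts
SELECT TOP 10 * FROM PrincipalObjectAccessReadSnapshot; -- POA read counts per principal
```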

The Deletion Service executes the following SQL queries.

exec sp_executesql N'select SubscriptionId, SystemUserId
from Subscription (nolock)
where datediff(dd, LastSyncStartedOn, getutcdate()) >= @expireSubscriptionInDays',N'@expireSubscriptionInDays int',@expireSubscriptionInDays=90

exec sp_executesql N'delete from PrincipalObjectAccess
where (AccessRightsMask = 0 or AccessRightsMask is null)
and (InheritedAccessRightsMask = 0 or InheritedAccessRightsMask is null)
and VersionNumber <= @versionNumberExpired',N'@versionNumberExpired bigint',@versionNumberExpired=404676

exec sp_executesql N'delete from SubscriptionTrackingDeletedObject
where TimeStamp <= convert(timestamp, @versionNumberExpired)
SELECT @@ROWCOUNT',N'@versionNumberExpired bigint',@versionNumberExpired=404676

exec sp_executesql N'
declare @now DateTime = getutcdate();

DELETE FROM [MetadataSyncTrackingDeletedObject]
WHERE datediff(dd, CreatedOn, @now) >= @expireSubscriptionInDays;

declare @rowCount int;
SELECT @rowCount = @@ROWCOUNT;

declare @columnExists bit = 0;
IF (EXISTS (SELECT * FROM sys.columns
            WHERE object_id = OBJECT_ID(N''OrganizationBase'')
            AND name = N''MetadataSyncLastTimeOfNeverExpiredDeletedObjects''))
BEGIN
    set @columnExists = 1;
END

SELECT @rowCount as [RowCount], @now as [Now], @columnExists as [ColumnExists];
',N'@expireSubscriptionInDays int',@expireSubscriptionInDays=90

exec sp_executesql N'
UPDATE OrganizationBase
SET MetadataSyncLastTimeOfNeverExpiredDeletedObjects = null
WHERE MetadataSyncLastTimeOfNeverExpiredDeletedObjects is not null
AND MetadataSyncLastTimeOfNeverExpiredDeletedObjects <= DATEADD(dd, -@expireSubscriptionInDays, @now)
',N'@now datetime,@expireSubscriptionInDays int',@now='2014-04-02 16:13:42.993',@expireSubscriptionInDays=90

exec sp_executesql N'
if exists (select * from WorkflowWaitSubscriptionBase (nolock) where IsModified = 1)
begin
    update AsyncOperationBase
    set StateCode = @readyState,
        StatusCode = @waitingStatus,
        ModifiedOn = @modifiedOn,
        ModifiedBy = CreatedBy
    where StateCode = @suspendedState
    and RetryCount < @maxRetries
    and AsyncOperationId in (select AsyncOperationId from WorkflowWaitSubscriptionBase where IsModified = 1)
end',N'@readyState int,@suspendedState int,@waitingStatus int,@maxRetries int,@modifiedOn datetime',@readyState=0,@suspendedState=1,@waitingStatus=0,@maxRetries=10,@modifiedOn='2014-04-02 16:13:43'

Reindex All executes the p_ReindexAll stored procedure below.

/***********************************************************************************************/
/* RETURN CODES:                                                                               */
/* 0 - Success                                                                                 */
/* 1 - Partial success - some indexes could not be rebuilt                                     */
/* 5 - Invalid input parameter(s)                                                              */
/***********************************************************************************************/
/*
Returns
a) Always first recordset - one row with 4 integer columns:
   ResultCode - see RETURN CODES
   TotalIndexesToRebuild - total count of indexes detected to be rebuilt
   RebuiltWithOnlineON - count of indexes rebuilt with option ONLINE = ON
   RebuiltWithOnlineOFF - count of indexes rebuilt with option ONLINE = OFF (can't be rebuilt with ONLINE = ON)
b) Always second recordset - see @errors table
c) Only when @Verbose=1, a third recordset with detailed info about all indexes
*/
BEGIN
SET NOCOUNT ON

DECLARE @TotalIndexesToRebuild int = 0,
        @RebuiltWithOnlineON int = 0,
        @RebuiltWithOnlineOFF int = 0

-- Get start time for max run time tracking
DECLARE @MaxTime datetime
SELECT @MaxTime = dateadd(ss, ISNULL(@MaxRunTime,0), GetUTCDate())

-- Account for nulls in parameters, set to default values
SET @FragRebuildPct = ISNULL(@FragRebuildPct, 30)
SET @AllIndexTypes = ISNULL(@AllIndexTypes, 0)
SET @Verbose = ISNULL(@Verbose, 0)
SET @MinPages = ISNULL(@MinPages, 25)
SET @Online = ISNULL(@Online, 1)

-- Validate parameters
IF ((@MaxRunTime <= 0) OR
    (@AllIndexTypes not in (0,1)) OR
    (@Verbose not in (0,1)) OR
    (@Online not in (0,1)) OR
    (@MinPages < 1) OR
    (@FragRebuildPct > 100) OR (@FragRebuildPct < 0))
BEGIN
    PRINT 'Invalid Parameter value. Valid values are:'
    PRINT 'MaxRunTime > 0,'
    PRINT 'MinPages > 0'
    PRINT 'FragRebuildPct in {NULL,0..100}'
    PRINT 'AllIndexTypes in {0,1}'
    PRINT 'Verbose in {0,1}'
    PRINT 'Online in {0,1}'
    SELECT 5 as ResultCode, @TotalIndexesToRebuild as TotalIndexesToRebuild, @RebuiltWithOnlineON as RebuiltWithOnlineON, @RebuiltWithOnlineOFF as RebuiltWithOnlineOFF
    RETURN 5
END

DECLARE @indexes table
(
    SchemaName sysname,
    TableName sysname,
    IndexName sysname,
    OldFrag int,
    NewFrag int null,
    processed bit
)

DECLARE @errors table
(
    Number int,
    Severity int,
    State int,
    --Message nvarchar(4000), -- can be found by select * from sys.messages m where message_id = Number and m.language_id = 1033
    OnlineOn bit,
    Statement NVarchar(2048)
)

INSERT INTO @indexes
SELECT schema_name(o.schema_id), object_name(s.object_id), i.name, s.avg_fragmentation_in_percent, null, 0
FROM sys.dm_db_index_physical_stats (db_id(),NULL,NULL,NULL,NULL) s
JOIN sys.objects o on (s.object_id = o.object_id)
JOIN sys.indexes i on (s.object_id = i.object_id and s.index_id = i.index_id)
WHERE
    s.avg_fragmentation_in_percent > @FragRebuildPct -- defrag only if more than x% fragmented
    and i.type in (1, @AllIndexTypes + 1) -- (1,2) -- cannot defrag non-indexes (0-heap, 1-clustered, 2-nonclustered, 3-xml)
    and s.page_count >= @MinPages -- select only if the index spans multiple pages
ORDER BY s.avg_fragmentation_in_percent desc

select @TotalIndexesToRebuild = @@rowcount

DECLARE @SchemaName sysname,
        @TableName sysname,
        @IndexName sysname,
        @sqlTemplate NVarchar(2048),
        @sql NVarchar(2048)
DECLARE @retry bit, @onlineON bit

DECLARE IndexCursor CURSOR LOCAL FOR
SELECT SchemaName, TableName, IndexName from @indexes order by OldFrag desc
OPEN IndexCursor
FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName
WHILE ((@@FETCH_STATUS = 0) AND ((GetUTCDate() < @MaxTime) OR (@MaxRunTime IS NULL)))
BEGIN
    select @sqlTemplate = 'ALTER INDEX ['+ @IndexName +'] '+
        'ON ['+@SchemaName+'].['+@TableName+'] REBUILD WITH '+
        '( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, SORT_IN_TEMPDB = OFF, ONLINE = '
    IF (@Online=1)
        select @sql = @sqlTemplate + 'ON )'
    ELSE
        select @sql = @sqlTemplate + 'OFF )'
    select @retry = 1, @onlineON = @Online
    while (@retry = 1)
    BEGIN
        BEGIN TRY
            IF (@Verbose=1) PRINT @sql
            EXEC (@sql)
            select @retry = 0
            IF (@onlineON=1)
                SELECT @RebuiltWithOnlineON = @RebuiltWithOnlineON + 1
            ELSE
                SELECT @RebuiltWithOnlineOFF = @RebuiltWithOnlineOFF + 1
        END TRY
        BEGIN CATCH
            insert into @errors
            select ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), @onlineON, @sql
            IF (@onlineON=1 and ERROR_NUMBER() = 2725)
            BEGIN
                -- Handle the possible exception below: rebuild index offline. Only SQL2012 has THROW
                --ErrorNumber ErrorMessage
                --2725 An online operation cannot be performed for index '?' because the index contains column '?' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.
                select @sql = @sqlTemplate + 'OFF )'
                select @onlineON = 0
            END
            ELSE
                select @retry = 0
        END CATCH
    END
    UPDATE @indexes SET processed=1 WHERE SchemaName=@SchemaName and TableName=@TableName and IndexName=@IndexName
    FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName
END
CLOSE IndexCursor
DEALLOCATE IndexCursor

IF (@Verbose=1)
BEGIN
    UPDATE @indexes
    SET NewFrag = avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats (db_id(),NULL,NULL,NULL,NULL) s
    JOIN sys.objects o on (s.object_id = o.object_id)
    JOIN sys.indexes i on (s.object_id = i.object_id and s.index_id = i.index_id)
    WHERE SchemaName = schema_name(o.schema_id)
    and TableName = object_name(s.object_id)
    and IndexName = i.name
END

DECLARE @ResultCode int
IF Exists(select * from @indexes where processed = 0)
BEGIN
    PRINT 'Did not process all indexes due to @MaxRunTime constraint'
    SELECT @ResultCode = 1
END
ELSE
BEGIN
    SELECT @ResultCode = 0
END

-- Return results
SELECT @ResultCode as ResultCode, @TotalIndexesToRebuild as TotalIndexesToRebuild, @RebuiltWithOnlineON as RebuiltWithOnlineON, @RebuiltWithOnlineOFF as RebuiltWithOnlineOFF
SELECT * from @errors
IF (@Verbose=1) SELECT * FROM @indexes order by OldFrag desc
RETURN @ResultCode

Clean Up Inactive Workflow Assemblies executes the p_CleanupInactiveWorkflowAssemblies stored procedure below:

DECLARE @ActivationsToDelete TABLE
(
    WorkflowId UNIQUEIDENTIFIER
)

INSERT @ActivationsToDelete
SELECT WorkflowId
FROM WorkflowBase
WHERE
    Type = 2 -- it's a workflow activation
    AND StateCode = 0 -- not active
    AND WorkflowId NOT IN
        (SELECT OwningExtensionId FROM AsyncOperationBase
         WHERE OperationType = 10
         AND OwningExtensionId IS NOT NULL)

DELETE FROM WorkflowDependencyBase WHERE WorkflowId IN
    (SELECT WorkflowId from @ActivationsToDelete)

DELETE FROM DependencyNodeBase WHERE
    ComponentType = 29 AND
    ObjectId IN (SELECT WorkflowId from @ActivationsToDelete)

DELETE FROM WorkflowBase WHERE WorkflowId IN
    (SELECT WorkflowId from @ActivationsToDelete)

Written by darrenliu

04/03/2014 at 4:31 am

Learned something new on IIS Compression


Dynamics CRM is a web app, so it leverages IIS. IIS compression is usually enabled by default, since we want to reduce both bandwidth usage and page load times. The .aspx responses sent by the server to the browser are compressed by algorithms such as gzip. IIS 7+ can not only cache compressed static files, it can also compress dynamic files, and modern browsers can decompress the incoming files.

IIS compression uses CPU to compress the files. I’ve learned that compression is automatically switched off when CPU usage rises above 90% for dynamic files and 100% for static files, and it automatically switches back on when CPU usage drops below 50% for both dynamic and static files. You can modify these limits in the applicationHost.config file, normally found in the C:\Windows\System32\inetsrv\config folder.
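These CPU thresholds map to four attributes on the httpCompression element in applicationHost.config. A sketch of the relevant fragment, shown with the default values described above (the directory and scheme lines are the stock IIS entries):

```xml
<system.webServer>
  <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                   dynamicCompressionDisableCpuUsage="90"
                   dynamicCompressionEnableCpuUsage="50"
                   staticCompressionDisableCpuUsage="100"
                   staticCompressionEnableCpuUsage="50">
    <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
  </httpCompression>
</system.webServer>
```

Raising the Disable thresholds keeps compression on during load tests, at the cost of extra CPU on the web servers.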

I want to bring this to everyone’s attention because during CRM performance testing we often push the server CPUs to their limits. Watch the CPU usage on your front-end servers to make sure they don’t hit the IIS compression thresholds, and you will get a more accurate performance test.

Written by darrenliu

03/26/2014 at 4:10 am

CRM 2011 – IE 7, 8, 9 Hangs


I ran into a problem today where CRM hangs in IE 7, 8, and 9 after 30 minutes because I have many XmlHttpRequest calls on the form.  I found a blog post on MSDN that helped me out, and I believe all of us as CRM developers should pay attention to it.  The title of the article is Why You Should Use XMLHttpRequest Asynchronously; it’s a good read.

Written by darrenliu

08/27/2012 at 3:28 pm

CRM 2011 Performance Tuning Lesson Learned


Oh well, when it comes to tuning the CRM 2011 application, there are many things we can do.  The project I have been working on for the past two years has finally finished performance tuning, and I’d like to write down a few things I learned from the tuning exercise in case I need to do it again in the future.  Thanks to my friend, Grant Geiszler, for sharing some of this information with me.

  1. Infrastructure
    • Reduce network latency; make sure the network routes to the servers are correct.
    • Make sure the SPNs are set up correctly between the servers and the different service accounts in use.
    • Make sure the IIS AppPool recycle time is adjusted.
    • Make sure the load balancer is set up correctly with the correct profile.
      • If using an F5 load balancer, please review the Deployment Guide provided by F5. The deployment guide is written for Microsoft Dynamics CRM 4.0, but most of the configuration applies to CRM 2011 as well. The profile that we used has the following settings.
        ltm profile tcp pr_irpc_tcp_120min {
        ack-on-push enabled
        bandwidth-delay disabled
        defaults-from tcp
        idle-timeout 7200
        nagle disabled
        proxy-buffer-high 131072
        proxy-buffer-low 98304
        receive-window-size 65535
        send-buffer-size 65535
        slow-start disabled
        }

  2. Customization/Coding Optimization
    • Reduce the number of REST/SOAP calls; consolidate client-side calls into a server-side plug-in.
    • If you are using the REST endpoint to retrieve related information, return only the fields that are needed.  Returning all of an entity’s fields drags down performance because of the response page size.  We have entities storing a lot of data, and the response size was up to 5MB; after optimizing to return only the fields we need, it dropped to a few KB.
    • Avoid registering too many synchronous plug-ins; convert plug-ins to asynchronous where possible.
  3. CRM Optimization
    • Enable authPersistNonNTLM.
    • Enable WCF compression.
    • Turn off the IM presence functionality if it is not needed by updating the IsPresenceEnabled field to false in the OrganizationBase table.
    • Add EnableRetrieveMultipleOptimization (value 2) to the registry under HKLM\Software\Microsoft\MSCRM.
    • Add MaxUserPort (value 65534) and TcpTimedWaitDelay (value 30) to the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
    • Monitor the AsyncOperationBase table, and schedule a periodic job to clean it up.  For more information on removing records from the AsyncOperationBase table and scheduling a cleanup job, refer to KB 968520.
    • Reschedule the CRM 2011 maintenance jobs. By default, CRM 2011 creates 6 maintenance jobs scheduled to run daily.  Depending on when the CRM organization was created, the jobs may run while users are in the system.  You can download the CRM 2011 Maintenance Job Editor tool to reschedule them; the premier team wrote a post on their blog about the tool and how to use it.
  4. SQL 2008 Optimization
    • Set the max degree of parallelism from 0 to 1. This disables parallel execution of simultaneous operations on the SQL Server.
    • Turn Read Committed Snapshot on.
    • Configure the TempDB drive using SQL best practices.
    • Validate the max/min memory configuration.
    • Validate that the processor configuration is set to normal mode, not fiber mode.
Anyway, I am sure there are many more optimizations you can do to the Outlook client, IE, and the client OS. I hope the tips above give you a head start.

Written by darrenliu

06/13/2012 at 11:58 pm

Posted in CRM, Performance


CRM 2011 Import: An existing connection was forcibly closed by the remote host


We ran into an issue when importing a solution into a few of our environments.  The import failed with an error message telling us that “An existing connection was forcibly closed by the remote host.”  We tried many different approaches to solve this problem: making the solution smaller, updating timeout values, updating registry keys, etc. The problem remained.  The pattern we saw was that the import failed and timed out at exactly 30 minutes; if the import finished in less than 30 minutes, everything worked.

After days of troubleshooting, we found that the CRM application server had been allocated only 1 CPU instead of 2.  We increased the number of CPUs to 2 and everything worked like a charm!  I guess that’s why the Implementation Guide (IG) recommends 2 CPUs at a minimum.

Issue:

Inner Exception: “Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.”

Solution:

Increase the number of CPUs on the application server to at least 2.

Written by darrenliu

02/29/2012 at 10:00 pm

Slow Internet Explorer 7 (IE7) Performance


I am working on a global Dynamics CRM 2011 project, and we are encountering performance issues on Internet Explorer (IE) 7.  There are several options to tackle this problem.

  • Option 1: If your customer is OK with upgrading all users to IE 8 or 9, they will certainly see performance improvements compared to IE 7.  However, this is not always an option, since enterprise customers move more slowly than smaller companies on their infrastructure and their users’ operating systems.
  • Option 2: Update two IE settings and add two registry values to the user’s machine.  I can’t believe what I am seeing: CRM 2011 performs extremely well after the changes.  Here’s what I did.
    1. Change the Internet Explorer settings
      • Navigate to Tools | General | Tabs (Settings).
      • Unselect Enable Quick Tabs.
      • Select Let Internet Explorer decide how pop-ups should open.
      • Click OK and then restart the browser.
    2. Add the following registry values (by default, IE allows only 2 connections per server)
      • Click Run | Regedit.
      • Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings.
      • Add a new DWORD MaxConnectionsPerServer with a decimal value of 10.
      • Add a new DWORD MaxConnectionsPer1_0Server with a decimal value of 10.
      • Close Regedit.
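If you need to roll this out to many machines, the same two values can be shipped as a .reg file instead of editing by hand. A sketch; the documented value names are MaxConnectionsPerServer and MaxConnectionsPer1_0Server, and dword:0000000a is decimal 10:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:0000000a
"MaxConnectionsPer1_0Server"=dword:0000000a
```

Double-clicking the file (or importing it via regedit) merges the values into the current user's hive; the browser must be restarted to pick them up.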

Thanks to my buddy, Jim Wang, for the great tips!

Written by darrenliu

02/22/2012 at 3:29 am