Batch migrations

To keep our migrations under control I’ve been relying heavily on batched migration runs. The process I’ve followed uses the ‘SuspendWhenReadyToComplete’ option, which performs the normal migration, copying the entire contents of users’ mailboxes, but – crucially – stops the process just before the final stage where the changes are committed to the directory.

The really great thing about doing things this way is that, almost as an incidental benefit, the vast majority of corrupt mailbox content is revealed through this exercise: those mailboxes fail to reach the ‘AutoSuspended’ stage. Yet getting to that point doesn’t impact end users at all: they remain active and oblivious on their old server all the way through the job.

I used this feature to benchmark the migration process for one department and get a feel for how long it might take on a larger scale. When the final stage was ready we could simply resume the task via an overnight scheduled job. The momentary interruption while the mailbox is locked, for that final ‘commit’ phase, shouldn’t affect anyone. Our tests showed that even users actively accessing their mailboxes during the move weren’t adversely affected. Well, except for Mac users… More on that later.

Initially we had planned to use this auto-suspend capability to migrate the entire user base of 50,000 mailboxes in one go, but a lack of documentation from anyone else having tried it at that scale caused some raised eyebrows. The compromise required a rethink on mailbox distribution and some careful tweaking of circular logging. I used circular logging to keep log files manageable during the phase where mailboxes get copied. This was followed by a full backup, then circular logging was switched off before finally committing the changes. Although this added extra steps, it did ensure that our backups were able to cope with the extra content without contending with the vast numbers of log files that would otherwise have been generated by the mailbox moves.
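As a rough sketch of that logging switch (the database names here are made up), circular logging can be toggled with Set-MailboxDatabase; note that on standalone (non-DAG) databases the change only takes effect after a dismount/remount:

```powershell
# Hypothetical database names - substitute your own migration databases.
$targetDatabases = 'MDB-MIG-01', 'MDB-MIG-02'

# Enable circular logging before the bulk copy phase...
$targetDatabases | ForEach-Object {
    Set-MailboxDatabase $_ -CircularLoggingEnabled $true
}

# ...and, after the full backup, disable it again before committing the moves:
$targetDatabases | ForEach-Object {
    Set-MailboxDatabase $_ -CircularLoggingEnabled $false
}
```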

There were several distinct phases to our migration which, due to our scale, were to be repeated across twenty separate migration runs. Each one went through these steps:

  1. Export from the GAL a CSV file containing the aliases of the mailboxes to be upgraded. I included primary SMTP address and department data so that a version of this same file could be used for a mail merge in the next step.
  2. Notify users of the upgrade – one week prior to the planned date.
  3. Create the move requests and launch them, using the 'SuspendWhenReadyToComplete' option.
  4. Turn off circular logging on the destination databases and back them up.
  5. Schedule an overnight ‘resume’ of the autosuspended mailboxes to complete the migration.
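Step 1 can be sketched as a one-liner along these lines (the department filter is a hypothetical example – scope it to whichever group is next in line; the TargetDB column used later would be added to the file afterwards):

```powershell
# Export alias, primary SMTP address and department for the next batch.
Get-Mailbox -ResultSize Unlimited -Filter { Department -eq 'Chemistry' } |
    Select-Object Alias, PrimarySmtpAddress, Department |
    Export-Csv -Path 'c:\MIGRATION.CSV' -NoTypeInformation
```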

I came up with a bit of PowerShell (later optimised a little further by a colleague, who added logging and the setting of file attributes) to allow the bulk of the mailbox copying to take place without colleagues needing to remember the exact syntax of the command. It relies on you having a prepared CSV file with at least the following columns in it:

  * Alias – the alias of each mailbox to be moved
  * TargetDB – the destination database for that mailbox

The script assumes you’ll have saved the file as ‘c:\MIGRATION.CSV’.
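A minimal example of what that file might contain (the aliases and database names here are invented):

```
Alias,TargetDB
jsmith,MDB-MIG-01
ajones,MDB-MIG-02
```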

All you need to do then is run the following as a PS1 file, remembering to include a name for the batch at the end:

param(
    $batchName = $(throw 'Please specify a batch name.'),
    $migrationCsv = 'c:\migration.csv'
)

## Capture the batch name in a file (for the other scripts):
$batchNameFile = 'c:\batchname.txt'
# If it already exists, make it writable
if (Test-Path $batchNameFile) {
    Set-ItemProperty $batchNameFile -Name IsReadOnly -Value $false
}
# Overwrite the file (if it exists) so the batch name is all it contains:
$batchName > $batchNameFile
# Make it read-only
Set-ItemProperty $batchNameFile -Name IsReadOnly -Value $true

# Load snap-in to support use of Exchange commands:
Add-PSSnapin Microsoft.Exchange.Management.Powershell.E2010 -ErrorAction SilentlyContinue
Import-Csv $migrationCsv | ForEach-Object {
    Get-Mailbox $_.alias |
        New-MoveRequest -SuspendWhenReadyToComplete -BatchName $batchName -TargetDatabase $_.targetdb
}
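With the CSV in place, kicking off a batch is then just a matter of invoking the script with a name (the script filename here is a hypothetical example):

```powershell
.\Start-BatchMigration.ps1 'Batch-10'
```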

The job of completing these part-finished moves was left for an overnight scheduled task. To ensure that it only completed the moves for that night’s batch of users it would use the batch name that was created earlier:

# Load snap-in to support use of Exchange commands:
Add-PSSnapin Microsoft.Exchange.Management.Powershell.E2010 -ErrorAction SilentlyContinue
# Get the batch name
$batchNameFile = 'c:\batchname.txt'
$batchName = Get-Content $batchNameFile
$TIMESTAMP_SUFFIX = '{0:dd-MMM-yyyy-HHmm}' -f (Get-Date)
# Braces around the variable names stop PowerShell treating the underscores as part of them:
$logFile = "C:\PS_LOGS\commit_${batchName}_${TIMESTAMP_SUFFIX}.txt"

"This script started executing at {0:G}." -f (Get-Date) >> $logFile
"About to start processing the commits for batch: '$batchName'." >> $logFile
## Resume and commit
Get-MoveRequest -ResultSize Unlimited -MoveStatus 'AutoSuspended' -BatchName $batchName | Resume-MoveRequest
# Resume any other suspended moves associated with this batch name:
Get-MoveRequest -ResultSize Unlimited -MoveStatus 'Suspended' -BatchName $batchName | Resume-MoveRequest

'(Check the other logs and e-mails for precise timings & statistics.)' >> $logFile
"This script exited at {0:G}." -f (Get-Date) >> $logFile

That took care of the heavy lifting but obviously I wanted to know what had happened when I arrived the next day, so yet another bit of scheduled PowerShell ran the following command and emailed me the output:

Get-MoveRequest -BatchName $batchName -MoveStatus Completed | Get-MoveRequestStatistics | Format-Table Alias, TotalItemSize, TotalMailboxItemCount, PercentComplete, BytesTransferred, ItemsTransferred -AutoSize

In fact I ran several variations on that, with different status values, so I’d also be told about failed migrations and anything that was still suspended.
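The emailing step itself isn’t shown above; a minimal sketch, assuming an internal SMTP relay and addresses of your own (all the names below are placeholders), would be something like:

```powershell
# Hypothetical relay and addresses - substitute your own.
$report = Get-MoveRequest -BatchName $batchName -MoveStatus Completed |
    Get-MoveRequestStatistics |
    Format-Table Alias, TotalItemSize, PercentComplete -AutoSize |
    Out-String

Send-MailMessage -SmtpServer 'smtp.example.ac.uk' `
    -From 'exchange-migration@example.ac.uk' `
    -To 'admin@example.ac.uk' `
    -Subject "Overnight migration report: $batchName" `
    -Body $report
```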

Now this is all well and good, but we’ve done nine nights of mass migrations, as well as several early-adopter and test runs, so after a while it’s easy to lose track of all the batch names you’ve used. The problem is exacerbated because the graphical interface doesn’t even show them.

Luckily there’s another bit of PowerShell which can reveal what batch names are still lurking on your system:

Get-MoveRequest -ResultSize Unlimited | Sort-Object -Property BatchName | Select-Object BatchName | Get-Unique -AsString
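If you also want to see how many move requests each of those batches still holds, a variation along these lines should do it:

```powershell
# Count the outstanding move requests per batch name:
Get-MoveRequest -ResultSize Unlimited |
    Group-Object BatchName |
    Sort-Object Name |
    Format-Table Name, Count -AutoSize
```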

We’re about to cross the halfway milestone – we’ll have migrated approximately 25,000 mailboxes at some point in the early hours of tomorrow morning – so from that moment over half of the university will be running on Exchange 2010.


4 Responses to “Batch migrations”

  1. Andor says:

    Dear Matthew,

    It has been more than 6 months since your post and the mention that you have almost crossed the halfway-stage.

    I am currently assessing your script and feel that I can definitely put it to good use; I especially appreciate the SuspendWhenReadyToComplete addition. Because I’m about to use it to start migrating, I thought to mail you first, asking if any changes have been made to scripts used, what they might be /are and what your key findings were during the (massive) migration.

    Any chance you can share your thoughts and feelings?

    Would love to receive any relevant information (scripts and so on) if you are happy to share.
    Let me know if you need some contact-details.


  2. Andor says:

    And one more thing I was curious about: “How did you manage/control the TargetDB variable?”

    The generated *.Csv file (during ‘step 1’) shows the variable, but you don’t describe the logic behind it. In other words, how did you distribute the 50k mailboxes/objects logically and evenly?


  3. Andor says:

    Hi Matthew,

    Not sure why, but it seems that my previous remarks made aren’t here anymore. I do hope you did get them. With regards to the $_.TargetDB I have already found the (obviously simple) solution and I will be making use of the Automatic Mailbox Distribution mechanism, in combination with the -IsSuspendedFromProvisioning option to get some load-balancing here.

    Did you receive and read my previous remarks?

    Would still love to understand the full process.


  4. Matthew Gaskin says:

    Hi Andor,

    The scripts were used pretty much exactly as they are here and didn’t need any further changes. I chose to assign users to a particular set of databases during the migration so I didn’t use the option to randomly assign a destination DB. But, if you choose to allow the system to pick a destination database for you, do remember that it will pick from _any_ available Exchange 2010 database.
    We were using circular logging on not-yet-populated databases (but of course changing that immediately prior to the migration). If a user was placed in one of those databases you could not recover their data beyond the time of your last backup. For that reason you may want to consider assigning databases manually within the CSV file, as I did.
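    As a sketch of the manual assignment approach mentioned above (the database names and output path are hypothetical), a round-robin pass over the exported CSV might look like:

```powershell
# Hypothetical database names - substitute your own migration databases.
$databases = 'MDB-MIG-01', 'MDB-MIG-02', 'MDB-MIG-03'
$i = 0

Import-Csv 'c:\MIGRATION.CSV' |
    ForEach-Object {
        # Add a TargetDB column, cycling through the databases in turn:
        $row = $_ | Add-Member -MemberType NoteProperty -Name TargetDB `
            -Value $databases[$i % $databases.Count] -PassThru
        $i++
        $row
    } |
    Export-Csv 'c:\MIGRATION-WITH-DB.CSV' -NoTypeInformation
```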