Double-Take Release Notes

Double-Take version, Wednesday, April 29, 2015

© 1996-2015 Vision Solutions, Inc. All rights reserved.

This readme file contains last-minute release notes that are not included in the user documentation.



This is an updated release that includes, but may not be limited to, the modifications listed in this section.

  1. Full server to Hyper-V—New options for this job type include the ability to split multiple source volumes among multiple volumes on the target, the option to map source IP addresses to specific target VLANs, and if the source is a generation 2 virtual machine, the option to select a generation 1 or generation 2 replica virtual machine.
  2. Test failover—You can now specify alternate network adapters to use during a test failover for full server to ESX and full server to Hyper-V jobs.
  3. SQL—SQL jobs mirrored to a clustered target now support Double-Take snapshots, including the ability to failover to a snapshot.

Installation and upgrade

  1. This release may not include all product changes provided as a limited release. Use the following lists to determine the previous versions that have been included in this release.
  2. If you are performing a push installation from the Double-Take Console, and the servers you are pushing to do not have a C drive, make sure you update the installation folder fields on the Install page. The Double-Take Console does not validate these fields, so if they point to a volume that does not exist, the installation will not start. This issue may be resolved in a future release.
  3. If you are upgrading Double-Take Availability for vSphere from version 7.0.0, you should change the network adapter used by your controller and replication appliances after you have upgraded to improve performance. Use the following instructions to change the network adapter.
    1. Upgrade your controller and replication appliances as outlined in the Double-Take Availability for vSphere User's Guide.
    2. After the upgrades are complete, make sure the Double-Take Console is closed.
    3. Power off all controller and replication appliances.
    4. From your vSphere client, edit the settings for each appliance to remove the E1000 network adapter and add a VMXNET3 network adapter. See your vSphere documentation for details. Be sure to do this step on each appliance.
    5. Power on each appliance.
    6. Once the appliances are powered on, confirm the network settings on each appliance by selecting Configure Network and option 0. If the network settings are correct, restart your jobs in the Double-Take Console. If the network settings are not correct, modify them as needed using the Configure Network options, reboot the appliance, and then restart your jobs in the Double-Take Console.
  4. Version 7.1 includes many improvements to the full server to ESX appliance job; however, these improvements make it incompatible with previous versions. If you are using a version 7.1 Double-Take Console, you can only monitor, failover, start, or stop full server to ESX appliance jobs that were created in version 7.0. If you want to edit such a job or create a new job on servers and appliances running version 7.0, you must use a version 7.0 Double-Take Console.
  5. The Chinese and English versions of Double-Take do not have compatible upgrades at this time. If you want to use a different language version, you must uninstall your current version. Same language upgrades are supported.
  6. A full server to ESX appliance job will fail to start when upgrading to version 7.1 if your original job was created using a flat disk. To work around this issue, see knowledgebase article 45995 on the support site.
  7. Because of version incompatibilities and architectural differences between Double-Take RecoverNow version 5.3 and Double-Take DR version 7.1, there is no automated upgrade from a Double-Take RecoverNow job to a Double-Take DR job. You will need to create new protection jobs using Double-Take DR version 7.1 or convert your existing RecoverNow jobs to DR jobs. Contact CustomerCare for more information about converting existing Double-Take RecoverNow jobs to Double-Take DR jobs.



Licensing

  1. If you are using version 6.0 or earlier, you must upgrade any existing Double-Take activation codes (now called license keys) using the following process.
    1. Log in to the support site at
    2. Select Agreements.
    3. Select Request License Keys (7.1).
    4. Select the licenses you would like to upgrade and use for this release and click Submit.
    5. You will receive an email from with the subject Upgraded Double-Take License Keys. This email includes your upgraded keys in an XML file along with instructions for how to import the keys. For complete details on importing a license file, see the User's Guide or online help.
  2. This release requires activation of each Double-Take license. If you do not activate each license within the required time frame, your Double-Take jobs will fail.

    For complete details on license activation, see the Double-Take Installation, Licensing, and Activation document or the console online help.


Common issues

The following issues may be common to multiple Double-Take job types.

  1. If you have specified an IP address as the source server name, but that IP address is not the server's primary IP address, you will have issues with snapshot functionality. If you need to use snapshots, use the source's primary IP address or its name.
  2. Only one primary IPv6 address can be monitored for failover when using the replication service monitoring method. Therefore, if you have multiple IPv6 addresses per subnet, failover monitoring may not work properly with the replication service monitoring method. To ensure proper failover monitoring, use IPv4 addresses or use only a single IPv6 address per subnet with the replication service monitoring method. You can also use the network service monitoring method with any IPv4 or IPv6 configuration.
  3. If you are using Windows 2008 R2, virtual hard disks can be mounted and dismounted reusing the same drive letter. However, once you have established a job, you cannot mount a different virtual hard disk to the same drive letter used in your job. This could cause errors, orphan files, or possibly data corruption. If you must change drive letters associated with a virtual hard disk, delete the job, change the mounting, and then re-create the job. This issue may be resolved in a future release.
  4. If you are using SQL to create snapshots of your SQL database, the Double-Take Availability verification report will report the file size of the snapshot files on the source and target as different. This is a reporting issue only. The snapshot file is mirrored and replicated completely to the target. This issue may be resolved in a later release.
  5. If you are using HP StorageWorks File Migration Agent, migrated files will incorrectly report modified time stamp differences in the verification report. This is a reporting issue only. This issue may be resolved in a future release.
  6. During the job creation process, the Double-Take Console may select the wrong route to the target on the Set Options page. Make sure that you confirm the route selected is reachable from your source. This issue may be resolved in a future release.
  7. If you are performing DNS failover but your source and target are in a workgroup, the DNS suffix must be specified for the source NICs and that suffix must correspond to the zone name on the DNS server. This issue may be resolved in a future release.
  8. In a cluster configuration, if you add a possible owning node to the protected network name after a job has started, you must stop and restart the job. If you do not, the records for the new node will not be locked. This could cause problems with DNS records if the source cluster nodes are rebooted or the resources are otherwise cycled on the new owning node. This issue may be resolved in a future release.
  9. If you are protecting a Hyper-V source and you select an existing Hyper-V server to use as the target, the Hyper-V Integration Services on the target must be version 2008 SP2 or greater. Without this version, the target may not start after failover. This limitation may be addressed in a future release.
  10. Because 64-bit Windows has a strict driver signing policy, if you get a stop code 0x7b after failover, you may have drivers failing to load because their signatures are failing the policy. In this case, reboot the server and press F8, then choose the option to not enforce the driver signing policy. If this allows the system to boot, then the problem is being caused by a catalog (.cat) file signature mismatch. This issue may be resolved in a future release. If your system still fails to boot, contact technical support.
  11. If you receive a path transformation error during job validation indicating a volume does not exist on the target server, even though there is no corresponding data being protected on the source, you will need to manually modify your replication rules. Go back to the Choose Data page and under the Replication Rules, locate the volume from the error message. Remove any rules associated with that volume. Complete the rest of the workflow and the validation should pass. This issue may be resolved in a future release.
  12. If you have specified replication rules that exclude a volume at the root, that volume will be incorrectly added as an inclusion if you edit the job after it has been established. If you need to edit your job, modify the replication rules to make sure they include the proper inclusion and exclusion rules that you want. This issue may be resolved in a future release.
  13. If you are using Double-Take over a WAN and do not have DNS name resolution, you will need to add the host names to the local host file on each server running Double-Take. See your Windows documentation for instructions on adding entries to host files.
  14. If you are running the Double-Take Console on a Windows XP machine and are inserting a Windows 2012 cluster into the console, you must use the IPv4 address to insert the cluster. The console will be unable to connect to the Double-Take Management Service on a Windows 2012 cluster if it is inserted using the name or fully-qualified domain name.
  15. If you are protecting a CSV volume on a Windows 2012 server, you may see event 16400 in the system log during a file rename operation. This error does not indicate a problem with replication or with data integrity on the target and can be ignored. This issue may be resolved in a future release.
  16. Service monitoring has been expanded to files and folders jobs; however, you must make sure that you are using the latest console. Older consoles will not be able to update service monitoring for upgraded servers, even for application jobs that had the service monitoring feature previously.
  17. If you have Double-Take encryption enabled on a server, any Double-Take bandwidth limiting settings will be disregarded for any jobs on that server. You will still see the bandwidth limiting settings, be able to change the settings, and see the changes persisted, but the settings will be ignored and the encrypted data will be sent at unlimited bandwidth. The bandwidth settings that are configured will be used if you disable encryption. This issue may be addressed in a future release.
  18. If you are using Windows 2003, you will be unable to open the Double-Take Console context-sensitive help using Internet Explorer. You can access the Double-Take online documentation from the console Help menu using Windows 2003 and Internet Explorer. The help and online documentation are both available in all other Windows versions and in any Windows version using other major browsers.
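Item 13 above suggests host-file entries when DNS name resolution is unavailable over a WAN. Name resolution can be sanity-checked on each server before starting Double-Take; a minimal sketch in Python (the names you would actually test are your own server names):

```python
import socket

def can_resolve(name):
    """Return True if 'name' resolves via DNS or the local hosts file
    (%SystemRoot%\\System32\\drivers\\etc\\hosts on Windows, /etc/hosts on Linux)."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves on any machine with a standard hosts file;
# a name your servers cannot resolve indicates a missing hosts entry.
print(can_resolve("localhost"))
```

If a source or target server name prints False, add an address-to-name entry for it in the hosts file on each server running Double-Take.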
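Item 2 above warns that multiple IPv6 addresses in the same subnet can break failover monitoring with the replication service monitoring method. One way to spot that condition, sketched in Python with hypothetical addresses and an assumed /64 prefix length (use your network's actual prefix):

```python
import ipaddress

# Hypothetical monitored addresses; the /64 prefix length is an assumption.
addresses = ["2001:db8:0:1::10", "2001:db8:0:1::11", "2001:db8:0:2::10"]

# Group the addresses by the subnet they fall into.
subnets = {}
for addr in addresses:
    net = ipaddress.ip_interface(addr + "/64").network
    subnets.setdefault(net, []).append(addr)

# Any subnet holding more than one address is a candidate problem.
for net, members in sorted(subnets.items(), key=lambda kv: str(kv[0])):
    if len(members) > 1:
        print(f"{net}: {len(members)} addresses -- replication service monitoring may not work properly")
```

Subnets flagged by a check like this should be reduced to a single IPv6 address, or monitored with the network service monitoring method instead.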


Full server and full server migration jobs

The following issues are for full server and full server migration jobs.

  1. Right before failover occurs, Double-Take will stop all services that are not critical to Windows. If the stop command fails (perhaps because a blocking driver cannot be shut down, as is the case with some anti-virus software) or a third-party tool restarts any of these services, Double-Take may not be able to successfully fail over files locked by the services. In this case, you may have to make manual modifications to your server after failover.
  2. After a failover is complete and your target server is online as your source, you may have to specify a Windows reboot reason when you log in. You can specify any reason and continue. Additionally, you may see a prompt indicating a reboot is required because of a device change. You can disregard this prompt and select to reboot later.
  3. When a full server protection job is configured for reverse, the backup job may become orphaned if the initial backup has completed and been manually updated, and the target server is then restarted. In this specific case, you will need to contact technical support for a workaround to use the backup job during a reverse. This issue may be resolved in a future release.
  4. If you pause a full server job and then stop the job from the paused state, you will be unable to restart the job. You will have to delete the job and re-create it. This issue may be resolved in a future release.
  5. If you have reverse enabled, are updating your target image, and the Double-Take service on the target restarts (either manually or automatically, for example the target server restarts), you should restart your target image update after the Double-Take service is back online. This will correct any incorrect status displayed in the console and ensure the target image is complete.


Exchange and SQL jobs

The following issues are for Exchange and SQL jobs.

  1. You may receive an error when trying to failback to the source cluster if the application’s virtual IP address is offline. Verify the source cluster’s virtual name and IP address resources are online, as well as the application’s virtual IP address resource, and retry failback.
  2. The following issues are specific to Microsoft Exchange protection.
    1. Do not failover an Exchange job with a version 6.0 source and a version 7.0.x target. You should upgrade your source before failover.
    2. The CatalogData directory will not be mirrored to the target, but will be regenerated automatically on the target after failover. Searching through messages after failover may be slower until the catalog data is rebuilt.
    3. If you are protecting Exchange 2010, the Fix All option may report validation errors if a public folder store already existed on the target before a clone but there is no corresponding public folder store on the source. You can disregard the errors. Subsequent fixes and validations will be successful.
    4. If you are protecting Exchange 2010 in a DAG environment and a mailbox store fails to come online after failback, you will need to manually mount the store. If that does not work, contact technical support for assistance.
    5. If you are protecting Exchange 2010 DAG with public folders and have made updates to the public folders on the target after failover, there may be a delay in seeing those updates after failback. This is because in a DAG configuration the default public folder store does not move when mailbox databases are moved between nodes. A restore from the target will be made to the node where the DAG is mounted, which may or may not contain the default public folder store. Therefore, after failback, updates to public folders may not be available until public folder replication occurs.


Full server to ESX/Hyper-V and V to ESX/Hyper-V jobs

The following issues are for full server to ESX, full server to Hyper-V, V to ESX, and V to Hyper-V jobs.

  1. If you are using Windows 2008 R2 Service Pack 1 in a cluster environment, SCVMM may incorrectly report your virtual machine as missing after the Double-Take Availability reverse process. To work around this issue, remove the virtual machine from SCVMM and it will automatically be added back in the proper state. This issue may be resolved in a future release.
  2. If you have failed over a Gen 2 virtual machine with an attached DVD drive from a Windows 2012 R2 source host to a Windows 2012 R2 target host using a V to Hyper-V job, the reverse process will fail. To work around this problem, remove the DVD drive from the original source and then stop and restart the job. This issue may be resolved in a future release.


Linux jobs

The following issues are for Linux files and folders and full server to ESX appliance jobs.

  1. Make sure you are using the appropriate client for your Linux job type. Full server to ESX appliance jobs must only use the Double-Take Console; do not use the Replication Console to try to control a full server to ESX appliance job created in the Double-Take Console. The reverse is also true: Linux files and folders jobs must only use the Replication Console.
  2. If you have hard links in your Linux files and folders replication set, note the following caveats.
  3. Two replication operations are sent when a file is closed. This may have a negative effect on performance. This issue may be resolved in a future release.
  4. If your Linux files and folders replication set contains exclude rules for specific files, the replication set generated during the restoration process will not contain those same exclude rules. Therefore, if those excluded files exist on the target, they may be restored, potentially overwriting the files on the source. This issue may be resolved in a future release.
  5. For Linux files and folders jobs, if you specify NoMoveAddresses with the MonitorOption DTCL command, the addresses will still be moved after failover. This issue may be resolved in a later release.
  6. For Linux files and folders jobs, when you schedule start criteria for transmission, you may see the transmission status in an error state at the scheduled start. The transmission will still continue as expected. This may be resolved in a later release.
  7. For Linux files and folders jobs, when you schedule a verification process, it may run a verification report when you save the scheduled verification settings. The scheduled verification will still process as expected. This may be resolved in a later release.
  8. For Linux files and folders jobs, the ability to stop transmission based on a specified byte limit is currently not functional. This issue may be resolved in a later release.
  9. Sparse files will become full-size, zero-filled files on the target.
  10. If you are using Logical Volume Manager, you can only reuse existing disks when creating a new full server to ESX appliance job if the existing disks were created using Double-Take version 7.1 or later. Versions prior to 7.1 delete important LVM information when the job is deleted, so you cannot reuse the disk for a future job. If you are not using LVM, this is not an issue.
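For context on item 9 above: a sparse file reports its full logical size even though its unwritten holes consume little or no disk space, so a zero-filled copy on the target occupies the entire logical size on disk. A minimal illustration in Python (the 10 MB size is arbitrary):

```python
import os
import tempfile

# Create a 10 MB sparse-style file by seeking past a hole and writing one byte.
path = os.path.join(tempfile.mkdtemp(), "sparse.dat")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)
    f.write(b"\x00")

# The logical size is 10 MB even though almost no data was written;
# a fully zero-filled replica of this file would consume all 10 MB on disk.
logical_size = os.path.getsize(path)
print(logical_size)
```

Plan target capacity around the logical size of any sparse files in a Linux job, not their on-disk usage on the source.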


Agentless jobs

The following issues are for agentless Hyper-V and agentless vSphere jobs.

  1. The new DNS update functionality added to agentless Hyper-V jobs in version 7.0 will not be available if you have upgraded your job from version 6.0. If you want to use the new DNS update functionality, you will need to delete your job and create a new one.
  2. If your agentless vSphere job is in the middle of replication and the source replication appliance or its host is rebooted, the job will end up stopped. You will need to manually restart the job. This issue may be resolved in a future release.
  3. Agentless vSphere jobs do not support NFS shares and products that use NFS shares, such as NetApp. This limitation may be resolved in a future release.


Contact information

This documentation is subject to the following: (1) Change without notice; (2) Furnished pursuant to a license agreement; (3) Proprietary to the respective owner; (4) Not to be copied or reproduced unless authorized pursuant to the license agreement; (5) Provided without any expressed or implied warranties; (6) Does not entitle Licensee, End User or any other party to the source code or source code documentation of anything within the documentation or otherwise provided that is proprietary to Vision Solutions, Inc.; and (7) All Open Source and Third-Party Components (“OSTPC”) are provided “AS IS” pursuant to that OSTPC’s license agreement and disclaimers of warranties and liability.

Vision Solutions, Inc. and/or its affiliates and subsidiaries in the United States and/or other countries own/hold rights to certain trademarks, registered trademarks, and logos. Hyper-V and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds. vSphere is a registered trademark of VMware. All other trademarks are the property of their respective companies. For a complete list of trademarks registered to other companies, please visit that company’s website.

© 2015 Vision Solutions, Inc. All rights reserved.