Double-Take Release Notes

Double-Take version 7.0.1, Tuesday, April 29, 2014

© 1996-2013 Vision Solutions, Inc. All rights reserved.

Check the Vision Solutions support web site for the most up-to-date version of this readme file.

This readme file contains last-minute release notes that are not included in the user documentation. The user documentation is available on the Vision Solutions support web site, the product CD, and in the installation directory after you have installed the software.

Overview

  1. This is an updated release that includes, but may not be limited to, the modifications listed in this section. Items marked with an asterisk (*) indicate the latest updates.
  2. If you are using Chrome version 30 or later or Opera version 11 or later, you will be unable to view the HTML version of the Double-Take documentation that is installed with the software because these browser versions limit the ability to view locally installed files. To view the installed HTML documentation files, use an earlier version of these browsers or a different browser. If you must use one of these browsers despite this local file limitation, start the browser from a command prompt using the --allow-file-access-from-files parameter, and once the browser is open, enter the full path and name of the HTML document you want to view. This limitation applies only when viewing files locally; it does not apply when viewing the Double-Take documentation on the Vision Solutions support site.
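
    For example, on a typical Windows system you might start Chrome from a command prompt as follows. The Chrome installation path shown here is an assumption; adjust it to match your system.

    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files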

Installation and upgrade

  1. This release may not include all product changes provided as a limited release. Use the following lists to determine the previous versions that have been included in this release.
  2. If you are performing a push installation from the Double-Take Console and the servers you are pushing to do not have a C drive, make sure you update the installation folder fields on the Install page. The Double-Take Console will not warn you if those fields are set to a volume that does not exist on the target server, and in that case the installation will not start. This issue may be resolved in a future release.
  3. If you are upgrading an existing job from version 5.3, you should configure manual intervention for failover before you upgrade. Any jobs that are set to automatic failover will have an Engine Connection Error after the upgrade. If you did not configure manual intervention and you get the engine error, you will have to start a mirror (Other Job Actions, Mirror, Start) or stop and restart the job.
  4. If you will be protecting a Linux source and you previously had Double-Take for Linux version 4.7, use the following steps to update the Double-Take driver on the Linux source to the new driver included in this release.
    1. Shut down all of your protected applications on your Linux source.
    2. Save your DTFS configuration by moving /etc/DT/dtfs_mounts to another location, outside of /etc/DT (see the sample command after these steps).
    3. If you were using full server protection with version 4.7, you will need to remove it using DTSetup (Setup tasks, Configure File System or Block Device Replication, Full Server Replication Configuration, Remove Full Server Protection).
    4. Stop the Double-Take service and driver using DTSetup (Start/Stop Double-Take daemon, Stop the running service and teardown driver config).
    5. If you were using full server protection with version 4.7, reboot the Linux source server to unload the old driver.
    6. Upgrade Double-Take on your Linux source using the installation instructions in the User's Guide.
    7. Restart Double-Take on your Linux source.
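
    For example, step 2 might be performed as follows. The backup destination /root/dtfs_mounts.bak is an assumption; any location outside of /etc/DT will work.

        mv /etc/DT/dtfs_mounts /root/dtfs_mounts.bak
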
  5. Do not upgrade Double-Take on servers that have version 5.3 full server to ESX, full server to Hyper-V, V to ESX, or V to Hyper-V jobs on them. You must delete any of these 5.3 job types and re-create the job after you have upgraded Double-Take.
  6. If you have a version 5.3 Exchange cluster to standalone job, you must stop the job before upgrading Double-Take on the target. You can restart the job after the target upgrade is complete.
  7. If you are upgrading Double-Take Availability for vSphere from version 7.0.0, you should change the network adapter used in your controller and replication appliances after you have upgraded in order to improve performance. Use the following instructions to change the network adapter.
    1. Upgrade your controller and replication appliances as outlined in the Double-Take Availability for vSphere User's Guide.
    2. After the upgrades are complete, make sure the Double-Take Console is closed.
    3. Power off all controller and replication appliances.
    4. From your vSphere client, edit the settings for each appliance to remove the E1000 network adapter and add a VMXNET3 network adapter. See your vSphere documentation for details. Be sure to do this step on each appliance.
    5. Power on each appliance.
    6. Once the appliances are powered on, confirm the network settings on each appliance by selecting Configure Network and option 0. If the network settings are correct, restart your jobs in the Double-Take Console. If the network settings are not correct, modify them as needed using the Configure Network options, reboot the appliance, and then restart your jobs in the Double-Take Console.
  8. If you want to upgrade the Double-Take Console from version 7.0.0 using the setup files available on the Linux virtual recovery appliance, you should delete the file /opt/dbtk/share/installers/windows/dbtk_se_install.exe before you upgrade the appliance. If you have already upgraded the appliance, you will need to create a symbolic link (using appropriate administrative rights) called dbtk_se_install.exe in /opt/dbtk/share/installers/windows/ that has a target of the versioned executable in that same directory, as in the following example.

    rm /opt/dbtk/share/installers/windows/dbtk_se_install.exe

    ln -s /opt/dbtk/share/installers/windows/DoubleTake_7.0.1.2622.0.exe /opt/dbtk/share/installers/windows/dbtk_se_install.exe

Licensing

  1. If you are using version 6.0 or earlier, you must upgrade any existing Double-Take activation codes using the following process.
    1. Log in to the support site at http://www.VisionSolutions.com/SupportCentral.
    2. Select Agreements.
    3. Select Request Activation Codes (7.0).
    4. Select the licenses you would like to upgrade and use for this release and click Submit.
    5. You will receive an email from CSRGroup@visionsolutions.com with the subject Upgraded Double-Take Activation Codes. This email includes your upgraded codes in an XML file along with instructions for how to import the codes. For complete details on importing a license file, see the User's Guide or online help.
  2. This release requires activation of each Double-Take license. If you do not activate each license within the required time frame, your Double-Take jobs will fail.

    For complete details on license activation, see the User's Guide or online help.

Common job issues

The following issues may be common to multiple job types within Double-Take Availability and/or Double-Take Move.

  1. If you have specified an IP address as the source server name, but that IP address is not the server's primary IP address, you will have issues with snapshot functionality. If you need to use snapshots, use the source's primary IP address or its name.
  2. Only one primary IPv6 address can be monitored for failover when using the replication service monitoring method. Therefore, if you have multiple IPv6 addresses per subnet, failover monitoring may not work properly with the replication service monitoring method. To ensure proper failover monitoring, use IPv4 addresses or use only a single IPv6 address per subnet with the replication service monitoring method. You can also use the network service monitoring method with any IPv4 or IPv6 configuration.
  3. If you are using Windows 2008 R2, virtual hard disks can be mounted and dismounted reusing the same drive letter. However, once you have established a job, you cannot mount a different virtual hard disk to the same drive letter used in your job. This could cause errors, orphan files, or possibly data corruption. If you must change drive letters associated with a virtual hard disk, delete the job, change the mounting, and then re-create the job. This issue may be resolved in a future release.
  4. If you are using SQL to create snapshots of your SQL database, the Double-Take Availability verification report will report the file size of the snapshot files on the source and target as different. This is a reporting issue only. The snapshot file is mirrored and replicated completely to the target. This issue may be resolved in a later release.
  5. If you are using HP StorageWorks File Migration Agent, migrated files will incorrectly report modified time stamp differences in the verification report. This is a reporting issue only. This issue may be resolved in a future release.
  6. During the job creation process, the Double-Take Console may select the wrong route to the target on the Set Options page. Make sure that you confirm the route selected is reachable from your source. This issue may be resolved in a future release.
  7. If you are performing DNS failover but your source and target are in a workgroup, the DNS suffix must be specified for the source NICs and that suffix must correspond to the zone name on the DNS server. This issue may be resolved in a future release.
  8. In a cluster configuration, if you add a possible owning node to the protected network name after a job has started, you must stop and restart the job. If you do not, the records for the new node will not be locked. This could cause problems with DNS records if the source cluster nodes are rebooted or the resources are otherwise cycled on the new owning node. This issue may be resolved in a future release.
  9. When you first open the Double-Take Console, the Home page may not show any jobs in an error state. If you go to any other page in the console and then return to the Home page, any jobs with errors will be displayed. This issue may be resolved in a future release.
  10. If you are protecting a Hyper-V source and you select an existing Hyper-V server to use as the target, the Hyper-V Integration Services on the target must be version 2008 SP2 or greater. Without this version, the target may not start after failover. This limitation may be addressed in a future release.
  11. Because 64-bit Windows has a strict driver signing policy, if you get a stop code 0x7b after failover, you may have drivers failing to load because their signatures are failing the policy. In this case, reboot the server and press F8. Choose the option to not enforce the driver signing policy. If this allows the system to boot, the problem is being caused by a catalog (.cat) file signature mismatch. This issue may be resolved in a future release. If your system still fails to boot, contact technical support.
  12. If you receive a path transformation error during job validation indicating a volume does not exist on the target server, even though there is no corresponding data being protected on the source, you will need to manually modify your replication rules. Go back to the Choose Data page and under the Replication Rules, locate the volume from the error message. Remove any rules associated with that volume. Complete the rest of the workflow and the validation should pass. This issue may be resolved in a future release.
  13. If you have specified replication rules that exclude a volume at the root, that volume will be incorrectly added as an inclusion if you edit the job after it has been established. If you need to edit your job, modify the replication rules to make sure they include the proper inclusion and exclusion rules that you want. This issue may be resolved in a future release.
  14. If you are using Double-Take over a WAN and do not have DNS name resolution, you will need to add the host names to the local hosts file on each server running Double-Take (see the example after this list). See your Windows documentation for instructions on adding entries to the hosts file.
  15. If you are running the Double-Take Console on a Windows XP machine and are inserting a Windows 2012 cluster into the console, you must use the IPv4 address to insert the cluster. The console will be unable to connect to the Double-Take Management Service on a Windows 2012 cluster if it is inserted using the name or fully-qualified domain name.
  16. If you are protecting a CSV volume on a Windows 2012 server, you may see event 16400 in the system log during a file rename operation. This error does not indicate a problem with replication or with data integrity on the target and can be ignored. This issue may be resolved in a future release.
  17. Service monitoring has been expanded to files and folders jobs; however, you must make sure that you are using the latest console. Older consoles will not be able to update service monitoring for upgraded servers, even for application jobs that previously had the service monitoring feature.
  18. The Double-Take Management Service was not cluster-aware until version 6.0. Therefore, if you are monitoring clusters or cluster nodes that are running Double-Take 5.3 from a Double-Take Console that is running version 6.0 or later, you will see errors related to your servers. This does not impact the Manage Jobs page of the console or any jobs that may be using these clusters or cluster nodes.
  19. You may see some inconsistencies with the screen display if the Double-Take Console is running in a maximized window or on low resolution displays. For example, you may not see scroll bars on the Set Options page, the Next button may not appear to become active on the Set Options page, or the validation process may not appear to complete on the Summary page. In cases like this, resize the window to any size other than maximized. This issue may be resolved in a future release.
  20. If you have Double-Take encryption enabled on a server, any Double-Take bandwidth limiting settings will be disregarded for any jobs on that server. You will still see the bandwidth limiting settings, be able to change the settings, and see the changes persisted, but the settings will be ignored and the encrypted data will be sent at unlimited bandwidth. The bandwidth settings that are configured will be used if you disable encryption. This issue may be addressed in a future release.
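
    As an example for the hosts file entries mentioned above, each line pairs an IP address with a host name. The server names and addresses below are hypothetical; on Windows, the file is typically %SystemRoot%\System32\drivers\etc\hosts.

    172.16.1.10    alpha-source
    172.16.1.20    alpha-target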

Full server and full server migration jobs

The following issues are for full server and full server migration jobs.

  1. Right before failover occurs, Double-Take will stop all services that are not critical to Windows. If the stop command fails (perhaps because of a blocking driver that cannot be shut down, as is the case with some anti-virus software) or a third-party tool restarts any of these services, Double-Take may not be able to successfully fail over files locked by the services. In this case, you may have to make manual modifications to your server after failover.
  2. After a failover is complete and your target server is online as your source, when you log in, you may have to specify a Windows reboot reason. You can specify any reason and continue. Additionally, you may see a prompt indicating a reboot is required because of a device change. You can disregard this prompt and select to reboot later.
  3. When a full server protection job is configured for reverse, the backup job may become orphaned if the initial backup has completed, the backup has been manually updated, and the target server is then restarted. In this specific case, you will need to contact technical support for a workaround to use the backup job during a reverse. This issue may be resolved in a future release.
  4. If you pause a full server job and then stop the job from the paused state, you will be unable to restart the job. You will have to delete the job and re-create it. This issue may be resolved in a future release.
  5. If you have reverse enabled, are updating your target image, and the Double-Take service on the target restarts (either manually or automatically, for example, when the target server restarts), you should restart your target image update after the Double-Take service is back online. This will correct any incorrect status displayed in the console and ensure the target image is complete.
  6. Once you have created a full server job, you cannot create any other jobs using the same source and target pair. This limitation may be resolved in a future release.

Exchange and SQL jobs

The following issues are for Exchange and SQL jobs.

  1. You may receive an error when trying to failback to the source cluster if the application’s virtual IP address is offline. Verify the source cluster’s virtual name and IP address resources are online, as well as the application’s virtual IP address resource, and retry failback.
  2. You cannot create an application job (Exchange or SQL) and a files and folders job using the same source and target pair. This limitation may be resolved in a future release.
  3. The following issues are specific to Microsoft Exchange protection.
    1. Do not failover an Exchange job with a version 6.0 source and a version 7.0.x target. You should upgrade your source before failover.
    2. The CatalogData directory will not be mirrored to the target, but will be regenerated automatically on the target after failover. Searching through messages after failover may be slower until the catalog data is rebuilt.
    3. If you are protecting Exchange 2010, the Fix All option may report validation errors if a public folder store already existed on the target before a clone but there is no corresponding public folder store on the source. You can disregard the errors. Subsequent fixes and validations will be successful.
    4. If you are protecting Exchange 2010 in a DAG environment and a mailbox store fails to come online after failback, you will need to manually mount the store. If that does not work, contact technical support for assistance.
    5. If you are protecting Exchange 2010 DAG with public folders and have made updates to the public folders on the target after failover, there may be a delay in seeing those updates after failback. This is because in a DAG configuration the default public folder store does not move when mailbox databases are moved between nodes. A restore from the target will be made to the node where the DAG is mounted, which may or may not contain the default public folder store. Therefore, after failback, updates to public folders may not be available until public folder replication occurs.

Full server to ESX/Hyper-V and V to ESX/Hyper-V jobs

The following issues are for full server to ESX/Hyper-V and V to ESX/Hyper-V jobs.

  1. If you are using Windows 2008 R2 Service Pack 1 in a cluster environment, SCVMM may incorrectly report your virtual machine as missing after the Double-Take Availability reverse process. To work around this issue, remove the virtual machine from SCVMM and it will automatically be added back in the proper state. This issue may be resolved in a future release.
  2. If you have failed over a Gen 2 virtual machine with an attached DVD drive from a Windows 2012 R2 source host to a Windows 2012 R2 target host using a V to Hyper-V job, the reverse process will fail. You will need to remove the DVD drive from the original source and then stop and restart the job to work around this problem. This issue may be resolved in a future release.

Linux jobs

The following issues are for Linux files and folders and full server to ESX appliance jobs.

  1. This release does not support full server (disk-to-disk) failover. However, you can configure a full server job and use an application note to walk you through a manual recovery process. See the Vision Solutions support web site for application notes.
  2. Windows sources are no longer supported with full server to ESX appliance jobs. If you were using a Windows source, you will need to use another job type.
  3. Do not use Internet Explorer version 10 or 11 to access the web interface of your virtual recovery appliance.
  4. If you have hard links in your Linux files and folders replication set, note the following caveat: two replication operations are sent when a file is closed, which may have a negative effect on performance. This issue may be resolved in a future release.
  5. If your Linux files and folders replication set contains exclude rules for specific files, the replication set generated during the restoration process will not contain those same exclude rules. Therefore, if those excluded files exist on the target, they may be restored, potentially overwriting the files on the source. This issue may be resolved in a future release.
  6. For Linux files and folders jobs, if you specify NoMoveAddresses with the MonitorOption DTCL command, the addresses will still be moved after failover. This issue may be resolved in a later release.
  7. For Linux files and folders jobs, when you schedule start criteria for transmission, you may see the transmission status in an error state at the scheduled start. The transmission will still continue as expected. This may be resolved in a later release.
  8. For Linux files and folders jobs, when you schedule a verification process, it may run a verification report when you save the scheduled verification settings. The scheduled verification will still process as expected. This may be resolved in a later release.
  9. For Linux files and folders jobs, the ability to stop transmission based on a specified byte limit is currently not functional. This issue may be resolved in a later release.
  10. Make sure you are using the appropriate client for your Linux job type. Full server to ESX appliance jobs must only use the Double-Take Console. If you have created a full server to ESX appliance job using the Double-Take Console, do not use the Replication Console to try to control that job. The reverse is also true for Linux files and folders jobs.
  11. Sparse files will become full size, zero-filled files on the target (see the example after this list).
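
    As an example of the sparse file behavior above, you can compare a file's apparent size with its allocated size on disk. This is a generic GNU coreutils check, not a Double-Take command, and /data/file.img is a hypothetical path.

    # a sparse file reports less allocated space (second command) than apparent size (first)
    du --apparent-size -h /data/file.img
    du -h /data/file.img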

Agentless jobs

The following issues are for agentless Hyper-V and agentless vSphere jobs.

  1. The new DNS update functionality added to agentless Hyper-V jobs in version 7.0 will not be available if you have upgraded your job from version 6.0. If you want to use the new DNS update functionality, you will need to delete your job and create a new one.
  2. If your agentless vSphere job is in the middle of replication and the source replication appliance or its host is rebooted, the job will be left in a stopped state. You will need to manually restart the job. This issue may be resolved in a future release.
  3. Do not use Internet Explorer version 10 or 11 to access the web interface of your replication or controller appliances.
  4. Agentless vSphere jobs do not support NFS shares and products that use NFS shares, such as NetApp. This limitation may be resolved in a future release.

Contact information

This documentation is subject to the following: (1) Change without notice; (2) Furnished pursuant to a license agreement; (3) Proprietary to the respective owner; (4) Not to be copied or reproduced unless authorized pursuant to the license agreement; (5) Provided without any expressed or implied warranties; (6) Does not entitle Licensee, End User or any other party to the source code or source code documentation of anything within the documentation or otherwise provided that is proprietary to Vision Solutions, Inc.; and (7) All Open Source and Third-Party Components (“OSTPC”) are provided “AS IS” pursuant to that OSTPC’s license agreement and disclaimers of warranties and liability.

Vision Solutions, Inc. and/or its affiliates and subsidiaries in the United States and/or other countries own/hold rights to certain trademarks, registered trademarks, and logos. Hyper-V and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds. vSphere is a registered trademark of VMware. All other trademarks are the property of their respective companies. For a complete list of trademarks registered to other companies, please visit that company’s website.

© 2013 Vision Solutions, Inc. All rights reserved.