Double-Take Release Notes

Double-Take version 7.0.1, Thursday, October 24, 2013

© 1996-2013 Vision Solutions, Inc. All rights reserved.

Check the Vision Solutions support web site for the most up-to-date version of this readme file.

This readme file contains last-minute release notes that are not included in the user documentation. The user documentation is available on the Vision Solutions support web site, on the product CD, and in the installation directory after you have installed the software.

Overview

  1. This is an updated release that includes, but may not be limited to, the modifications listed in this section.

Installation and upgrade

  1. This release may not include all product changes provided in limited releases. Use the following lists to determine which previous versions have been included in this release.
  2. If you are performing a push installation from the Double-Take Console, and the servers you are pushing to do not have a C drive, make sure you update the installation folder fields on the Install page. The Double-Take Console does not validate that these fields point to a volume that exists, and if they point to a volume that does not exist, the installation will not start. This issue may be resolved in a future release.
  3. If you are upgrading an existing job from version 5.3, you should configure manual intervention for failover before you upgrade. Any jobs that are set to automatic failover will have an Engine Connection Error after the upgrade. If you did not configure manual intervention and encounter the engine error, you will have to start a mirror (Other Job Actions, Mirror, Start) or stop and restart the job.
  4. If you will be protecting a Linux source and you previously had Double-Take for Linux version 4.7, use the following steps to update the Double-Take driver on the Linux source to the new driver included in this release.
    1. Shut down all of your protected applications on your Linux source.
    2. Save your DTFS configuration by moving /etc/DT/dtfs_mounts to another location, outside of /etc/DT.
    3. If you were using full server protection with version 4.7, you will need to remove it using DTSetup (Setup tasks, Configure File System or Block Device Replication, Full Server Replication Configuration, Remove Full Server Protection).
    4. Stop the Double-Take service and driver using DTSetup (Start/Stop Double-Take daemon, Stop the running service and teardown driver config).
    5. If you were using full server protection with version 4.7, reboot the Linux source server to unload the old driver.
    6. Upgrade Double-Take on your Linux source using the installation instructions in the User's Guide.
    7. Restart Double-Take on your Linux source.
  5. If you have a full server job with reverse enabled, make sure you update your target image after you have upgraded Double-Take.
  6. Do not upgrade Double-Take on servers that have version 5.3 full server to ESX, full server to Hyper-V, V to ESX, or V to Hyper-V jobs on them. You must delete any of these 5.3 job types and re-create the job after you have upgraded Double-Take.
  7. If you have a version 5.3 Exchange cluster to standalone job, you must stop the job before upgrading Double-Take on the target. You can restart the job after the target upgrade is complete.
  8. If you are using SLES version 10.x for Linux files and folders jobs, you may get an installation database warning regarding an exclusive database lock. You can safely disregard this message.
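
Step 2 of the Linux driver upgrade procedure in item 4 above (preserving /etc/DT/dtfs_mounts before the upgrade) can be sketched as a small shell helper. This is a hypothetical illustration only: the helper name, the backup location, and the stand-in file contents are assumptions, and the demo runs against a temporary directory so nothing on the machine is modified.

```shell
# Hypothetical helper for item 4, step 2: move the DTFS mount configuration
# outside /etc/DT so it survives the driver upgrade.
preserve_dtfs_config() {
  src="$1"
  dest_dir="$2"
  mkdir -p "$dest_dir"      # create the backup location if needed
  mv "$src" "$dest_dir/"    # relocate the file outside /etc/DT
}

# Demo with a stand-in file in a temporary directory; on a real source the
# file is /etc/DT/dtfs_mounts and the destination is your choice.
tmp=$(mktemp -d)
echo "/dev/sda1 /data dtfs" > "$tmp/dtfs_mounts"
preserve_dtfs_config "$tmp/dtfs_mounts" "$tmp/backup"
ls "$tmp/backup"    # lists the saved file, dtfs_mounts
```

After the upgrade completes (steps 6 and 7), the saved file can be moved back into place the same way.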

Licensing

  1. If you are using version 6.0 or earlier, you must upgrade any existing Double-Take activation codes using the following process.
    1. Log in to the support site at http://www.VisionSolutions.com/SupportCentral.
    2. Select Agreements.
    3. Select Request Activation Codes (7.0).
    4. Select the licenses you would like to upgrade and use for this release and click Submit.
    5. You will receive an email from CSRGroup@visionsolutions.com with the subject Upgraded Double-Take Activation Codes. This email includes your upgraded codes in an XML file along with instructions for how to import the codes. For complete details on importing a license file, see the User's Guide or online help.
  2. This release requires activation of each Double-Take license. If you do not activate each license within the required time frame, your Double-Take jobs will fail.

    For complete details on license activation, see the User's Guide or online help.

Common job issues

The following issues may be common to multiple job types within Double-Take Availability and/or Double-Take Move.

  1. If you have specified an IP address as the source server name, but that IP address is not the server's primary IP address, you will have issues with snapshot functionality. If you need to use snapshots, use the source's primary IP address or its name.
  2. Only one primary IPv6 address can be monitored for failover when using the replication service monitoring method. Therefore, if you have multiple IPv6 addresses per subnet, failover monitoring may not work properly with the replication service monitoring method. To ensure proper failover monitoring, use IPv4 addresses or use only a single IPv6 address per subnet with the replication service monitoring method. You can also use the network service monitoring method with any IPv4 or IPv6 configuration.
  3. If you are using Windows 2008 R2, virtual hard disks can be mounted and dismounted reusing the same drive letter. However, once you have established a job, you cannot mount a different virtual hard disk to the same drive letter used in your job. This could cause errors, orphan files, or possibly data corruption. If you must change drive letters associated with a virtual hard disk, delete the job, change the mounting, and then re-create the job. This issue may be resolved in a future release.
  4. If you are using SQL to create snapshots of your SQL database, the Double-Take Availability verification report will report the file size of the snapshot files on the source and target as different. This is a reporting issue only. The snapshot file is mirrored and replicated completely to the target. This issue may be resolved in a later release.
  5. If you are using HP StorageWorks File Migration Agent, migrated files will incorrectly report modified time stamp differences in the verification report. This is a reporting issue only. This issue may be resolved in a future release.
  6. During the job creation process, the Double-Take Console may select the wrong route to the target on the Set Options page. Make sure that you confirm the route selected is reachable from your source. This issue may be resolved in a future release.
  7. If you are performing DNS failover but your source and target are in a workgroup, the DNS suffix must be specified for the source NICs and that suffix must correspond to the zone name on the DNS server. This issue may be resolved in a future release.
  8. In a cluster configuration, if you add a possible owning node to the protected network name after a job has started, you must stop and restart the job. If you do not, the records for the new node will not be locked. This could cause problems with DNS records if the source cluster nodes are rebooted or the resources are otherwise cycled on the new owning node. This issue may be resolved in a future release.
  9. When you first open the Double-Take Console, the Home page may not show any jobs in an error state. If you go to any other page in the console and then return to the Home page, any jobs with errors will be displayed. This issue may be resolved in a future release.
  10. If you are using Trend Micro Firewall, shares may not be accessible after failover. You can work around this issue by resetting the NIC's TCP/IP configuration with the netsh interface ip reset command. For more details on using this command, see your Windows reference.
  11. If you are protecting a Hyper-V source and you select an existing Hyper-V server to use as the target, the Hyper-V Integration Services on the target must be version 2008 SP2 or greater. Without this version, the target may not start after failover. This limitation may be addressed in a future release.
  12. Because Windows 64-bit has a strict driver signing policy, if you get a stop code 0x7b after failover, you may have drivers failing to load because the driver signatures are failing the policy. In this case, reboot the server and press F8. Choose the option to not enforce the driver signing policy. If this allows the system to boot, then the problem is being caused by a .cat file signature mismatch. This issue may be resolved in a future release. If your system still fails to boot, contact technical support.
  13. If you receive a path transformation error during job validation indicating a volume does not exist on the target server, even though there is no corresponding data being protected on the source, you will need to manually modify your replication rules. Go back to the Choose Data page and under the Replication Rules, locate the volume from the error message. Remove any rules associated with that volume. Complete the rest of the workflow and the validation should pass. This issue may be resolved in a future release.
  14. If you have specified replication rules that exclude a volume at the root, that volume will be incorrectly added as an inclusion if you edit the job after it has been established. If you need to edit your job, modify the replication rules to make sure they include the proper inclusion and exclusion rules that you want. This issue may be resolved in a future release.
  15. If you are using Double-Take over a WAN and do not have DNS name resolution, you will need to add the host names to the local host file on each server running Double-Take. See your Windows documentation for instructions on adding entries to host files.
  16. If you are running the Double-Take Console on a Windows XP machine and are inserting a Windows 2012 cluster into the console, you must use the IPv4 address to insert the cluster. The console will be unable to connect to the Double-Take Management Service on a Windows 2012 cluster if it is inserted using the name or fully-qualified domain name.
  17. If you are protecting a CSV volume on a Windows 2012 server, you may see event 16400 in the system log during a file rename operation. This error does not indicate a problem with replication or with data integrity on the target and can be ignored. This issue may be resolved in a future release.
  18. Service monitoring has been expanded to files and folders jobs; however, you must make sure that you are using the latest console. Older consoles will not be able to update service monitoring for upgraded servers, even for application jobs that had the service monitoring feature previously.
  19. If you are using a DNS reverse lookup zone, it must be Active Directory integrated. Double-Take is unable to determine if this integration exists and therefore cannot warn you during job creation if it doesn't exist.
  20. The Double-Take Management Service was not cluster-aware until version 6.0. Therefore, if you are monitoring clusters or cluster nodes that are running Double-Take 5.3 from a Double-Take Console that is running version 6.0 or later, you will see errors related to your servers. This does not impact the Manage Jobs page of the console or any jobs that may be using these clusters or cluster nodes.
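
The workaround in item 10 above involves Windows-only commands run from an elevated Command Prompt, followed by a reboot. As a hedged sketch, the commands can be collected into a .cmd file for review before running them on the affected server; this sketch only writes the file and does not execute anything Windows-specific, and the log file name is an example.

```shell
# Hypothetical sketch for item 10: write the Windows TCP/IP reset commands
# to a script for review. Run the resulting .cmd on the failed-over server
# from an elevated prompt; static IP settings must be re-applied afterward.
CMD_FILE=$(mktemp)
cat > "$CMD_FILE" <<'EOF'
netsh interface ip reset resetlog.txt
shutdown /r /t 0
EOF
cat "$CMD_FILE"
```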
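
For item 15 above, host file entries follow the standard one-address-one-name-per-line format. The names and addresses below are examples only; the real file is /etc/hosts on Linux and C:\Windows\System32\drivers\etc\hosts on Windows, and this sketch appends to a temporary file so nothing on the local machine is modified.

```shell
# Hypothetical sketch for item 15: local hosts file entries so each server
# running Double-Take can resolve the other's name without DNS.
HOSTS_FILE=$(mktemp)    # stand-in for the real hosts file
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.10   dt-source
192.168.2.20   dt-target
EOF
cat "$HOSTS_FILE"
```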

Files and folders and data migration jobs

The following issues are for files and folders and data migration jobs.

  1. In some cases, when you failback before restoring your data from the target, your job may end up in a stopped state, not allowing you to continue with the next step of restoring. Click Start from the toolbar to restart the job, and then you can continue with the restoration process. This issue may be resolved in a future release.
  2. If you have two files and folders jobs created from the same source to the same target, only one failover monitor will be created. Therefore, if you delete one of the jobs, that shared failover monitor will be deleted. To re-create the failover monitor, you can stop and start the job (which will require a remirror) or you can restart the Double-Take Management Service (which will not require a remirror). This issue may be resolved in a future release.

Full server and full server migration jobs

The following issues are for full server and full server migration jobs.

  1. You will be unable to failover to a snapshot if your full server job is stopped. If you need to revert to a snapshot, contact technical support for a manual process. This issue may be addressed in a future release.
  2. Right before failover occurs, Double-Take will stop all services that are not critical to Windows. If the stop command fails (perhaps because the service uses a blocking driver that cannot be shut down, as is the case with some anti-virus software) or a third-party tool restarts any of these services, Double-Take may not be able to successfully fail over files locked by the services. In this case, you may have to make manual modifications to your server after failover.
  3. After a failover is complete and your target server is online as your source, when you log in, you may have to specify a Windows reboot reason. You can specify any reason and continue. Additionally, you may see a prompt indicating a reboot is required because of a device change. You can disregard this error and select to reboot later.
  4. If the login credentials for your source and target are different, you will be unable to restart your job after a reverse. In this case, you will need to update the credentials for the target server and then stop and restart the Double-Take Management Service on the target server.
  5. Before you reverse a full server job, make sure the production NIC on your original source is online. If the NIC is disabled or unplugged, you will experience problems with the reverse. Make sure you continue to access the servers through the reserved IP addresses, but you can disregard any IP address conflicts for the primary NIC. Since the new source (running on the original target hardware) already has the source's address assigned to it, Windows will automatically assign a different address to the original source.
  6. When a full server protection job is configured for reverse, the backup job may become orphaned if the initial backup has completed and been manually updated, and the target server is then restarted. In this specific case, you will need to contact technical support for a workaround to use the backup job during a reverse. This issue may be resolved in a future release.
  7. If you are protecting a VMware virtual server running VMware Tools version 9.0 and your target is a VMware virtual server running an earlier version (earlier than version 9.0) of VMware Tools, you will have to reinstall VMware Tools on the target after failover.
  8. If you pause a full server job and then stop the job from the paused state, you will be unable to restart the job. You will have to delete the job and re-create it. This issue may be resolved in a future release.
  9. If you have reverse enabled, are updating your target image, and the Double-Take service on the target restarts (either manually or automatically, for example the target server restarts), you should restart your target image update after the Double-Take service is back online. This will correct any incorrect status displayed in the console and ensure the target image is complete.
  10. Once you have created a full server job, you cannot create any other jobs using the same source and target pair. This limitation may be resolved in a future release.

Exchange and SQL jobs

The following issues are for Exchange and SQL jobs.

  1. You may receive an error when trying to failback to the source cluster if the application’s virtual IP address is offline. Verify the source cluster’s virtual name and IP address resources are online, as well as the application’s virtual IP address resource, and retry failback.
  2. You cannot create an application job (Exchange or SQL) and a files and folders job using the same source and target pair. This limitation may be resolved in a future release.
  3. The following issues are specific to Microsoft Exchange protection.
    1. The CatalogData directory will not be mirrored to the target, but will be regenerated automatically on the target after failover. Searching through messages after failover may be slower until the catalog data is rebuilt.
    2. If you are protecting Exchange 2007 or Exchange 2010 and need to use the Exchange Management Console while the Double-Take Availability test failover process is running, you will need to start the IIS service. Stop the IIS service after the test failover is complete.
    3. If you are protecting Exchange 2010, arbitration mailboxes will not be failed over. These mailboxes can be rehomed manually using the Set-Mailbox -Database PowerShell command.
    4. If you are protecting Exchange 2010 and you have a consolidated target server, you must have a send connector configured specifically with the target server before failover. Otherwise, you will be unable to send email to the Internet after failover. This issue may be resolved in a future release.
    5. If you are protecting Exchange 2010, the Fix All option may report validation errors if a public folder store already existed on the target before a clone but there is no corresponding public folder store on the source. You can disregard the errors. Subsequent fixes and validations will be successful.
    6. If you are protecting Exchange 2010 in a DAG environment and a mailbox store fails to come online after failback, you will need to manually mount the store. If that does not work, contact technical support for assistance.
    7. If you are protecting Exchange 2010 DAG with public folders and have made updates to the public folders on the target after failover, there may be a delay in seeing those updates after failback. This is because in a DAG configuration the default public folder store does not move when mailbox databases are moved between nodes. A restore from the target will be made to the node where the DAG is mounted, which may or may not contain the default public folder store. Therefore, after failback, updates to public folders may not be available until public folder replication occurs.

Full server to ESX/Hyper-V and V to ESX/Hyper-V jobs

The following issues are for full server to ESX, full server to Hyper-V, V to ESX, and V to Hyper-V jobs.

  1. If you are protecting your source to a Hyper-V target and the source is running on Windows Server 2008 and the target replica has one or more SCSI drives, then after failover the CD/DVD ROM will not be allocated. If the CD/DVD ROM is required, you will need to edit the virtual machine settings to add a CD/DVD ROM after failover. By not allocating a CD/DVD ROM under these specific conditions, drive letter consistency will be guaranteed.
  2. If you are using Windows 2008 R2 Service Pack 1 in a cluster environment, SCVMM may incorrectly report your virtual machine as missing after the Double-Take Availability reverse process. To work around this issue, remove the virtual machine from SCVMM and it will automatically be added back in the proper state. This issue may be resolved in a future release.

Linux jobs

The following issues are for Linux files and folders and full server to ESX appliance jobs.

  1. This release does not support full server (disk-to-disk) failover. However, you can configure a full server job and use an application note to walk you through a manual recovery process. See the Vision Solutions support web site for application notes.
  2. Windows sources are no longer supported with full server to ESX appliance jobs. If you were using a Windows source, you will need to use another job type.
  3. Do not use Internet Explorer version 10 to access the web interface of your virtual recovery appliance.
  4. Double-Take does not currently support Red Hat Enterprise Linux version 6.0 eCryptfs, which is a new operating system feature that provides data and filename encryption on a per-file basis. This limitation may be resolved in a future release.
  5. If you have hard links in your Linux files and folders replication set, note the following caveats.
  6. Two replication operations are sent when a file is closed. This may have a negative effect on performance. This issue may be resolved in a future release.
  7. If your Linux files and folders replication set contains exclude rules for specific files, the replication set generated during the restoration process will not contain those same exclude rules. Therefore, if those excluded files exist on the target, they may be restored, potentially overwriting the files on the source. This issue may be resolved in a future release.
  8. For Linux files and folders jobs, if you specify NoMoveAddresses with the MonitorOption DTCL command, the addresses will still be moved after failover. This issue may be resolved in a later release.
  9. For Linux files and folders jobs, when you schedule start criteria for transmission, you may see the transmission status in an error state at the scheduled start. The transmission will still continue as expected. This may be resolved in a later release.
  10. For Linux files and folders jobs, when you schedule a verification process, it may run a verification report when you save the scheduled verification settings. The scheduled verification will still process as expected. This may be resolved in a later release.
  11. For Linux files and folders jobs, if you are moving or deleting orphan files, select a move location outside of the replication set. If you select the location where the files are currently located, the files will be deleted. If you select another location inside the replication set, the files will be moved multiple times and then possibly deleted.
  12. For Linux files and folders jobs, the ability to stop transmission based on a specified byte limit is currently not functional. This issue may be resolved in a later release.
  13. If you are using the Replication Console and interactive text client (DTCL -i) for Linux files and folders jobs, these clients will fail if there is no DNS entry or way for a server to resolve server names.
  14. Make sure you are using the appropriate client for your Linux job type. Full server to ESX appliance jobs must only use the Double-Take Console. If you have created a full server to ESX appliance job using the Double-Take Console, do not use the Replication Console to try to control the full server to ESX appliance job. The reverse is also true for Linux files and folders jobs.

Agentless jobs

The following issues are for agentless Hyper-V and agentless vSphere jobs.

  1. The new DNS update functionality added to agentless Hyper-V jobs in version 7.0 will not be available if you have upgraded your job from version 6.0. If you want to use the new DNS update functionality, you will need to delete your job and create a new one.
  2. If your agentless vSphere job is in the middle of replication and the source replication appliance or its host is rebooted, the job will end up stopped. You will need to manually restart the job. This issue may be resolved in a future release.
  3. If your agentless vSphere jobs are using vCenter, your target vCenter must be the same or a newer version than your source vCenter. For example, you can have vCenter 5.0 to 5.0, 5.0 to 5.1, or 5.1 to 5.1; however, you cannot have vCenter 5.1 to 5.0. You can also have a single vCenter.
  4. Do not use Internet Explorer version 10 to access the web interface of your replication or controller appliances.

Contact information

This documentation is subject to the following: (1) Change without notice; (2) Furnished pursuant to a license agreement; (3) Proprietary to the respective owner; (4) Not to be copied or reproduced unless authorized pursuant to the license agreement; (5) Provided without any expressed or implied warranties; (6) Does not entitle Licensee, End User or any other party to the source code or source code documentation of anything within the documentation or otherwise provided that is proprietary to Vision Solutions, Inc.; and (7) All Open Source and Third-Party Components (“OSTPC”) are provided “AS IS” pursuant to that OSTPC’s license agreement and disclaimers of warranties and liability.

Vision Solutions, Inc. and/or its affiliates and subsidiaries in the United States and/or other countries own/hold rights to certain trademarks, registered trademarks, and logos. Hyper-V and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds. vSphere is a registered trademark of VMware. All other trademarks are the property of their respective companies. For a complete list of trademarks registered to other companies, please visit that company’s website.

© 2013 Vision Solutions, Inc. All rights reserved.