A request to protect some new domain controllers has been submitted in the ticketing system. The engineer creates a backup job with the following steps:
1. Right-clicks on the Jobs navigation item on the left.
2. Selects VMware vSphere from the menu.
3. Enters a name for the job.
4. Selects workloads to protect.
5. Defines a job schedule.
6. Clicks the Finish button.
When testing restores, the engineer finds that the backups are crash-consistent. Which set of steps should the engineer use to avoid crash-consistent backups for the domain controllers?
Correct Answer:
A
Application-aware processing is a feature in Veeam Backup & Replication that creates transactionally consistent backup images of VMs. For domain controllers, this feature ensures that backups are consistent with the applications running inside the VM, such as Active Directory services. To avoid crash-consistent backups and ensure application consistency, the engineer must enable the Application-Aware Processing option during job configuration.
References:
✑ Veeam Backup & Replication User Guide: Application-Aware Processing
✑ Veeam Best Practices: Protecting Active Directory Domain Controllers
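For illustration, the same option can also be enabled programmatically through the Veeam Backup & Replication REST API. The sketch below is a rough Python example; the endpoint paths, header values, and JSON field names shown are assumptions for the sketch rather than the documented schema, so verify them against the REST API reference before use.

```python
# Sketch: enable application-aware processing on an existing backup job via REST.
# NOTE: the endpoint paths and JSON field names below are illustrative assumptions,
# not the authoritative Veeam REST API schema.
import requests

VBR = "https://vbr-server:9419"                 # assumed REST API base URL and port
TOKEN = "<bearer-token-from-api-login>"         # obtained from the API token endpoint
HEADERS = {"Authorization": f"Bearer {TOKEN}", "x-api-version": "1.1-rev0"}

job_id = "<job-id-of-the-domain-controller-job>"

# Fetch the current job definition (assumed endpoint).
job = requests.get(f"{VBR}/api/v1/jobs/{job_id}", headers=HEADERS, verify=False).json()

# Turn on application-aware processing in the guest processing settings
# (field names are assumptions mirroring the UI option).
job["guestProcessing"]["appAwareProcessing"]["isEnabled"] = True

# Push the modified definition back (assumed endpoint and method).
resp = requests.put(f"{VBR}/api/v1/jobs/{job_id}", headers=HEADERS, json=job, verify=False)
resp.raise_for_status()
print("Application-aware processing enabled; the next run should be application-consistent.")
```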
An engineer has a NAS file share to protect.
What preliminary step must be taken to create a NAS backup job?
Correct Answer:
D
Before creating a NAS backup job, an engineer must add the NAS file share to the Veeam Backup & Replication (VBR) console under the Inventory section. This involves specifying the NAS filer and the particular file share to be protected. This step allows Veeam to recognize the file share as a valid source for backup operations.
References:
✑ Veeam Backup & Replication User Guide: NAS Backup
✑ Veeam Help Center: Adding File Shares to Inventory
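As a rough sketch of this preliminary step in code, the file share could be registered through the REST API before the NAS backup job is created. The endpoint path and payload fields below are illustrative assumptions, not the documented API contract.

```python
# Sketch: register an SMB file share in inventory before creating the NAS backup job.
# NOTE: endpoint path and payload field names are assumptions for illustration only.
import requests

VBR = "https://vbr-server:9419"              # assumed REST API base URL
HEADERS = {
    "Authorization": "Bearer <token>",       # token obtained from the API login endpoint
    "x-api-version": "1.1-rev0",             # assumed API version header
}

share = {
    "type": "SMB",                           # hypothetical field: share protocol
    "path": r"\\nas01\projects",             # UNC path of the share to protect
    "credentialsId": "<stored-credential-id>",
}

# Add the share to inventory (hypothetical endpoint); only after this step can a
# NAS backup job reference it as a source.
resp = requests.post(f"{VBR}/api/v1/inventory/fileShares", headers=HEADERS, json=share, verify=False)
resp.raise_for_status()
print("File share registered in inventory:", resp.json().get("id", "<unknown>"))
```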
A customer's NAS has multiple hardware failures, and the NAS is no longer accessible. All of the users are impacted as they need to access the NAS for day-to-day work.
Which restore method could minimize the service impact to the users?
Correct Answer:
B
Instant file share recovery is the most effective method to minimize service impact in this scenario. This feature allows users to instantly access the NAS data directly from the backup files without having to wait for the entire file share to be restored. This approach is beneficial when quick access to data is crucial.
References:
✑ Veeam Backup & Replication Documentation
✑ Veeam NAS Backup Guide
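The service-impact difference can be made concrete with a back-of-the-envelope comparison; the share size, restore throughput, and publish time below are assumed figures for illustration, not values from the scenario.

```python
# Back-of-the-envelope downtime comparison: full restore vs. instant file share recovery.
# All figures are illustrative assumptions, not measured values.

share_size_tb = 8                  # assumed size of the failed NAS share
restore_throughput_mb_s = 400      # assumed sustained restore rate in MB/s

# Full restore: users wait until all data has been written back to new hardware.
full_restore_hours = (share_size_tb * 1024 * 1024) / restore_throughput_mb_s / 3600

# Instant file share recovery: the share is published directly from backup storage,
# so users regain access as soon as it is mounted and shared out again.
instant_publish_minutes = 5        # assumed time to publish the share and repoint clients

print(f"Full restore wait:     ~{full_restore_hours:.1f} hours")
print(f"Instant recovery wait: ~{instant_publish_minutes} minutes (full migration can follow later)")
```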
A Windows Server using the ReFS filesystem has been used as a standalone Veeam repository for several years and is due for replacement. A new Windows server using the ReFS filesystem, with twice the capacity, has been created to replace the old server. Backup files need to be transferred to the new server with no disruptions to the existing backup chains.
The Veeam engineer has begun to move backup files to the new repository but is now getting alerts that it is running out of space.
How could the engineer have avoided this issue?
Correct Answer:
C
To avoid running out of space when migrating backups to a new repository, the "Move backup..." function in Veeam Backup & Replication should be used. Unlike a simple copy action, the move function relocates backup files to the new repository without leaving duplicate copies behind, so no additional space is consumed by holding the same chains in two places during the transfer. The move also keeps the backup chain intact, and Veeam automatically updates the configuration to point to the new backup location, preventing any disruption to the backup chain.
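A simple capacity sketch illustrates the difference. The chain sizes below are assumed example values, and modeling the move as releasing source files as each chain completes is an assumption about timing made only to show the reasoning.

```python
# Rough space reasoning: manual copy vs. the "Move backup..." function.
# All sizes are assumed example values.

total_chains_tb = 30          # assumed combined size of all backup chains
largest_chain_tb = 4          # assumed size of the largest single chain

# Manual copy: the old and new repositories both hold full copies until the
# engineer verifies the transfer and deletes the originals, roughly doubling the footprint.
peak_footprint_copy_tb = total_chains_tb * 2

# "Move backup...": chains are relocated and the source files released as the move
# completes (assumption), so only the chain currently in flight exists in two places.
peak_footprint_move_tb = total_chains_tb + largest_chain_tb

print(f"Peak combined footprint (copy): ~{peak_footprint_copy_tb} TB")
print(f"Peak combined footprint (move): ~{peak_footprint_move_tb} TB")
```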
A customer wants to set up a Scale-Out Backup Repository. Due to malware concerns, immutability is recommended. An on-premises server can be used to hold primary backups, but it can only hold about 21 days of backups. A copy of the backups should be stored in AWS. The retention for all backups is 60 days.
Which configuration of a Scale-Out Backup Repository meets these requirements?
Correct Answer:
D
To meet the requirements of setting up a Scale-Out Backup Repository (SOBR) with immutability for malware protection and the specified retention, the most fitting configuration is D: copy and move mode with a Performance Tier configured on a Linux Hardened Repository using the XFS file system and immutability set for 21 days, and a Capacity Tier on Amazon S3 with immutability set for 60 days. Copy and move mode ensures that backups are first stored on the on-premises Linux Hardened Repository, where immutability prevents modification of the most recent 21 days of backups. Because the on-premises server has limited capacity, backups older than 21 days are moved to the Capacity Tier in Amazon S3, where they remain immutable for the entire 60-day retention period. This configuration leverages the strengths of both on-premises and cloud storage while ensuring that all backups are protected from modification or deletion, aligning with the customer's malware concerns and retention requirements.
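The tier placement under this policy can be sketched as a small model. The day thresholds follow the answer above, while the sample restore-point ages and the exact offload timing (copy mode sending points to S3 as soon as they are created) are assumptions for illustration.

```python
# Sketch of where restore points live under copy + move mode with a 21-day
# Performance Tier window and 60-day total retention. Timing details are assumptions.
from dataclasses import dataclass

PERF_WINDOW_DAYS = 21   # on-prem hardened repository window / immutability period
RETENTION_DAYS = 60     # total retention, enforced in Amazon S3 with immutability

@dataclass
class RestorePoint:
    name: str
    age_days: int

def locate(rp: RestorePoint) -> str:
    if rp.age_days > RETENTION_DAYS:
        return "expired - removed from both tiers"
    if rp.age_days <= PERF_WINDOW_DAYS:
        # Copy mode keeps a cloud copy alongside the on-prem copy.
        return "Performance Tier (immutable) + Capacity Tier copy (immutable)"
    # Move mode frees on-prem space once the point leaves the operational window.
    return "Capacity Tier only (immutable until day 60)"

for rp in (RestorePoint("daily-05", 5), RestorePoint("daily-30", 30), RestorePoint("daily-75", 75)):
    print(f"{rp.name} ({rp.age_days:>2}d): {locate(rp)}")
```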