
QUESTION 66

- (Exam Topic 2)
A solutions architect needs to assess a newly acquired company’s portfolio of applications and databases. The solutions architect must create a business case to migrate the portfolio to AWS. The newly acquired company runs applications in an on-premises data center. The data center is not well documented. The solutions architect cannot immediately determine how many applications and databases exist. Traffic for the applications is variable. Some applications are batch processes that run at the end of each month.
The solutions architect must gain a better understanding of the portfolio before a migration to AWS can begin. Which solution will meet these requirements?

Correct Answer: C
The company should use Migration Evaluator to generate a list of servers and build a report for a business case. The company should use AWS Migration Hub to view the portfolio and use AWS Application Discovery Service to gain an understanding of application dependencies. This solution meets the requirements because Migration Evaluator is a migration assessment service that helps create a data-driven business case for AWS cloud planning and migration. Migration Evaluator provides a clear baseline of what the company is running today and projects AWS costs based on measured on-premises provisioning and utilization [1]. Migration Evaluator can use an agentless collector to conduct broad-based discovery or securely upload exports from existing inventory tools [2]. Migration Evaluator integrates with AWS Migration Hub, a service that provides a single location to track the progress of application migrations across multiple AWS and partner solutions [3]. Migration Hub also supports AWS Application Discovery Service, which helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers, their associated dependencies, and their performance profiles [4].
References:
[1] https://aws.amazon.com/migration-evaluator/
[2] https://docs.aws.amazon.com/migration-evaluator/latest/userguide/what-is.html
[3] https://aws.amazon.com/migration-hub/
[4] https://aws.amazon.com/application-discovery/
[5] https://aws.amazon.com/server-migration-service/
[6] https://aws.amazon.com/dms/
[7] https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
[8] https://aws.amazon.com/application-migration-service/
[9] https://aws.amazon.com/storagegateway/
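As a rough illustration of the discovery step described above, the sketch below uses the AWS Application Discovery Service API (via boto3) to check collector health, list the servers discovered so far, and start a CSV export that could feed the business case. The Region choice and the printed summary are assumptions for illustration; Migration Evaluator itself is driven through its console and agentless collector rather than through this API.

```python
import boto3

# Application Discovery Service is available only in certain Regions;
# us-west-2 is assumed here for illustration.
discovery = boto3.client("discovery", region_name="us-west-2")

# Check which discovery agents / agentless collectors are reporting data.
agents = discovery.describe_agents()
for agent in agents.get("agentsInfo", []):
    print(agent.get("agentId"), agent.get("health"))

# List the servers discovered so far to get a first view of the portfolio size.
servers = discovery.list_configurations(configurationType="SERVER")
print("Discovered servers:", len(servers.get("configurations", [])))

# Export the collected data as CSV so it can be reviewed alongside the
# Migration Evaluator business case.
export = discovery.start_export_task(exportDataFormat=["CSV"])
print("Export task:", export["exportId"])
```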

QUESTION 67

- (Exam Topic 3)
A live-events company is designing a scaling solution for its ticket application on AWS. The application has high peaks of utilization during sale events. Each sale event is a one-time event that is scheduled.
The application runs on Amazon EC2 instances that are in an Auto Scaling group. The application uses PostgreSQL for the database layer.
The company needs a scaling solution to maximize availability during the sale events. Which solution will meet these requirements?

Correct Answer: D
The correct answer is D. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon Aurora PostgreSQL Multi-AZ DB cluster. Create an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger Aurora Replica before a sale event. Fail over to the larger Aurora Replica. Create another EventBridge rule that invokes another Lambda function to scale down the Aurora Replica after the sale event.
This solution meets the requirement to maximize availability during the sale events. A scheduled scaling policy for the EC2 instances allows the application tier to scale out and back in according to the predefined schedule of each sale event. Hosting the database on an Amazon Aurora PostgreSQL Multi-AZ DB cluster provides high availability and durability, as well as compatibility with PostgreSQL. An Amazon EventBridge rule that invokes an AWS Lambda function to create a larger Aurora Replica before a sale event ensures that the database can handle the increased read traffic during the peak period, and failing over to the larger replica promotes it to the writer instance, which also improves write performance. A second EventBridge rule that invokes another Lambda function after the sale event scales the database back down, reducing cost and resource usage outside the peak.
Reference: [3], section “Scaling Amazon Aurora MySQL and PostgreSQL with Aurora Auto Scaling”
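To make the moving parts concrete, here is a hedged boto3 sketch of the two pieces: a scheduled scaling action on the Auto Scaling group, and the kind of Lambda handler an EventBridge rule could invoke before a sale event to add a larger Aurora Replica and fail over to it. All names, instance classes, and times are placeholders, not values from the question.

```python
from datetime import datetime, timezone

import boto3

autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Scheduled scale-out of the EC2 tier ahead of a known sale event.
# Group name, capacities, and start time are placeholders.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticket-app-asg",
    ScheduledActionName="sale-event-scale-out",
    StartTime=datetime(2025, 7, 1, 17, 0, tzinfo=timezone.utc),
    MinSize=10,
    MaxSize=50,
    DesiredCapacity=30,
)


def pre_event_handler(event, context):
    """Lambda handler an EventBridge rule could invoke before the sale event."""
    # Add a larger Aurora Replica to the existing Aurora PostgreSQL cluster.
    rds.create_db_instance(
        DBInstanceIdentifier="tickets-replica-large",  # placeholder name
        DBClusterIdentifier="tickets-aurora-cluster",  # placeholder cluster
        DBInstanceClass="db.r6g.4xlarge",              # larger class for the event
        Engine="aurora-postgresql",
    )
    # Wait until the new replica is available, then promote it by failing over.
    # (In a real function this wait would likely be handled outside the handler.)
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="tickets-replica-large"
    )
    rds.failover_db_cluster(
        DBClusterIdentifier="tickets-aurora-cluster",
        TargetDBInstanceIdentifier="tickets-replica-large",
    )
```

The post-event Lambda function would mirror this, downsizing or deleting the extra replica once traffic subsides.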

QUESTION 68

- (Exam Topic 3)
A company is using AWS Control Tower to manage AWS accounts in an organization in AWS Organizations. The company has an OU that contains accounts. The company must prevent any new or existing Amazon EC2 instances in the OU's accounts from gaining a public IP address.
Which solution will meet these requirements?

Correct Answer: C
This option will meet the requirements of preventing any new or existing EC2 instances in the OU’s accounts from gaining a public IP address. An SCP is a policy that you can attach to an OU or an account in AWS Organizations to define the maximum permissions for the entities in that OU or account. By creating an SCP that denies the ec2:RunInstances and ec2:AssociateAddress actions when the value of the aws:RequestTag/aws:PublicIp condition key is true, you can prevent any user or role in the OU from launching instances that have a public IP address or attaching a public IP address to existing instances. This will effectively enforce a security best practice and reduce the risk of unauthorized access to your EC2 instances.
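As a rough sketch of how such a guardrail could be put in place, the snippet below creates and attaches a deny SCP with boto3 and AWS Organizations. The policy body here uses the ec2:AssociatePublicIpAddress condition key, a commonly documented Bool key for checking whether a launch requests a public IP; the exact wording in the exam option may differ, and the policy name and OU ID are placeholders.

```python
import json

import boto3

org = boto3.client("organizations")

# Deny launching instances whose network interface requests a public IP.
# The condition key shown is the documented Bool key for this check;
# adjust the statement to match the exact policy required.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicIpOnLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:network-interface/*",
            "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Prevent EC2 instances from launching with public IPs",
    Name="deny-ec2-public-ip",  # placeholder name
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to the OU that contains the member accounts (placeholder ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",
)
```

A second deny statement on ec2:AssociateAddress, as the explanation describes, would cover attaching public (Elastic) IP addresses to existing instances.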

QUESTION 69

- (Exam Topic 3)
A company runs its application on Amazon EC2 instances and AWS Lambda functions. The EC2 instances experience a continuous and stable load. The Lambda functions experience a varied and unpredictable load. The application includes a caching layer that uses an Amazon MemoryDB for Redis cluster.
A solutions architect must recommend a solution to minimize the company's overall monthly costs. Which solution will meet these requirements?

Correct Answer: A
This option uses Savings Plans and reserved nodes to minimize the company's overall monthly costs for running its application on EC2 instances, Lambda functions, and MemoryDB cache nodes. Savings Plans are flexible pricing models that offer significant savings on AWS usage (up to 72%) in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one-year or three-year term. Two types of Savings Plans apply here: Compute Savings Plans and EC2 Instance Savings Plans. Compute Savings Plans apply to compute usage across Amazon EC2 instances, AWS Fargate, and AWS Lambda functions. EC2 Instance Savings Plans apply to a specific instance family within a Region and provide deeper savings than Compute Savings Plans (up to 72% versus up to 66%). Reserved nodes are a similar commitment-based discount that applies to MemoryDB cache nodes; they offer up to 55% savings compared with on-demand pricing.
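One hedged way to sanity-check this mix is to ask the Cost Explorer API for a Compute Savings Plans purchase recommendation and to list MemoryDB reserved node offerings, as sketched below. The term, payment option, and lookback window are illustrative choices, not values from the question, and the MemoryDB call assumes reserved nodes are offered in the account's Region.

```python
import boto3

# Cost Explorer is served from us-east-1 regardless of where workloads run.
ce = boto3.client("ce", region_name="us-east-1")
memorydb = boto3.client("memorydb")

# Ask Cost Explorer for a Compute Savings Plans purchase recommendation
# based on recent usage.
recommendation = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",        # covers EC2, Fargate, and Lambda usage
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
summary = recommendation.get("SavingsPlansPurchaseRecommendation", {}).get(
    "SavingsPlansPurchaseRecommendationSummary", {}
)
print("Estimated monthly savings:", summary.get("EstimatedMonthlySavingsAmount"))

# List reserved node offerings for the MemoryDB cache layer.
offerings = memorydb.describe_reserved_nodes_offerings()
for offering in offerings.get("ReservedNodesOfferings", []):
    print(offering.get("NodeType"), offering.get("Duration"), offering.get("OfferingType"))
```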

QUESTION 70

- (Exam Topic 2)
A manufacturing company is building an inspection solution for its factory. The company has IP cameras at the end of each assembly line. The company has used Amazon SageMaker to train a machine learning (ML) model to identify common defects from still images.
The company wants to provide local feedback to factory workers when a defect is detected. The company must be able to provide this feedback even if the factory’s internet connectivity is down. The company has a local Linux server that hosts an API that provides local feedback to the workers.
How should the company deploy the ML model to meet these requirements?

Correct Answer: B
The company should use AWS IoT Greengrass to deploy the ML model to the local server and provide local feedback to the factory workers. AWS IoT Greengrass is a service that extends AWS cloud capabilities to local devices, allowing them to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks [1]. AWS IoT Greengrass also supports ML inference at the edge, enabling devices to run ML models locally without requiring internet connectivity [2].
The other options are not correct because:
- Setting up an Amazon Kinesis video stream from each IP camera to AWS would not work if the factory’s internet connectivity is down. It would also incur unnecessary costs and latency to stream video data to the cloud and back.
- Ordering an AWS Snowball device would not be a scalable or cost-effective solution for deploying the ML model. AWS Snowball is a service that provides physical devices for data transfer and edge computing, but it is not designed for continuous operation or frequent updates [3].
- Deploying Amazon Monitron devices on each IP camera would not work because Amazon Monitron is a service that monitors the condition and performance of industrial equipment using sensors and machine learning, not cameras [4].
References:
[1] https://aws.amazon.com/greengrass/
[2] https://docs.aws.amazon.com/greengrass/v2/developerguide/use-machine-learning-inference.html
[3] https://aws.amazon.com/snowball/
[4] https://aws.amazon.com/monitron/
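To ground the Greengrass option, here is a hedged boto3 sketch that pushes an AWS IoT Greengrass v2 deployment to the thing group containing the core device on the local Linux server. The thing-group ARN, account ID, and the custom inference component name and versions are placeholders; the component itself, which would load the SageMaker-trained model and call the existing local feedback API, is assumed to have been registered separately.

```python
import boto3

greengrass = boto3.client("greengrassv2")

# Deploy a custom inference component (plus the public Nucleus component) to the
# thing group that contains the factory's Greengrass core device.
# ARN, component name, and versions are placeholders for illustration.
response = greengrass.create_deployment(
    targetArn="arn:aws:iot:us-east-1:111122223333:thinggroup/factory-line-1",
    deploymentName="defect-inference-deployment",
    components={
        "aws.greengrass.Nucleus": {"componentVersion": "2.12.0"},
        "com.example.DefectInference": {"componentVersion": "1.0.0"},
    },
)
print("Deployment ID:", response["deploymentId"])
```

Once deployed, the component runs inference locally and posts results to the local API, so workers keep receiving defect feedback even when the factory loses internet connectivity.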