QUESTION 81

- (Topic 3)
A company's application runs on AWS. The application stores large documents in an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. The company will continue paying to store the data but wants to save on its total S3 costs. The company wants authorized external users to have the ability to access the documents in milliseconds.
Which solution will meet these requirements MOST cost-effectively?

Correct Answer: D
This option is the most cost-effective because it uses Amazon CloudFront, a content delivery service that speeds up distribution of static and dynamic web content, such as .html, .css, .js, and image files, to users. By routing all requests to the S3 bucket through CloudFront, content is cached at edge locations and served from there, which reduces S3 request and data transfer costs. CloudFront also delivers content with low latency and high data transfer rates, so authorized external users can still access the documents in milliseconds. This solution meets the requirement of continuing to pay for storage while lowering total S3 costs.

Option A is less effective because configuring the S3 bucket as a Requester Pays bucket only shifts the cost of data transfer and requests from the bucket owner to the requester. It does not reduce total S3 costs, as the company still pays to store the data and for any requests made by its own users.

Option B is less effective because changing the storage tier to S3 Standard for all existing and future objects is intended for frequently accessed data with high durability and availability, and S3 Standard has higher storage costs than S3 Standard-IA, so total S3 costs would increase rather than decrease.

Option C is less effective because S3 Transfer Acceleration speeds up transfers into and out of an S3 bucket by routing requests through CloudFront edge locations, but it adds charges for data transfer and requests and therefore does not reduce total S3 costs.
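The savings from caching can be seen with a back-of-the-envelope model. The sketch below compares request-related costs with and without CloudFront in front of the bucket; all prices, request volumes, and the cache-hit ratio are illustrative assumptions, not current AWS rates, and CloudFront's own data-transfer-out pricing is omitted for simplicity.

```python
# Rough comparison of serving documents directly from S3 Standard-IA
# versus fronting the bucket with CloudFront.
# All rates below are ASSUMED for illustration, not real AWS prices.

S3_IA_GET_PER_1K = 0.001        # assumed cost per 1,000 Standard-IA GETs
S3_IA_RETRIEVAL_PER_GB = 0.01   # assumed per-GB retrieval fee for Standard-IA
CF_REQUEST_PER_10K = 0.0075     # assumed CloudFront HTTPS request price
CACHE_HIT_RATIO = 0.90          # assumed fraction of requests served at the edge

def monthly_request_cost(requests: int, gb_transferred: float,
                         use_cloudfront: bool) -> float:
    """Estimate monthly request-related cost (storage cost is unchanged)."""
    if not use_cloudfront:
        return (requests / 1_000) * S3_IA_GET_PER_1K \
            + gb_transferred * S3_IA_RETRIEVAL_PER_GB
    # With CloudFront, only cache misses reach S3.
    misses = requests * (1 - CACHE_HIT_RATIO)
    miss_gb = gb_transferred * (1 - CACHE_HIT_RATIO)
    s3_cost = (misses / 1_000) * S3_IA_GET_PER_1K \
        + miss_gb * S3_IA_RETRIEVAL_PER_GB
    cf_cost = (requests / 10_000) * CF_REQUEST_PER_10K
    return s3_cost + cf_cost

direct = monthly_request_cost(5_000_000, 500, use_cloudfront=False)
cached = monthly_request_cost(5_000_000, 500, use_cloudfront=True)
```

Under these assumptions the cached path is cheaper because 90% of GETs and retrieval bytes never touch S3; the exact break-even depends on the real cache-hit ratio and pricing in the chosen Region.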

QUESTION 82

- (Topic 4)
A manufacturing company runs its report generation application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The application requires frequent updates to its tightly coupled modules. The application becomes complex to maintain as the company adds new features.
Each time the company patches a software module, the application experiences downtime. Report generation must restart from the beginning after any interruptions. The company wants to redesign the application so that the application can be flexible, scalable, and gradually improved. The company wants to minimize application downtime.
Which solution will meet these requirements?

Correct Answer: C
The solution that will meet the requirements is to run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto scaling. This solution will allow the application to be flexible, scalable, and gradually improved, as well as minimize application downtime. By breaking down the monolithic application into microservices, the company can decouple the modules and update them independently, without affecting the whole application. By running the microservices on Amazon ECS, the company can leverage the benefits of containerization, such as portability, efficiency, and isolation. By enabling service auto scaling, the company can adjust the number of containers running for each microservice based on demand, ensuring optimal performance and cost. Amazon ECS also supports various deployment strategies, such as rolling update or blue/green deployment, that can reduce or eliminate downtime during updates.
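Service auto scaling for an ECS service is configured through Application Auto Scaling by registering the service's desired task count as a scalable target. The sketch below only builds the parameter dictionary for that call; the cluster and service names are hypothetical, and the real call would be `boto3.client("application-autoscaling").register_scalable_target(**params)`.

```python
# Sketch of the Application Auto Scaling registration for an ECS
# service. "reports-cluster" and "report-worker" are hypothetical
# names; no AWS call is made here.

def ecs_service_scaling_params(cluster: str, service: str,
                               min_tasks: int, max_tasks: int) -> dict:
    """Build RegisterScalableTarget parameters for an ECS service."""
    return {
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "MinCapacity": min_tasks,
        "MaxCapacity": max_tasks,
    }

params = ecs_service_scaling_params("reports-cluster", "report-worker", 1, 10)
```

A scaling policy (for example, target tracking on CPU utilization) would then be attached to this target so each microservice scales independently of the others.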
The other solutions are not as effective because they either fail to meet the requirements or introduce new challenges.

Running the application on AWS Lambda as a single function with maximum provisioned concurrency does not break down the monolith into microservices or reduce maintenance complexity. Lambda functions are also limited by execution time (15 minutes), memory size (10 GB), and concurrency quotas, which may not be sufficient for the 20-minute report generation workload.

Running the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation strategy introduces the risk of interruptions. Spot Instances are not guaranteed to be available and can be reclaimed by AWS at any time with a two-minute warning, which could cause report generation to fail or restart from scratch.

Running the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy neither breaks down the monolith into microservices nor minimizes downtime: the all-at-once strategy deploys updates to all instances simultaneously, causing a brief outage for the application.
References:
✑ Amazon Elastic Container Service
✑ Microservices on AWS
✑ Service Auto Scaling - Amazon Elastic Container Service
✑ AWS Lambda
✑ Amazon EC2 Spot Instances
✑ AWS Elastic Beanstalk

QUESTION 83

- (Topic 2)
A company runs a stateless web application in production on a group of Amazon EC2 On-Demand Instances behind an Application Load Balancer. The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight. Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?

Correct Answer: B
Reserved Instances are cheaper than the On-Demand Instances the company currently runs, and unlike Spot Instances, which can be interrupted at any time, they do not affect availability. As a rough pricing guide: On-Demand offers no discount because there is no commitment; Reserved Instances save roughly 40-60% in exchange for a 1-year or 3-year commitment; Spot Instances save roughly 50-90% because AWS makes no commitment to keep them running.
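The trade-off can be illustrated with a simple weekly model of the usage pattern in the question: a steady baseline running 24x7 plus extra capacity for the 8-hour peak on each of 5 business days. All prices and instance counts below are assumptions for illustration only.

```python
# Weekly EC2 cost model: Reserved Instances for the 24x7 baseline,
# On-Demand for the business-day peak. Rates are ASSUMED, not real.

ON_DEMAND_HOURLY = 0.10   # assumed On-Demand price per instance-hour
RESERVED_HOURLY = 0.06    # assumed effective Reserved rate (~40% discount)

BASE_INSTANCES = 2        # steady 24x7 baseline capacity
PEAK_EXTRA = 4            # extra instances during the 8-hour peak
WEEK_HOURS = 7 * 24       # 168 hours in a week
PEAK_HOURS = 5 * 8        # 8-hour peak on 5 business days

def weekly_cost(baseline_rate: float) -> float:
    """Baseline runs 24x7 at the given rate; peak capacity is On-Demand."""
    base = BASE_INSTANCES * WEEK_HOURS * baseline_rate
    peak = PEAK_EXTRA * PEAK_HOURS * ON_DEMAND_HOURLY
    return base + peak

all_on_demand = weekly_cost(ON_DEMAND_HOURLY)
reserved_baseline = weekly_cost(RESERVED_HOURLY)
```

Under these assumed rates, covering the always-on baseline with Reserved Instances lowers the weekly bill while the Auto Scaling peak stays on On-Demand, so availability is unaffected.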

QUESTION 84

- (Topic 3)
A company has a large dataset for its online advertising business stored in an Amazon RDS for MySQL DB instance in a single Availability Zone. The company wants business reporting queries to run without impacting the write operations to the production DB instance.
Which solution meets these requirements?

Correct Answer: A
Read replicas fit this use case exactly: the production database continues to take its normal write load, while the reporting application runs its analytics queries against a read replica. The production application is unaffected because the replica serves read-only (SELECT) statements; INSERT, UPDATE, and DELETE statements still go to the source instance.
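Creating the replica is a single RDS API call against the production instance. The sketch below only builds the parameters; the instance identifiers are hypothetical, and the real call would be `boto3.client("rds").create_db_instance_read_replica(**params)`, after which the reporting application is pointed at the replica's endpoint.

```python
# Parameter sketch for an RDS for MySQL read replica. Identifiers
# are hypothetical; no AWS call is made here.

def read_replica_params(source_id: str, replica_id: str) -> dict:
    """Build CreateDBInstanceReadReplica parameters."""
    return {
        "SourceDBInstanceIdentifier": source_id,
        "DBInstanceIdentifier": replica_id,
        # The replica serves read-only traffic (SELECT); writes still
        # go to the source instance, so reporting cannot impact writes.
    }

params = read_replica_params("prod-mysql", "prod-mysql-reporting")
```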

QUESTION 85

- (Topic 4)
A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch.
Which solution will meet these requirements?

Correct Answer: A
CloudWatch cross-account observability is a feature that allows you to monitor and troubleshoot applications that span multiple accounts within a Region. You can seamlessly search, visualize, and analyze your metrics, logs, traces, and Application Insights applications in any of the linked accounts without account boundaries.

To enable CloudWatch cross-account observability, you set up one or more AWS accounts as monitoring accounts and link them with multiple source accounts. A monitoring account is a central AWS account that can view and interact with observability data shared by other accounts. A source account is an individual AWS account that shares observability data and resources with one or more monitoring accounts. You can create the links between monitoring accounts and source accounts with the CloudWatch console, the AWS CLI, or the AWS API, and you can use AWS Organizations to link all accounts in an organization or organizational unit to the monitoring account.

CloudWatch provides a CloudFormation template that you can deploy in each source account to share observability data with the monitoring account. The template creates a sink resource in the monitoring account and an observability link resource in the source account, along with the IAM roles and policies needed for cross-account access to the observability data. Therefore, the solution that meets the requirements is to enable CloudWatch cross-account observability for the monitoring account and deploy the CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
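The sink-and-link relationship can be sketched with the Observability Access Manager (OAM) API that backs this feature. The code below only assembles the parameters; the sink name and ARN are hypothetical, and the real calls would be `boto3.client("oam").create_sink(...)` in the monitoring account and `create_link(...)` in each source account.

```python
# Sketch of CloudWatch cross-account observability (OAM) resources:
# a sink in the monitoring account, a link in each source account.
# Names and the sink ARN are hypothetical; no AWS call is made here.

def sink_params(name: str) -> dict:
    """Sink created once, in the monitoring account."""
    return {"Name": name}

def link_params(sink_arn: str) -> dict:
    """Link created in each source account, pointing at the sink."""
    return {
        "SinkIdentifier": sink_arn,
        "LabelTemplate": "$AccountName",   # how the source shows up
        # Telemetry types shared with the monitoring account:
        "ResourceTypes": [
            "AWS::CloudWatch::Metric",
            "AWS::Logs::LogGroup",
            "AWS::XRay::Trace",
        ],
    }

link = link_params("arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE")
```

The CloudFormation template mentioned above automates exactly this pairing, so each source account only needs one stack deployment.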
The other options are not valid because:
✑ Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, helping you ensure your accounts stay within your organization's access control guidelines. SCPs do not provide access to CloudWatch in the monitoring account; rather, they restrict the actions that users and roles can perform in the source accounts. SCPs are not required to enable CloudWatch cross-account observability, because the CloudFormation template creates the necessary IAM roles and policies for cross-account access.
✑ IAM users are entities that you create in AWS to represent the people or applications that use them to interact with AWS, and they can have permissions to access the resources in your AWS account. Configuring a new IAM user in the monitoring account and an IAM policy in each AWS account is not a valid solution, as it does not enable CloudWatch cross-account observability. The IAM user would have to switch between accounts to view the observability data, which is neither seamless nor efficient, and could not search, visualize, and analyze metrics, logs, traces, and Application Insights applications across multiple accounts in a single place.
✑ Cross-account IAM policies allow you to delegate access to resources in different AWS accounts that you own; you attach a cross-account policy to a user or group in one account and specify which accounts that user or group can access. Creating a new IAM user in the monitoring account with cross-account IAM policies in each AWS account is not a valid solution, as it does not enable CloudWatch cross-account observability. It has the same drawbacks: the IAM user must switch between accounts to view the observability data, and cannot search, visualize, and analyze metrics, logs, traces, and Application Insights applications across multiple accounts in a single place.
References: CloudWatch cross-account observability, CloudFormation template for CloudWatch cross-account observability, Service control policies, IAM users, Cross-account IAM policies