2024 AWS CERTIFIED SOLUTIONS ARCHITECT - ASSOCIATE PRACTICE EXAM QUESTIONS WITH ANSWERS



Amazon Glacier is designed for: (Choose 2 answers)

A. active database storage.
B. infrequently accessed data.
C. data archives.
D. frequently accessed data.
E. cached session data.

CORRECT ANSWERS: B. infrequently accessed data. C. data archives.
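To illustrate why B and C fit, archives can be written to the Glacier storage class directly through the S3 API. A minimal boto3 sketch; the bucket name and object key are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

# Archive a log bundle straight into the Glacier storage class:
# cheap, durable storage for data archives / infrequently accessed data.
with open("app-logs.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="my-archive-bucket",          # hypothetical bucket
        Key="archives/2024/app-logs.tar.gz",  # hypothetical key
        Body=f,
        StorageClass="GLACIER",
    )

Retrieval from Glacier requires a restore request first, which is exactly why it is unsuitable for active database storage, frequently accessed data, or cached session data.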

You require the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?

A. Create more, smaller files on Amazon S3.

B. Add additional cc2.8xlarge instances by introducing a task group.

C. Use smaller instances that have higher aggregate I/O performance.

D. Create fewer, larger files on Amazon S3.

CORRECT ANSWER: C. Use smaller instances that have higher aggregate I/O performance.

https://aws.amazon.com/elasticmapreduce/faqs/

A and D are irrelevant: changing file sizes does not address the idle CPUs.
B adds more of the same oversized instances, but the CPUs are already idle; the job is I/O-bound, so downsizing is the right move, not scaling out.

The FAQ excerpt below is the only line relevant to supporting C. It discusses sizing up when you need more capacity; here the situation is idle CPUs, so apply the same reasoning in reverse:

As a general guideline, we recommend that you limit 60% of your disk space to storing the data you will be processing, leaving the rest for intermediate output. Hence, given 3x replication on HDFS, if you were looking to process 5 TB on m1.xlarge instances, which have 1,690 GB of disk space, we recommend your cluster contains at least (5 TB * 3) / (1,690 GB * .6) = 15 m1.xlarge core nodes. You may want to increase this number if your job generates a high amount of intermediate data or has significant I/O requirements.
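The sizing rule from that FAQ excerpt is easy to check numerically. A quick sketch of the arithmetic, using only the figures quoted above (5 TB of data, 3x HDFS replication, 1,690 GB of disk per m1.xlarge, 60% usable):

import math

data_tb = 5               # data to process, in TB
replication = 3           # HDFS replication factor
disk_per_node_gb = 1690   # m1.xlarge local disk
usable_fraction = 0.6     # keep 40% free for intermediate output

required_gb = data_tb * 1000 * replication          # 15,000 GB to store
usable_gb_per_node = disk_per_node_gb * usable_fraction  # 1,014 GB/node

nodes = math.ceil(required_gb / usable_gb_per_node)
print(nodes)  # (5 TB * 3) / (1,690 GB * 0.6) = 14.8 -> 15 core nodes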

Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?

A. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

B. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

C. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to …

CORRECT ANSWER: A.

A. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

C is not possible: core nodes should be reserved for capacity that is required until the cluster completes, so Spot Instances belong only on temporary task nodes. (EMR does support Spot Instances; only the AWS GovCloud (US) region does not.)

B and C are not recommended in any case: putting ALL data on RRS would compromise the integrity of the raw log data, which the question explicitly rules out.

D is not possible, as Redshift recommends Reserved Instances:

Reserved Instances (a.k.a. Reserved Nodes) are appropriate for steady-state production workloads, and offer significant discounts over On-Demand pricing.

https://aws.amazon.com/redshift

Last but not least, it is A because:

Q: What are some EMR best practices?

If you are running EMR in production, you should specify an AMI version, Hive version, Pig version, etc. to make sure the version does not change unexpectedly (e.g. when EMR later adds support for a newer version). If your cluster is mission-critical, only use Spot Instances for task nodes, because if the Spot price increases you may lose the instances. In development, use logging and enable debugging to spot and correct errors faster. If you are using GZIP, keep your file size to 1-2 GB, because GZIP files cannot be split.

https://aws.amazon.com/elasticmapreduce/faqs/
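The "Spot only for task nodes" guidance maps directly onto the EMR API: task groups hold no HDFS data, so losing them to a Spot price spike only slows the job, it never corrupts it. A minimal boto3 sketch of adding a Spot task group to a running cluster; the cluster ID, instance type, count, and bid price are hypothetical values for illustration:

import boto3

emr = boto3.client("emr")

# Add a Spot task group to an existing cluster. Task nodes run map/reduce
# work but store no HDFS blocks, so reclaimed Spot capacity is safe to lose.
emr.add_instance_groups(
    JobFlowId="j-XXXXXXXXXXXXX",  # hypothetical cluster (job flow) ID
    InstanceGroups=[
        {
            "Name": "spot-task-group",
            "InstanceRole": "TASK",      # never CORE or MASTER for Spot
            "InstanceType": "m5.xlarge",
            "InstanceCount": 4,
            "Market": "SPOT",
            "BidPrice": "0.10",          # max hourly price in USD
        }
    ],
)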

You are the new IT architect in a company that operates a mobile sleep tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)

A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

B. Have th…

CORRECT ANSWERS: A and C.
A: You store around 1.2 GB/hour (100,000 users * 1 KB * 60/5 writes per hour). With most customers in the US, that data accumulates over roughly 10 night hours, i.e. about 12 GB/day. Keeping all of it in DynamoDB would be expensive, so drop the previous day's table once its data is already in S3.

C: The second most costly factor is your write units; buffering writes through an SQS queue would roughly halve the provisioned write throughput needed, since most customers are in North America and their writes arrive in one concentrated nightly peak.

B is wrong because it doesn't help with reducing costs: you will still need to parse the files, and storing raw files in S3 is cheaper than in DynamoDB.
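The arithmetic behind A and C can be checked in a few lines. The user count, item size, and interval come from the question; the 10-hour night window is the assumption used in the explanation above:

users = 100_000
item_kb = 1
interval_min = 5
night_hours = 10  # assumed active window, most users in one timezone band

# Storage written per night (the argument for A)
writes_per_hour = users * 60 // interval_min           # 1,200,000 writes/hour
gb_per_hour = writes_per_hour * item_kb / 1_000_000    # ~1.2 GB/hour
gb_per_night = gb_per_hour * night_hours               # ~12 GB/night

# Provisioned write capacity (the argument for C):
# a 1 KB item costs 1 write capacity unit.
peak_writes_per_sec = writes_per_hour / 3600           # ~333 writes/sec

print(gb_per_night, peak_writes_per_sec)

Buffering through SQS lets you provision DynamoDB closer to the average write rate instead of this synchronized peak, which is where the cost saving in C comes from.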

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

A. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos, with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

B. A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos wi…

CORRECT ANSWER: A. It is more appropriate, as B proposes Glacier as the origin for the CloudFront distribution, which is of no use: Glacier objects must be restored before they can be retrieved, so they cannot be served directly.
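The "Lifecycle Management to archive originals to Glacier after a few days" part of A corresponds to an S3 lifecycle rule. A minimal boto3 sketch; the bucket name, key prefix, and the 7-day window are illustrative assumptions, not values from the question:

import boto3

s3 = boto3.client("s3")

# Transition original high-resolution MP4 uploads to Glacier after 7 days,
# while the HLS renditions stay in Standard storage for CloudFront to serve.
s3.put_bucket_lifecycle_configuration(
    Bucket="training-videos-bucket",       # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-original-mp4",
                "Status": "Enabled",
                "Filter": {"Prefix": "originals/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 7, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)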

You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?

A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in…