Q491. A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region. A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high-performance access to the needed data for the duration of the 72-hour run. Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Correct answer: A. S3 Intelligent-Tiering keeps the at-rest cost of the 200 TB low between runs, lazy loading copies only the subset of files the job actually reads into the Lustre file system, and deleting the file system after each run eliminates the continuously running storage instances. Option C costs more both at rest (S3 Standard) and per run (batch loading the full data set up front), and options B and D cannot deliver the required shared high-performance access at this scale.
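For illustration, here is a minimal sketch of the monthly workflow from option A using the AWS SDK for Python (boto3). The Region, bucket name, subnet ID, and storage capacity below are placeholders, not values from the question.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # placeholder Region

# Create a short-lived scratch FSx for Lustre file system linked to the
# S3 bucket. With ImportPath set, FSx imports file metadata up front and
# lazy-loads file contents from S3 on first access, so only the subset of
# files the job reads is ever copied into Lustre.
resp = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=240000,  # GiB; SCRATCH_2 scales in 2400 GiB increments
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",  # low-cost type for temporary workloads
        "ImportPath": "s3://example-report-data",  # placeholder bucket
        "ExportPath": "s3://example-report-data",  # where modified files return
    },
)
fs_id = resp["FileSystem"]["FileSystemId"]
print(f"Created {fs_id}; wait for AVAILABLE, then mount it on the cluster.")

# ... the 72-hour job runs against the Lustre mount ...

# Because the application modifies data, export changed files back to the
# S3 bucket before tearing the file system down (in practice, poll the task
# until it succeeds before deleting).
fsx.create_data_repository_task(
    FileSystemId=fs_id,
    Type="EXPORT_TO_REPOSITORY",
    Report={"Enabled": False},
)

# Delete the file system after the job so no Lustre cost accrues between
# monthly runs; the data lives on in S3 Intelligent-Tiering.
fsx.delete_file_system(FileSystemId=fs_id)
```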