
AWS Interview Questions Phase 6

1. Is one Elastic IP address enough for every instance that I have running?
Ans: It depends! Every instance comes with its own private and public address. The private address is associated exclusively with the instance and is returned to Amazon EC2 only when the instance is stopped or terminated. Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. The public address can, however, be replaced by an Elastic IP address, which stays with the instance as long as the user doesn't manually disassociate it. And if you are hosting multiple websites on your EC2 server, you may require more than one Elastic IP address.

2. What are the best practices for Security in Amazon EC2?
Ans: There are several best practices for securing Amazon EC2. A few of them are given below:
• Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
• Restrict access by allowing only trusted hosts or networks to access ports on your instance (a minimal example follows below).
• Review the rules in your security groups regularly, and apply the principle of least privilege: only open up the permissions that you require.
• Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
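To make the "trusted hosts only" point concrete, here is a minimal boto3 sketch of a least-privilege ingress rule; the security group ID and the CIDR range are placeholders for your own values:

```python
# Hedged sketch: allow SSH only from one trusted network.
# The group ID and CIDR below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # your security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,                      # open only the port you need
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",   # trusted network only
                      "Description": "office VPN"}],
    }],
)
```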
Section 3: Amazon Storage
3. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?
A. Set permissions on the object to public read during upload.
B. Configure the bucket policy to set all objects to public read.
C. Use AWS Identity and Access Management roles to set the bucket to public read.
D. Amazon S3 objects default to public read, so no action is needed.
Answer B.
Explanation: Rather than making changes to every object, it's better to set the policy for the whole bucket. IAM roles (option C) grant permissions to AWS principals rather than making objects public, and S3 objects are private by default, so options C and D are incorrect.
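As a hedged illustration of option B, a public-read bucket policy could be applied with boto3 like this; the bucket name is a placeholder:

```python
# Sketch: a bucket policy granting public read on every object.
# "my-public-assets" is a placeholder bucket name.
import json
import boto3

s3 = boto3.client("s3")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-public-assets/*",  # all objects in the bucket
    }],
}
s3.put_bucket_policy(Bucket="my-public-assets", Policy=json.dumps(policy))
```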
4. A customer wants to leverage Amazon Simple Storage Service(S3) and Amazon Glacier as part of their backup and archive
infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the
third party software to only the Amazon S3 bucket named “company-backup”?
A. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive “company-backup”
B. A custom bucket policy limited to the Amazon S3 API in “company-backup”
C. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
D. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.
Answer D.
Explanation: Taking a cue from the previous question, this use case calls for more granular permissions scoped to a specific principal, hence a custom IAM user policy limited to the S3 API on the “company-backup” bucket is used here.
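For illustration, such an IAM user policy might be attached with boto3 as sketched below; the user name and policy name are hypothetical, while the bucket name comes from the question:

```python
# Sketch of option D: an IAM user policy that limits the third-party
# software to the "company-backup" bucket only.
import json
import boto3

iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",                        # the Amazon S3 API only
        "Resource": [
            "arn:aws:s3:::company-backup",       # the bucket itself
            "arn:aws:s3:::company-backup/*",     # and its objects
        ],
    }],
}
iam.put_user_policy(
    UserName="backup-software",                  # hypothetical IAM user
    PolicyName="company-backup-only",
    PolicyDocument=json.dumps(policy),
)
```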
5. Can S3 be used with EC2 instances? If yes, how?
Ans: Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. To execute systems in the Amazon EC2 environment, developers use the provided tools to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2. Another use case is for websites hosted on EC2 to load their static content from S3. For a detailed discussion on S3, please refer to our S3 AWS blog.
6. A customer implemented AWS Storage Gateway with a gateway- cached volume at their main office. An event takes the link between
the main and branch office offline. Which methods will enable the branch office to access their data?
A. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
B. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
C. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
D. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.
Answer C.
Explanation: The fastest way is to launch a new Storage Gateway instance. Since time is the key factor for any business, troubleshooting the broken link would take longer; instead, you can simply restore the previous working state of the Storage Gateway from a gateway snapshot onto a new instance launched from the AWS Storage Gateway AMI in Amazon EC2.
7. When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon
S3 bucket, which method or service will you use?
A. Amazon Glacier
B. Amazon CloudFront
C. Amazon Transfer Acceleration
D. Amazon Snowball
Answer C.
Explanation: You would not use Snowball, because for now the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer by routing it over optimized network paths through Amazon's content delivery network edge locations, improving transfer speeds by up to 300% compared with normal data transfer.
8. How can you speed up data transfer in Snowball?
Ans: The data transfer can be sped up in the following ways:
• By performing multiple copy operations at one time, i.e., if the workstation is powerful enough, you can initiate multiple cp commands, each from a different terminal, on the same Snowball device.
• By copying from multiple workstations to the same Snowball.
• By transferring large files, or by batching small files together, which reduces the encryption overhead.
• By eliminating unnecessary hops, i.e., setting things up so that the source machine(s) and the Snowball are the only machines active on the switch being used; this can hugely improve performance.
Section 4: AWS VPC
9. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP
address you should:
A. Launch the instance from a private Amazon Machine Image (AMI).
B. Assign a group of sequential Elastic IP addresses to the instances.
C. Launch the instances in the Amazon Virtual Private Cloud (VPC).
D. Launch the instances in a Placement Group.
Answer C.
Explanation: In a VPC, you choose the subnet and can specify the exact private IP address an instance receives when you launch it, so launching into a VPC is how you assign predetermined private IPs. A VPC is also the best way of connecting your cloud resources (e.g., EC2 instances) to your own data center: once your data center is connected to the VPC in which your instances are present, each instance keeps a private IP address that can be reached from your data center, so you can access your public cloud resources as if they were on your own network.
10. Can I connect my corporate datacenter to the Amazon Cloud?
Ans: Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company's network and your VPC (Virtual Private Cloud); this will allow you to interact with your EC2 instances as if they were within your existing network.
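For a rough idea of the building blocks involved, here is a boto3 sketch of creating the VPN pieces; the public IP, ASN, and VPC ID are placeholders for your on-premises VPN device and your VPC:

```python
# Hedged sketch of a site-to-site VPN setup; all IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Represents your on-premises VPN device.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="198.51.100.10", BgpAsn=65000)

# The AWS side of the tunnel, attached to your VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0")

# The VPN connection tying the two together.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])
```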
11. Is it possible to change the private IP addresses of an EC2 instance while it is running/stopped in a VPC?
Ans: The primary private IP address stays with the instance throughout its lifetime and cannot be changed; however, secondary private IP addresses can be unassigned, assigned, or moved between interfaces or instances at any point.
12. Why do you make subnets?
A. Because there is a shortage of networks
B. To efficiently utilize networks that have a large no. of hosts.
C. Because there is a shortage of hosts.
D. To efficiently utilize networks that have a small no. of hosts.
Answer B.
Explanation: If a network has a large number of hosts, managing all of them can be a tedious job. Therefore, we divide the network into subnets (sub-networks) so that managing these hosts becomes simpler.
13. Which of the following is true?
A. You can attach multiple route tables to a subnet
B. You can attach multiple subnets to a route table
C. Both A and B
D. None of these.
Answer B.
Explanation: Route tables are used to route network packets; if a subnet had multiple route tables, it would be ambiguous where a packet should go. Therefore, a subnet is associated with exactly one route table. Since a route table can hold any number of routes, attaching multiple subnets to one route table is possible.
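A small boto3 sketch of that one-to-many relationship, with placeholder IDs:

```python
# Sketch: one route table can be associated with many subnets, but each
# subnet has exactly one route table. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
for subnet_id in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:
    ec2.associate_route_table(
        RouteTableId="rtb-0123456789abcdef0",  # the same table each time
        SubnetId=subnet_id,
    )
```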
14. In CloudFront what happens when content is NOT present at an Edge location and a request is made to it?
A. An Error “404 not found” is returned
B. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location
C. The request is kept on hold till content is delivered to the edge location
D. The request is routed to the next closest edge location
Answer B.
Explanation: CloudFront is a content delivery network that caches data at the edge location nearest the user to reduce latency. If the content is not present at an edge location, the first request is served from the origin server and the content is stored in the edge cache, from where subsequent requests are served.
15. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?
Ans: Yes. Amazon CloudFront supports custom origins, including origins outside of AWS. With AWS Direct Connect, you will be charged at the applicable data transfer rates.
16. If my AWS Direct Connect fails, will I lose my connectivity?
Ans: If a backup AWS Direct Connect connection has been configured, in the event of a failure traffic will switch over to the second one. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections, to ensure faster detection and failover. If you have configured a backup IPsec VPN connection instead, all VPC traffic will fail over to the backup VPN connection automatically; traffic to and from public resources such as Amazon S3 will be routed over the Internet. If you have neither a backup AWS Direct Connect link nor an IPsec VPN link, Amazon VPC traffic will be dropped in the event of a failure.
Section 5: Amazon Database
17. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?
A. Only for Oracle RDS types
B. Yes
C. Only if it is configured at launch
D. No
Answer D.
Explanation: No. The purpose of having a standby instance is to survive an infrastructure failure, so the standby instance is stored in a different Availability Zone, which is a physically separate, independent infrastructure.

18. When would I prefer Provisioned IOPS over Standard RDS storage?
A. If you have batch-oriented workloads
B. If you use production online transaction processing (OLTP) workloads.
C. If you have workloads that are not sensitive to consistent performance
D. All of the above
Answer B.
Explanation: Provisioned IOPS delivers high, consistent I/O rates, but it is expensive. Production OLTP workloads perform large numbers of small reads and writes and are sensitive to consistent performance, which is exactly what Provisioned IOPS is designed for; batch-oriented workloads can usually tolerate the variable throughput of standard storage.
19. How are Amazon RDS, DynamoDB and Redshift different?
• Amazon RDS is a database management service for relational databases; it manages patching, upgrading and backing up of your databases without your intervention. RDS is a database management service for structured data only.
• DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured data.
• Redshift is an entirely different service: it is a data warehouse product and is used for data analysis.
20. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with
primary DB instance?
A. Yes
B. Only with MySQL based RDS
C. Only for Oracle RDS instances
D. No
Answer D.
Explanation: No. The standby DB instance cannot be used in parallel with the primary DB instance; it exists solely for standby purposes and serves no traffic unless the primary instance goes down.
21. Your company’s branch offices are all over the world. They use software with a multi-regional deployment on AWS, with MySQL 5.6 for data persistence. The task is to run an hourly batch process that reads data from every region to compute cross-regional reports, which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?
A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
Answer A.
Explanation: We take an RDS instance as the master because it manages the database for us, and since data must be read from every region, we put a read replica of this instance in each region the data has to be read from. Option C is not correct: a read replica is more efficient than shipping snapshots, and a read replica can be promoted to an independent DB instance if needed, whereas a DB snapshot must first be restored into a separate DB instance before it can be queried.
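A hedged boto3 sketch of option A's key piece, a cross-region read replica in the HQ region; the regions, identifiers, and the source ARN are placeholders:

```python
# Sketch: create a read replica in the HQ region from a regional master.
import boto3

rds = boto3.client("rds", region_name="us-east-1")   # HQ region
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-replica-hq",
    # ARN of the regional master (used for cross-region replicas).
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:eu-west-1:123456789012:db:sales-master"),
    SourceRegion="eu-west-1",
)
```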
22. Can I run more than one DB instance for Amazon RDS for free?
Ans: Yes. You can run more than one Single-AZ Micro database instance for free. However, any use exceeding 750 instance hours per month, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example, if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you accumulate 800 instance hours of usage, of which 750 hours are free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.
For a detailed discussion on this topic, please refer to our RDS AWS blog.
23. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?
A. Amazon ElastiCache
B. Amazon DynamoDB
C. Amazon Redshift
D. Amazon Elastic MapReduce
Answer B,C.
Explanation: DynamoDB is a fully managed NoSQL database service that can ingest any type of unstructured data, including data from e-commerce websites; the analysis can then be done with Amazon Redshift. We are not using Elastic MapReduce because near real-time analysis is needed.
24. Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?
Ans: Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a projection expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
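A minimal boto3 sketch of a projection expression pulling one nested element; the table, key, and attribute names are hypothetical:

```python
# Sketch: fetch only selected elements of a nested JSON document.
import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table
resp = table.get_item(
    Key={"OrderId": "1234"},
    # Retrieve just one scalar and one nested list element.
    ProjectionExpression="CustomerName, #it[1].Price",
    ExpressionAttributeNames={"#it": "Items"},  # alias to avoid reserved words
)
print(resp.get("Item"))
```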
25. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the
application requires complex queries and table joins. Which configuration provides the solution for the company’s
requirements?
A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
B. Amazon RDS for MySQL with Multi-AZ
C. Amazon ElastiCache
D. Amazon DynamoDB
Answer B.
Explanation: Complex queries and table joins call for a relational database, which rules out DynamoDB and ElastiCache. Amazon RDS for MySQL supports complex queries and joins, and a Multi-AZ deployment provides the high availability the company requires with minimal operational overhead, which suits its limited staff.
26. What happens to my backups and DB Snapshots if I delete my DB Instance?
Ans: When you delete a DB instance, you have the option of creating a final DB snapshot; if you do, you can restore your database from that snapshot later. RDS retains this user-created final snapshot, along with all other manually created DB snapshots, after the instance is deleted. Automated backups, however, are deleted; only manually created DB snapshots are retained.
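For illustration, the deletion with a final snapshot looks roughly like this in boto3; the identifiers are placeholders:

```python
# Sketch: delete an RDS instance while keeping a final snapshot
# that the database can later be restored from.
import boto3

rds = boto3.client("rds")
rds.delete_db_instance(
    DBInstanceIdentifier="my-db-instance",
    SkipFinalSnapshot=False,                 # keep a final snapshot
    FinalDBSnapshotIdentifier="my-db-final-snapshot",
)
```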
27. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers
A. Managing web sessions.
B. Storing JSON documents.
C. Storing metadata for Amazon S3 objects.
D. Running relational joins and complex updates.
Answer A,C.
Explanation: Managing web sessions and storing metadata for S3 objects are classic DynamoDB use cases: both involve small items retrieved by key at low latency. If all your JSON documents share the same fields, e.g. [id, name, age], they may be better served by a relational database, and DynamoDB does not support relational joins or complex multi-table updates at all, so option D requires a relational engine.
28. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?
Ans: You can load the data in the following two ways:
• You can use the COPY command to load data in parallel directly into Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host (see the sketch below).
• AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source and the desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
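A hedged sketch of the COPY approach, issued through the Redshift Data API; the cluster, database, table, and IAM role values are placeholders:

```python
# Sketch: COPY from a DynamoDB table into a Redshift table.
import boto3

rsd = boto3.client("redshift-data")
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",   # placeholder cluster
    Database="dev",
    DbUser="awsuser",
    Sql="""
        COPY events
        FROM 'dynamodb://events-table'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        READRATIO 50;
    """,
)
```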
29. Your application has to retrieve data from your users' mobile devices every 5 minutes, and the data is stored in DynamoDB. Every day at a particular time the data is extracted into S3 on a per-user basis, and your application is then used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
B. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
C. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
D. Write data directly into an Amazon Redshift cluster replacing both
Amazon DynamoDB and Amazon S3.
Answer C.
Explanation: Since the work requires the data to be extracted and analyzed repeatedly, one could simply raise the provisioned read throughput, but that is expensive. Using ElastiCache to cache the reads in memory instead reduces the provisioned read throughput needed on the DynamoDB table, and hence reduces cost without affecting performance.
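An illustrative read-through cache in the spirit of option C; the endpoint, table, and key names are hypothetical, and it assumes the redis-py package with an ElastiCache Redis endpoint:

```python
# Sketch: serve repeated reads from an in-memory cache so DynamoDB
# read capacity (and therefore cost) can be reduced.
import json
import boto3
import redis

cache = redis.Redis(host="my-cache.abc123.cache.amazonaws.com", port=6379)
table = boto3.resource("dynamodb").Table("SensorReadings")

def get_reading(user_id: str) -> dict:
    cached = cache.get(user_id)
    if cached:                               # cache hit: no read capacity spent
        return json.loads(cached)
    item = table.get_item(Key={"UserId": user_id}).get("Item", {})
    cache.setex(user_id, 300, json.dumps(item, default=str))  # 5-minute TTL
    return item
```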
30. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large
DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After
comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these
requirements? (Choose 2 answers)
A. Deploy ElastiCache in-memory cache running in each availability zone
B. Implement sharding to distribute load to multiple RDS MySQL instances
C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
D. Add an RDS MySQL read replica in each availability zone
Answer A,C.
Explanation: Because the site performs many small reads and writes, frequently read data should be cached in-memory with ElastiCache in each Availability Zone. And since read contention is occurring on RDS MySQL, increasing the instance size and implementing Provisioned IOPS raises the database's capacity and keeps its performance consistent.
31. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It
was noted that every month around 4GB of sensor data is generated. The company uses a load balanced auto scaled layer of
EC2 instances and a RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K
sensors which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the
following would you prefer?
A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
Answer C.
Explanation: A Redshift cluster is preferred because it is easy to scale and the work is done in parallel across the nodes, which suits a much bigger workload like this use case. The 100-sensor pilot generates 4 GB per month, so over 2 years that is roughly 96 GB; scaling from 100 sensors to 100K multiplies the volume by 1,000, giving approximately 96 TB. Hence option C, a 6-node Redshift cluster with 96 TB of storage, is the right answer.
Section 6: AWS Auto Scaling, AWS Load Balancer
32. Suppose you have an application where you have to render images and also do some general computing. Which of the following services will best fit your need?
A. Classic Load Balancer
B. Application Load Balancer
C. Both of them
D. None of these
Answer B.
Explanation: You would choose an Application Load Balancer, since it supports path-based routing: it can make routing decisions based on the URL, so image-rendering requests can be routed to one set of instances and general computing requests to another.
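A boto3 sketch of such a path-based rule; the listener and target group ARNs are placeholders:

```python
# Sketch: route /images/* to the rendering fleet; everything else
# falls through to the listener's default (general compute) action.
import boto3

elbv2 = boto3.client("elbv2")
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:acct:listener/app/my-alb/xxx/yyy",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/render/zzz",
    }],
)
```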
33. What is the difference between Scalability and Elasticity?
Ans: Scalability is the ability of a system to increase its hardware resources to handle an increase in demand. It can be done by increasing the hardware specifications or by increasing the number of processing nodes. Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (the same as scaling), but also to roll back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.

34. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling? Where will you change it from the following areas?
A. Auto Scaling policy configuration
B. Auto Scaling group
C. Auto Scaling tags configuration
D. Auto Scaling launch configuration
Answer D.
Explanation: The Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type, you have to modify the Auto Scaling launch configuration.
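Because launch configurations are immutable, the change is made by creating a new one and pointing the group at it; a hedged boto3 sketch with placeholder names and AMI ID:

```python
# Sketch: switch an Auto Scaling group's instance type by replacing
# its launch configuration. New instances will use the new type.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-tier-v2",
    ImageId="ami-0123456789abcdef0",         # placeholder AMI
    InstanceType="m5.xlarge",                # the new instance type
)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-tier-asg",
    LaunchConfigurationName="app-tier-v2",
)
```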
35. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which
option will reduce load on the Amazon EC2 instance?
A. Create a load balancer, and register the Amazon EC2 instance with it
B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin
C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
D. Create a launch configuration from the instance using the CreateLaunchConfigurationAction
Answer B.
Explanation: A CloudFront distribution caches the content at edge locations, so most requests are served from the cache instead of hitting the instance, which directly reduces its load. A load balancer with only this one instance registered (option A) does not reduce the load on it, and an Auto Scaling group or a launch configuration by itself (options C and D) does not distribute or offload any traffic.
36. When should I use a Classic Load Balancer and when should I use an Application load balancer?
Ans: A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for
microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on
the same EC2 instance.
37. What does Connection draining do?
A. Terminates instances which are not in use.
B. Re-routes traffic from instances which are to be updated or failed a health check.
C. Re-routes traffic from instances which have more workload to instances which have less workload.
D. Drains all the connections from an instance, with one click.
Answer B.
Explanation: Connection draining is an ELB feature. When an instance is to be taken out of service, whether because it failed a health check or because it is about to be patched with a software update, ELB stops sending new requests to that instance and re-routes them to other instances, while allowing in-flight requests to the draining instance to complete.
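Enabling it on a Classic Load Balancer is a one-call change; a sketch with a placeholder load balancer name:

```python
# Sketch: enable connection draining with a 300-second window
# for in-flight requests to finish.
import boto3

elb = boto3.client("elb")
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)
```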
38. When an instance is unhealthy, it is terminated and replaced with a new one. Which of the following services does that?
A. Sticky Sessions
B. Fault Tolerance
C. Connection Draining
D. Monitoring
Answer B.
Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances in the region. If all the instances in a region become unhealthy, and you have instances in another availability zone or region, your traffic is directed to them. Once your instances become healthy again, traffic is routed back to them.
39. What are lifecycle hooks used for in AutoScaling?
A. They are used to do health checks on instances
B. They are used to put an additional wait time to a scale in or scale out event.
C. They are used to shorten the wait time to a scale in or scale out event
D. None of these
Answer B.
Explanation: Lifecycle hooks are used to add a wait time before a lifecycle action, i.e., launching or terminating an instance, completes. The purpose of this wait time can be anything from extracting log files before terminating an instance to installing the necessary software on an instance before launching it.
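A boto3 sketch of the log-extraction case; the group and hook names are placeholders:

```python
# Sketch: pause terminating instances for up to an hour so logs
# can be pulled off before shutdown completes.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="app-tier-asg",
    LifecycleHookName="drain-logs-before-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,                   # wait time before proceeding
    DefaultResult="CONTINUE",                # proceed if nothing responds
)
```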
40. A user has setup an Auto Scaling group. Due to some issue the group has failed to launch a single instance for more than 24 hours.
What will happen to Auto Scaling in this condition?
A. Auto Scaling will keep trying to launch the instance for 72 hours
B. Auto Scaling will suspend the scaling process
C. Auto Scaling will start an instance in a separate region
D. The Auto Scaling group will be terminated automatically
Answer B.
Explanation: If Auto Scaling fails to launch an instance for more than 24 hours, it automatically suspends the scaling processes (an administrative suspension). Auto Scaling also allows you to manually suspend and then resume one or more of the processes in your Auto Scaling group, which is useful when you want to investigate a configuration problem or another issue with your web application and make changes without triggering the scaling process.
Section 7: CloudTrail, Route 53
41. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new
port and protocol, and then launched several new instances in the same
Security Group. The new rules apply:
A. Immediately to all instances in the security group.
B. Immediately to the new instances only.
C. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
D. To all instances, but it may take several minutes for old instances to see the changes.
Answer A.
Explanation: Any rule specified in an EC2 Security Group applies immediately to all instances in the group, irrespective of whether they were launched before or after the rule was added.
42. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not
need to be recreated in the second region? (Choose 2 answers)
A. Route 53 Record Sets
B. Elastic IP Addresses (EIP)
C. EC2 Key Pairs
D. Launch configurations
E. Security Groups
Answer A.
Explanation: Route 53 record sets are global assets and are valid across regions, so there is no need to recreate them in the second region.

43. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes, which of the
following options should he choose for his application?
A. Enable AWS CloudTrail for the load balancer.
B. Enable access logs on the load balancer.
C. Install the Amazon CloudWatch Logs agent on the load balancer.
D. Enable Amazon CloudWatch metrics on the load balancer.
Answer B.
Explanation: ELB access logs capture detailed information about each client connection, such as the client's IP address, request time, latencies, and backend response, and can be published to Amazon S3 at 5-minute or 60-minute intervals, which matches the requirement exactly. CloudTrail (option A) records API calls made against the load balancer, not client connections.
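A boto3 sketch of enabling 5-minute access logs on a Classic Load Balancer; the names are placeholders, and the bucket needs a policy permitting ELB log delivery:

```python
# Sketch: publish access logs to S3 every 5 minutes.
import boto3

elb = boto3.client("elb")
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",
            "S3BucketPrefix": "prod",
            "EmitInterval": 5,               # 5-minute publishing interval
        },
    },
)
```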
44. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their
internal security and access audits. Which of the following will meet the customer's requirement?
A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
B. Enable server access logging for all required Amazon S3 buckets.
C. Enable the Requester Pays option to track access via AWS Billing
D. Enable Amazon S3 event notifications for Put and Post.
Answer B.
Explanation: S3 server access logging records the requests made against a bucket, including the requester, the action, and the response status, which is exactly the information needed for internal security and access audits. CloudTrail is designed for logging and tracking API calls across AWS services, whereas per-request bucket access auditing is what server access logs are for.
45. Which of the following are true regarding AWS CloudTrail?
(Choose 2 answers)
A. CloudTrail is enabled globally
B. CloudTrail is enabled on a per-region and service basis
C. Logs can be delivered to a single Amazon S3 bucket for aggregation.
D. CloudTrail is enabled for all available services within a region.
Answer B,C.
Explanation: Cloudtrail is not enabled for all the services and is also not available for all the regions. Therefore option B is correct, also the logs
can be delivered to your S3 bucket, hence C is also correct.
46. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?
Ans: CloudTrail files are delivered according to the S3 bucket policy. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.
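For reference, creating a trail that delivers to a bucket looks roughly like this in boto3; the names are placeholders, and delivery only succeeds if the bucket policy allows CloudTrail to write:

```python
# Sketch: create a trail and start logging to an S3 bucket.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="account-trail",
    S3BucketName="my-cloudtrail-logs",   # must carry a CloudTrail bucket policy
)
cloudtrail.start_logging(Name="account-trail")
```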
47. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?
Ans: You will first need a list of the DNS record data for your domain name; it is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you have the DNS record data, you can use Route 53's management console or simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and then follow the transfer process. This includes steps such as updating the name servers for your domain name to the ones associated with your hosted zone. To complete the process, contact the registrar with whom you registered your domain name and follow its transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to be answered by Route 53.
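A small boto3 sketch of the hosted-zone step, printing the name servers you would hand to the registrar; the domain is a placeholder:

```python
# Sketch: create a hosted zone and read out its delegation set.
import time
import boto3

route53 = boto3.client("route53")
zone = route53.create_hosted_zone(
    Name="example.com",
    CallerReference=str(time.time()),    # any unique string
)
# These NS records go to the registrar to delegate the domain:
print(zone["DelegationSet"]["NameServers"])
```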
Section 8: AWS SQS, AWS SNS, AWS SES, AWS ElasticBeanstalk
48. Which of the following services would you not use to deploy an app?
A. Elastic Beanstalk
B. Lambda
C. Opsworks
D. CloudFormation
Answer B.
Explanation: Lambda is used for running serverless functions, i.e., code triggered by events, without you worrying about the computing resources running in the background. It is not designed for deploying full applications that are publicly accessed.
49. How does Elastic Beanstalk apply updates?
A. By having a duplicate ready with updates before swapping.
B. By updating on the instance while it is running
C. By taking the instance down in the maintenance window
D. Updates should be installed manually
Answer A.
Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before updating the original, and routes your traffic to the duplicate, so that in case your updated application fails, traffic can be switched back to the original instance and no downtime is experienced by the users of your application.
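The idea behind this answer is a blue/green swap; in boto3 the final traffic switch between two Beanstalk environments can be sketched as below, with placeholder environment names (actual Beanstalk managed updates may use other deployment policies):

```python
# Sketch: swap CNAMEs so traffic moves atomically from the current
# production environment to an updated clone.
import boto3

eb = boto3.client("elasticbeanstalk")
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",         # current production
    DestinationEnvironmentName="my-app-green",   # updated clone
)
```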
50. How is AWS Elastic Beanstalk different than AWS OpsWorks?
Ans: AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment; the application is ready to use without any infrastructure or resource configuration.
In contrast, AWS OpsWorks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.
51. What happens if my application stops responding to requests in Elastic Beanstalk?
Ans: AWS Elastic Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom health-check URL even though the infrastructure appears healthy; this is logged as an environment event (e.g., a bad version was deployed) so you can take appropriate action.
 
