
AWS cloud support engineer interview questions


AWS is an Amazon company with plenty of openings for fresh talent and an appetite for new ideas and innovation. Amazon Web Services, the cloud platform that has moved infrastructure from physical data centers onto the cloud, hires engineers in various capacities including cloud support associate, cloud support engineer, senior cloud support engineer, cloud architect, support manager and more. For a fresh graduate out of college this is a lucrative career option worth eyeing. Here we have proposed some interview questions that can help you prepare for an AWS interview, including AWS cloud support engineer interview questions. The questions overlap across the AWS cloud support associate, cloud support engineer and cloud architect roles, as all of these positions demand good knowledge, skill and expertise in Linux/UNIX operating systems and networking basics to start with.
Note that these are not actual interview questions. This is an aid prepared from an analysis of the AWS technology stack, current job openings and the role responsibilities AWS advertises on popular job sites, and it has nothing to do with the actual interviews.
1) Why should we consider AWS? How would you convince a customer to start using AWS?
The primary advantage is going to be cost savings. As a cloud support engineer your role involves talking to current and prospective customers to help them determine whether they should move to AWS from their current infrastructure. In addition to a convincing answer on cost savings, it helps to give them a simple explanation of flexibility, elastic capacity planning with pay-as-you-use pricing, the easy-to-manage AWS console, and so on.
2) What is your current job profile? How would you add value to the customer?
Though AWS is looking to hire fresh talent for cloud support engineer openings, candidates with work experience on the infrastructure side of the business, say as a system administrator, network administrator, database administrator, firewall administrator, security administrator or storage administrator, are also considered.
What they are looking for is overall infrastructure knowledge: some familiarity with the different parts of a technology stack, how they inter-operate, and what changes once the infrastructure lives in the cloud rather than in a physical data center.
If you don't have experience with AWS, don't worry. Draw on the ways you have solved customer support calls, both internal and external, to show how you can bring value to the table.
Have an overview of how the different components of an infrastructure interact.
AWS also wants to see a proactive attitude toward customer relationships. If you are going to discuss a project or an issue with a customer, it is better to come with preparatory work in hand rather than react on the spot. Value addition comes from recommending the best solution and the AWS services that help the customer make decisions easily and quickly.
3) Do you know networking?
You can come from many different backgrounds: development, infrastructure, QA, customer support, network administration, system administration, firewall administration and so on. Regardless, you should know networking. The cloud is network based, and networking knowledge is essential to fix the application issues that get escalated.
4) What networking commands do you make use of on daily basis to fix issues?
When we work with servers, physical or virtual, the first command that comes in handy to trace the path a request/response takes is traceroute. On Windows systems the equivalent command is tracert.
Some more important commands are ping, ipconfig and ifconfig, which deal with network connectivity, network addresses and interface configuration.
DNS commands – nslookup, and on Linux systems a look at the /etc/resolv.conf file to get details on the configured DNS servers
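The /etc/resolv.conf lookup mentioned above can be sketched offline. Here is a minimal example with made-up sample contents (not taken from any real host):

```shell
# Write a sample resolv.conf to a temp file (illustrative data only).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# Generated by NetworkManager
search example.internal
nameserver 10.0.0.2
nameserver 10.0.0.3
EOF

# Extract just the nameserver addresses, as you would with the real file:
#   awk '/^nameserver/ {print $2}' /etc/resolv.conf
dns_servers=$(awk '/^nameserver/ {print $2}' "$tmp")
echo "$dns_servers"   # prints 10.0.0.2 and 10.0.0.3

rm -f "$tmp"
```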
5) What is the advantage of using TCP protocol?
TCP is used to exchange data reliably. It uses mechanisms of sequencing and acknowledgment, error detection and error recovery. This yields reliable applications, but at the cost of longer transmission time.
6) What is UDP?
User Datagram Protocol (UDP) is a connectionless protocol that can be used for fast, efficient applications that need less overhead than TCP
7) Do you know how the internet works in your environment?
This can be your home or office. Learn more about the modem/router and its role in establishing the connection.
8) What is a process? How do you manage processes in Linux:-
In Linux/UNIX based OSes a process is started or created when a command is issued. In simple terms, while a program is running an instance of the program is created; this is the process. To manage processes in Linux, the process management commands come in handy:
ps – this is the commonly used process management command to start with. ps command provides details on currently running active processes
top – This command shows processor activity in real time. ps lists a snapshot of active processes, whereas top continuously displays the running processes along with the CPU and memory they use
kill – To kill a process using its process id, the kill command is used. ps provides the process id. To kill a process, issue:
kill pid
killall proc – Related to kill, but targets processes by name: all processes named proc are killed
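The ps/kill workflow above can be sketched end to end with a throwaway background process:

```shell
# Start a background process we can safely manage.
sleep 300 &
pid=$!

# ps lists active processes; -p restricts the listing to one PID.
ps -p "$pid"

# kill sends SIGTERM by default; the PID comes from ps (or from $! here).
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so no zombie is left behind

# Confirm it is gone: ps -p now finds nothing.
running=$(ps -p "$pid" > /dev/null 2>&1 && echo yes || echo no)
echo "$running"   # prints: no
```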
9) Give details on foreground and background jobs command in Linux:-
fg – this command brings the most recent job to the foreground. Typing fg resumes the most recently suspended job
fg n – this command brings job n to the foreground; for example, fg 1 brings job number 1 (as listed by the jobs command) to the foreground
bg – this command resumes a suspended job in the background without bringing it to the foreground. To see the list of stopped jobs and current background jobs, use the jobs command
10) How to get details on current date and time in Linux?
Make use of the date command, which shows the current date and time. To get the current month's calendar use the cal command.
uptime – shows how long the system has been up
11) What is difference between command df and du?
In Linux both df and du are space related commands showing file system space information:
df – reports disk space usage and availability per mounted file system
du – reports the space consumed by files and directories
free – shows details on memory and swap usage
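A quick sketch of df and du side by side (the file size is arbitrary; free, not shown, reports memory and swap):

```shell
# df reports free/used space per mounted file system (here for /, in KB).
df -k /

# du reports the space a directory tree consumes.
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/blob" bs=1024 count=64 2>/dev/null  # ~64 KB file
usage_kb=$(du -sk "$workdir" | awk '{print $1}')
echo "${usage_kb} KB used by $workdir"

rm -rf "$workdir"
```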
12) What are the different commands and options to compress files in Linux?
Let's start by creating a tar archive named test.tar containing the needed files:
tar cf test.tar files
Once the tar is available and uploaded to AWS, there may be a need to untar the files. Use the command as follows:
tar xf file.tar
We can create a tar with gzip compression, which minimizes the size of the files to be transferred; this creates test.tar.gz:
tar czf test.tar.gz files
To extract the gzipped tar compressed files use the command:
tar xzf test.tar.gz
Bzip2 compression can be used to create a tar as follows:
tar cjf test.tar.bz2 files
To extract bzip2 compressed files use
tar xjf test.tar.bz2
To simply make use of gzip compression use
gzip testfile – This creates testfile.gz
To decompress testfile.gz use gzip -d testfile.gz
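A full round trip of the gzip-compressed tar workflow above (file names and contents are illustrative):

```shell
# Create a scratch area with a couple of files to archive.
src=$(mktemp -d); dst=$(mktemp -d)
echo "alpha" > "$src/a.txt"
echo "beta"  > "$src/b.txt"

# c = create, z = gzip compression, f = archive file name
tar czf "$src/test.tar.gz" -C "$src" a.txt b.txt

# x = extract; -C extracts into a different directory
tar xzf "$src/test.tar.gz" -C "$dst"

cat "$dst/a.txt"   # prints: alpha
```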
13) Give examples on some common networking commands you have made use of?
Note that the AWS stack is primarily Linux based, and its cloud architecture makes it heavily network dependent. As a result, AWS interview questions may be networking related irrespective of your system admin, database admin or bigdata admin background. Learn these simple networking commands:
When a system is unreachable first step is to ping the host and make sure it is up and running
ping host – This pings the host and output results
Domain related commands matter, as AWS has become the preferred hosting for major internet based companies and SaaS firms:
To get DNS information of the domain use – dig domain
To get whois information on domain use – whois domain
Host reverse lookup – dig -x host
Download file – wget file
To continue stopped download – wget -c file


14) What is your understanding of SSH?
SSH, the secure shell, is widely used for safe communication. It is a cryptographic network protocol for operating network services securely over an unsecured network. Some of the commonly used ssh commands include:
To connect to a host as a specified user using ssh use this command:
ssh username@hostname
To connect to a host on a specified port make use of this command
ssh -p portnumber username@hostname
To enable a keyed or passwordless login into specified host using ssh use
ssh-copy-id username@hostname
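ssh and ssh-copy-id need a reachable remote host, but the key pair that makes passwordless login possible can be generated locally. A sketch (the key path is illustrative):

```shell
# Generate a throwaway ed25519 key pair with no passphrase (-N "").
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/id_demo" -q

# The private key stays local; ssh-copy-id would append the .pub file
# to ~/.ssh/authorized_keys on the remote host:
#   ssh-copy-id -i "$keydir/id_demo.pub" username@hostname
ls "$keydir"
```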
15) How do you perform search in Linux environment?
Searching and pattern matching are some common functions that typically happens in Linux environment. Here are the Linux commands:
grep – Grep command is the first and foremost when it comes to searching for files with pattern. Here is the usage:
grep pattern_match test_file – This will search for pattern_match in test_file
To search for a pattern in a directory containing a set of files, use the recursive option – grep -r pattern dir – searches for the pattern in the directory recursively
A pattern can also be matched against the output of another command, i.e. a command's output is piped into grep – first_command | grep pattern
To find all instances of a file use locate command – locate file
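A small self-contained sketch of these grep usages (the log lines are made up):

```shell
# Create a small file to search (contents are illustrative).
f=$(mktemp)
printf 'error: disk full\ninfo: all good\nerror: timeout\n' > "$f"

# Plain search: print lines containing the pattern.
grep 'error' "$f"

# Count matches in the file.
count=$(grep -c 'error' "$f")
echo "$count"   # prints: 2

# Search a command's output via a pipe; the [s] trick stops grep
# from matching its own entry in the process list.
ps aux | grep '[s]sh' || true

rm -f "$f"
```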
16) Give details on some user related commands in Linux:-
Here are some user related Linux commands:
w – displays details on who is online
whoami – to know whom you are logged in as
finger user – displays information about the user
17) How to get details on kernel information in Linux?
uname -a command provides details on kernel information
18) How to get CPU and memory info in Linux machine?
Issue the following commands:
cat /proc/cpuinfo for cpu information
cat /proc/meminfo for memory information
19) What are the file system hierarchy related commands in linux?
The file system hierarchy, starting with raw disks, the way disks are formatted into file systems, and the way files are grouped into directories, is important for cracking an AWS interview. Here are some file system related commands that come in handy:
touch filename – creates a file named filename. This command can also be used to update an existing file's timestamp
ls – lists the files and directories
ls -al – All files including hidden files are listed with proper formatting
cd dir – change to specified directory
cd – Changes to home directory
pwd – the present working directory command; it prints the current directory
Make a new directory using mkdir command as follows – mkdir directory_name
Remove a file using the rm command – rm file – removes the file
To delete a directory use the -r option – rm -r directory_name
Remove a file forcefully using the -f option – rm -f filename
To force-remove a directory use – rm -rf directory_name
Copy one file to another – cp file1 file2
To copy a directory use – cp -r dir1 new_dir – if new_dir does not exist, cp creates it
Move or rename a file using the mv command – mv file1 new_file
If new_file is an existing directory, file1 is moved into that directory
more filename – outputs the contents of the file one screen at a time
head file – output the first 10 lines of the file
tail file – output the last 10 lines of the file
tail -f filename – output the contents of the file as it grows, to start with display last 10 lines
Create symbolic link to a file using ln command – ln -s file link – called soft link
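Several of the commands above can be exercised together in a throwaway directory; a sketch:

```shell
# All commands below operate inside a throwaway directory.
base=$(mktemp -d); cd "$base"

mkdir dir1                 # make a directory
touch dir1/file1           # create an empty file
cp -r dir1 dir2            # recursive copy; dir2 is created by cp
mv dir1/file1 dir1/file2   # rename a file
ls dir1 dir2               # list both directories

rm -r dir2                 # remove a directory tree
rm -f dir1/file2           # force-remove a file
```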
20) What command is used for displaying manual of a command?
Make use of the command man command_name
21) Give details on app related commands in linux:-
which app – shows details on which app will be run by default
whereis app – shows possible locations of application
22) What are the default port numbers of http and https?
Knowing the http and https port numbers is the first step in launching a webapp when a customer reports an issue.
The default port number of http is 80 (8080 is a common alternative)
Default port number of https is 443
23) What is use of load balancer?
A load balancer is used to increase the capacity and reliability of applications, where capacity means the number of users connecting to the application. The load balancer distributes network and application traffic across many different servers, increasing application capacity
24) What is sysprep tool?
The System Preparation tool comes as a free tool with Windows and can be accessed from the %systemroot%\system32\sysprep folder. It is used to duplicate, test and deliver new installations of Windows based on an established installation
25) User is not able to RDP into server. What could be the reason?
A probable reason is that the user is not part of the Remote Desktop Users local group of the terminal servers
26) How would you approach a customer issue?
Most of the work of an AWS support engineer involves dealing with customer issues. As with any other support engineer, an AWS engineer should question the customer, listen to them, and confirm what has been collected. This is called the QLC approach, a much needed step to capture the issue description and confirm it
27) What types of questions can you ask customer?
A support engineer can ask two types of questions
1) Open ended questions – your question is a single statement, and the answer you expect from the customer is detailed
2) Closed questions – your question has a yes or no, true or false, or in some cases single word answer
28) How do you consider customer from AWS technology perspective?
Even if the customer is a long-standing AWS customer, always approach them as you would a person with no knowledge of AWS: talk more with them and explain more details, so that you arrive at a correct issue description
29) Give details on operators in linux?
> – the greater-than symbol is the output redirection operator, used to write the output of a command into a file; typically this redirects command output into a logfile. If the file already exists its contents are overwritten, so only the most recent content is retained
>> – same as output redirection, except that it appends to the file if the file already exists
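A short sketch showing the difference between the two operators:

```shell
log=$(mktemp)

echo "first run"  >  "$log"   # > truncates: file now has exactly this line
echo "second run" >  "$log"   # overwrites again; "first run" is gone
echo "appended"   >> "$log"   # >> appends without truncating

cat "$log"
lines=$(wc -l < "$log")
echo "$lines"   # prints: 2
```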
30) Explain difference between hardlink and softlink in simple terms?
A hard link is a link to the inode, which holds the file contents; a soft link is a link to the file name. If the original file is removed or renamed, a soft link breaks while a hard link still resolves to the contents. The ln command is used for both: plain ln creates a hard link, and ln -s creates a soft link
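The difference is easy to demonstrate; a sketch in a throwaway directory:

```shell
d=$(mktemp -d); cd "$d"
echo "payload" > original.txt

ln original.txt hard.txt     # hard link: a second name for the same inode
ln -s original.txt soft.txt  # soft link: a pointer to the path name

# The hard link survives removal of the original; the soft link dangles.
rm original.txt
cat hard.txt        # prints: payload
cat soft.txt 2>/dev/null || echo "dangling"   # prints: dangling
```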
31) What are some common linux commands AWS engineer should be aware of?
1) cat – This is plain simple command to access a file in UNIX
2) ls – Provides details on list of files and directories
3) ps – The process command provides details on list of processes in the system
4) vmstat – Virtual memory statistics comes handy during performance tuning
5) iostat – Command to determine I/O issues
6) top – This command provides details on top resource consuming processes
7) sar – This is a UNIX utility mainly used for tuning purpose
8) rm – This command is used to remove files
9) mv – moving the files and directories
cd – Enables us to change directories
date – gives us the time and date
echo – we can display text on our screen
grep – the pattern matching command. It enables us to see whether a certain word or set of words occurs in a file or in the output of another command
history – gives us the commands entered previously by us or by other users
passwd – this command enables us to change our password
pwd – to find out our present working directory or to simply confirm our current location in the file system
uname – gives all details of the system when used with options. We get details including systemname,kernel version etc.
whereis – gives us exact location of the executable file for the utility in the question
which – the command enables us to find out which version(of possibly multiple versions)of the command the shell is using
who – this command provides us with a list of all the users currently logged into the system
whoami – this command indicates who you are logged in as. If a user logs in as userA and does an su to userB, whoami displays userB, whereas who am i still displays userA
man – this command will display a great detail of information about the command in the question
find – this command gives us the location of the file in a given path
more – this command shows the contents of a file,one screen at a time
ps – this command gives the list of all processes currently running on our system
cat – this command lets us to read a file
vi – this is referred to as text editor that enables us to read a file and write to it
emacs- this is a text editor that enables us to read a file and write to it
gedit – this editor enables us to read a file and write to it
diff – compares two files, returns the lines that are different, and tells us how to make the files the same
export – makes a variable's value available to child processes by exporting the variable. This command is valid in bash and ksh
setenv – the equivalent of export, used in csh/tcsh
env – displays the set of environment variables at the prompt
echo $variablename – displays the current value of the variable
source – whenever an environment variable is changed in a startup file, the source command puts the changes into immediate effect. It is used in csh/tcsh (bash supports it as well)
.profile – in ksh/bash, use the command . .profile to get the same result as using source
set noclobber – avoids accidental overwriting of an existing file when we redirect output to a file. It is a good idea to include this command in a shell startup file such as .cshrc
32) What are the considerations while creating username/user logins for Security Administration purpose?
It is a good practice to follow certain rules while creating usernames/user logins
1) User name/user login must be unique
2) User name/user login must contain a combination of 2 to 32 letters, numerals, underscores(_),hyphens(-), periods(.)
3) There should not be any spaces or tab characters in user names/user logins
4) A user name must begin with a letter and must have at least one lowercase letter
5) A user name must be between three and eight characters long
6) It is a best practice to have alphanumeric user names/user logins. It can be a combination of lower case letters, upper case letters, numerals, punctuation
33) Give details on /etc/profile the system profile file and its usage in linux environment:-
This is another important UNIX system administration file, with much to do with user administration. /etc/profile is the system profile file; it is run when we first log into the system. After this the user profile file is run. The user profile file is where we define the user's environment details. The different forms of user profile files are:
.profile
.bash_profile
.login
.cshrc
/home/username is the default home directory. The user's profile file resides in the user's home directory.
34) How to perform core file configuration in Linux environment?
Let's consider a UNIX flavor, say Solaris. Core file configuration involves the following steps:
1) As a root user, use the coreadm command to display the current coreadm configuration :
# coreadm
2) As a root user, issue the following command to change the core file setup :
# coreadm -i /cores/core_new.%n.%f
3) Run the coreadm command again to verify that the changes have been made permanent:
# coreadm
The output line "init core file pattern :" will reflect the new changes made to the core file configuration.
From Solaris 10 onwards, the coreadm process is configured by the Service Management Facility (SMF) at system boot time. We can use the svcs command to check the status. The service name for the coreadm process is:
svc:/system/coreadm:default
35) How do you configure or help with customer printer configuration?
Once the print server and print client are set up, we may need to perform the following administrative tasks frequently:
1) Check the status of printers
2) Restart the print scheduler
3) Delete remote printer access
4) Delete a printer
36) How is zombie process recognized in linux and its flavors? How do you handle zombie process in linux environment?
A zombie process in UNIX/Linux/Sun Solaris/IBM AIX is recognized by the state Z. It doesn't use CPU resources, but it still occupies space in the process table: it is a dead process whose parent did not clean up (reap) after it.
Zombies are defunct processes that are automatically removed when the system reboots.
Keeping the OS and applications up to date with the latest patches helps prevent zombie processes.
Properly using the wait() call in the parent process prevents zombie processes.
SIGCHLD is the signal sent to the parent when a child terminates; on receiving it the parent reaps the child with wait() (proper termination).
kill -18 PID – on Solaris, sends SIGCHLD (signal 18) to the parent process, prompting it to reap its zombie children
37) What is the use of /etc/ftpd/ftpusers in Linux?
/etc/ftpd/ftpusers is used to restrict which users can use FTP (File Transfer Protocol). FTP is a security threat because the password is not encrypted during the transfer. FTP must not be used by sensitive user accounts such as root, snmp, uucp, bin, admin (default system user accounts).
As a security measure, the file /etc/ftpd/ftpusers is created by default. The users listed in this file are not allowed to do FTP. The FTP server in.ftpd reads this file before allowing users to perform FTP. To restrict a user from doing FTP, include their name in this file.
38) Have you ever helped a customer restore a root file system in their environment?
Here are the steps we need to follow to restore the root (/) file system on SPARC and x86 (Intel) machines:
1) Log in as the root user. It is a security practice to log in as a normal user and perform an su to take on the root (super user) role.
2) Appearance of # prompt is an indication that the user is root
3) Use who -a command to get information about current user
4) When the root file system (/) is lost because of disk failure, boot from CD or from the network.
5) Add a new system disk to the system on which we want to restore the root (/) file system
6) Create a file system using the command :
newfs /dev/rdsk/partitionname
7) Check the new file system with the fsck command :
fsck /dev/rdsk/partitionname
8) Mount the filesystem on a temporary mount point :
mount /dev/dsk/devicename /mnt
9) Change to the mount directory :
cd /mnt
10) Write protect the tape so that we can’t accidentally overwrite it. This is an optional but important step
11) Restore the root file system (/) by loading the first volume of the appropriate dump level tape into the tape drive. The appropriate dump level is the lowest dump level of all the tapes that need to be restored. Use the following command :
ufsrestore -rf /dev/rmt/n
12) Remove the tape and repeat the step 11 if there is more than one tape for the same level
13) Repeat steps 11 and 12 with the next dump levels. Always begin with the lowest dump level and end with the highest dump level tape
14) Verify that file system has been restored :
ls
15) Delete the restoresymtable file which is created and used by the ufsrestore utility :
rm restoresymtable
16) Change to the root directory (/) and unmount the newly restored file system
cd /
umount /mnt
17) Check the newly restored file system for consistency :
fsck /dev/rdsk/devicename
18) Create the boot blocks to restore the root file system :
installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/devicename — SPARC system
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/devicename — x86 system
19) Remove the last backup tape, and insert a new tape onto which we can write. Make a dump level 0 backup of the newly restored system by issuing the following command :
ufsdump 0ucf /dev/rmt/n /dev/rdsk/deviceName
This step is needed because ufsrestore repositions the files and changes the inode allocations – the old backup will not truly represent the newly restored file system
20) Reboot the system :
#reboot (or)
# init 6
System gets rebooted and newly restored file systems are ready to be used.
39) What is the monitoring and reporting tool that comes as part of the AWS console?
Cloudwatch, the tool listed under the management section of the AWS console, helps with monitoring and reporting metrics in an AWS environment. The following metrics can be monitored as part of Cloudwatch:
1) CPU
2) Disk utilization
3) Network
4) Status Check
In addition to the above metrics, RAM can be monitored as a custom metric in Cloudwatch
40) Give details on status checks in Cloudwatch?
In an AWS environment the status of both the instance and the system needs to be monitored. Hence there are system status check and instance status check sections associated with each EC2 instance. As the name implies, the system status check makes sure the physical machine hosting the instance is in good shape. The instance status check is at the level of the EC2 instance itself, which translates to the virtual machine in an AWS environment
41) What happens if a failure is reported in the status check section of AWS?
Depending on what type of failure has been reported, the following actions can be taken:
System failure – Restart the virtual machine; in AWS terms, restart the EC2 instance. This automatically brings the virtual machine up on issue-free physical hardware
Instance failure – Depending on the type of failure reported, stopping and starting the EC2 instance may fix the issue. In case of disk failure, appropriate action can be taken at the operating system level
42) What is an EC2 instance in AWS?
This is the basic component of AWS infrastructure. EC2 stands for Elastic Compute Cloud. In practice it is a pre-built virtual machine template hosted in AWS that can be chosen and customized to fit the application's needs.
This is the prime AWS service that eliminates a business's need to own a data center to maintain its servers and hosts
43) What is an ephemeral storage?
Ephemeral storage is storage that is temporary (or non-persistent)
44) What is the difference between instance and system status check in Cloudwatch?
An instance status check checks the EC2 instance in an AWS environment, whereas a system status check checks the underlying host
45) What is the meaning of an EBS volume status check warning?
The EBS volume is degraded or severely degraded. Hence a warning in an EBS environment is something that can't be ignored, unlike on some other systems
46) What is the use of the ReplicaLag metric in AWS?
ReplicaLag is a metric used to monitor the lag between the primary RDS instance (the Relational Database Service, the database equivalent in an AWS environment) and the read replica, the secondary database that runs in read-only mode
47) What is the minimum granularity level that Cloudwatch can monitor?
The minimum granularity that Cloudwatch can monitor is 1 minute. In most real-world cases, 5 minute metric monitoring is configured
48) What is the meaning of EBS volume impaired?
EBS volume impaired means that the volume is stalled or not available
49) Where is ELB latency reported?
The latency reported by the Elastic Load Balancer (ELB) is available in Cloudwatch
50) What is included in the EC2 instance launch log?
Once the EC2 instance is created, configured and launched, the following details are recorded in the instance launch log:
Creating security groups – the result needs to be Successful; in case of issues the status will be different
Authorizing inbound rules – for proper authorization this should show Successful
Initiating launches – again, this has to be Successful
At the end we see a message that says Launch initiation complete
51) What will happen once an EC2 instance is launched?
After the EC2 instance has been launched it will be in the running state. Once an instance is running it is ready for use. At this point usage hours, the typically billable resource usage, start accruing, and they keep accruing until we stop or terminate the instance. The next immediate step is to view the instance
52) What is maximum segment size (MSS)? How is this relevant to AWS?
The maximum segment size is the key factor that determines the size of an unfragmented data segment. AWS is cloud based and its hosted products are accessed over the internet. For data segments to pass through every router in transit, their size must be acceptable across the routers; if they grow too big, the data segments get fragmented, which eventually leads to network slowness
53) How does a load balancer check the EC2 instance availability in an AWS environment?
Periodically the load balancer sends pings, attempts connections and sends requests to the EC2 instances to check their availability. These tests are referred to as health checks in an AWS environment
54) Give details on health check and status of instances in an AWS environment :-
In an AWS environment to check the status of EC2 instances the load balancer periodically sends pings, attempts connection, sends requests to EC2 instances. This process is referred to as health check in an AWS environment
If an EC2 instance is healthy and functioning normal at the time of health check the status will be InService
If an instance does not respond back this is unhealthy and its status will be OutOfService
55) What are the instances that are candidates to be part of health check?
If an instance is registered with a load balancer this is a candidate under health check process in AWS. This covers instances that are in both healthy and unhealthy statuses which are typically InService and OutOfService respectively
56) What happens when an instance in an AWS environment has been found to be in an unhealthy state?
The load balancer stops routing requests to unhealthy instances. Once the instance's health is restored to healthy status, requests are routed to it again
57) What is IPSec?
IPSec refers to Internet Protocol Security, used to exchange data securely over a public network so that no one can view or read it except the intended parties. IPSec makes use of two mechanisms that work together to exchange data securely over public networks. Neither mechanism is mandatory; we can use just one or both together. The two mechanisms of IPSec are:
Authentication header – Used to digitally sign the entire contents of each packet that protects against tampering, spoofing, replay attacks. The major disadvantage of authentication header is that though this protects data packets against tampering the data is still visible to hackers. To overcome this ESP can be used
Encapsulating Security Payload – ESP provides authentication, replay-proofing and integrity checking by making use of 3 components namely ESP header, ESP trailer, ESP authentication block
58) What are the different types of IPSec modes?
Tunnel mode and transport mode are the two modes in which IPSec can be configured to operate. Tunnel mode is the default and is used for communication between gateways such as routers and ASA firewalls, or from an end-station to a gateway. Transport mode is used for end-to-end communication between a client and a server, or between a workstation and a gateway, such as a telnet connection or a remote desktop connection between a workstation and a server over VPN
41) In a Class B network, give the relationship between the network prefix (/count) and the number of hosts possible:
/count  No of hosts possible
/16     65536
/17     32768
/18     16384
/19     8192
/20     4096
/21     2048
/22     1024
/23     512
/24     256
/25     128
/26     64
/27     32
/28     16
/29     8
/30     4
(Each /n prefix leaves 32-n host bits, giving 2^(32-n) addresses; the usable host count is 2 fewer once the network and broadcast addresses are excluded)
42) In a Class C network, give the relationship between the network prefix (/count) and the number of hosts possible:
/count  No of hosts possible
/24     256
/25     128
/26     64
/27     32
/28     16
/29     8
/30     4
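The counts in the two tables above follow a single rule: a /n prefix leaves 32-n host bits, so the subnet spans 2^(32-n) addresses. A minimal sketch:

```python
# Number of addresses in an IPv4 subnet: a /n prefix leaves 32 - n host
# bits, so the subnet spans 2 ** (32 - n) addresses.
def addresses_for_prefix(prefix: int) -> int:
    if not 0 <= prefix <= 32:
        raise ValueError("prefix must be between 0 and 32")
    return 2 ** (32 - prefix)

def usable_hosts(prefix: int) -> int:
    # Exclude the network and broadcast addresses.
    return max(addresses_for_prefix(prefix) - 2, 0)

# Reproduce a few rows from the tables above.
assert addresses_for_prefix(16) == 65536   # Class B default mask
assert addresses_for_prefix(24) == 256     # Class C default mask
assert addresses_for_prefix(30) == 4
```

Note that the tables list total addresses; subtract 2 for usable hosts, as `usable_hosts` does.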
43) You are a DBA and have been assigned the task of migrating an Oracle database to AWS with minimal to no impact on the source database. How will you achieve this?
Make use of the Database Migration Service (DMS). This helps you migrate databases securely and easily, and enables live migration of data, keeping the source database up and running during the migration
44) Which AWS service will you make use of to monitor CPU utilization of an EC2 resource in AWS environment?
AWS CloudWatch is a monitoring service that can also be used for management in an AWS environment. It provides data and insights, such as the EC2 CPUUtilization metric, to monitor system performance and optimize resource utilization
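As a sketch of what such a query looks like, the dict below holds the request parameters for CloudWatch's GetMetricStatistics API; with boto3 (the AWS SDK for Python) it would be passed as `boto3.client("cloudwatch").get_metric_statistics(**params)`. The instance ID is a placeholder.

```python
from datetime import datetime, timedelta, timezone

# Request parameters for CloudWatch's GetMetricStatistics API.
# "i-0123456789abcdef0" is a placeholder instance ID.
now = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "StartTime": now - timedelta(hours=1),   # last hour of data
    "EndTime": now,
    "Period": 300,                           # one data point per 5 minutes
    "Statistics": ["Average"],
}
```

Running the actual call requires AWS credentials and a real instance ID; the shape of the request is the point here.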
45) Give details on some AWS terminologies you need to be aware of as a support engineer :-
Here are some common terminologies that you will come across in your daily job
EC2 instance – how a virtual machine is referred to in an AWS environment
Region – a physical geographical location that hosts AWS datacenters. The list of regions keeps expanding with the growth of AWS
RDS – the database service, commonly called Relational Database Service
S3 – the object storage service from AWS
EBS – Elastic Block Store, another storage option from AWS
Availability Zone – commonly referred to as AZ
Virtual Private Cloud – commonly called VPC, a datacenter in virtual form within AWS
46) What is the use of Wireshark?
Wireshark is an open-source packet analyzer commonly used to monitor network traffic coming in and out of the servers hosting applications. At times it is used to monitor the system and make sure there are no security threats

AWS certified solutions architect associate level dumps

1) Your company sells both audio and video files. Which AWS service helps with converting your media files into different formats?
a) SQS
b) SNS
c) Elastic transcoder
d) Appstream
Answer: c
2) You are done with code development. As a next step you upload your code onto AWS. Which AWS service automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring while you retain full control over the AWS resources powering your application and can access the underlying resources at any time?
a) SNS
b) Cloudtrail
c) Elastic Beanstalk
d) EMR
Answer: c
3) Your project makes use of chef configuration management in AWS. You want to leverage existing Chef recipes in AWS. Which AWS service will you make use of?
a) AWS Devworks
b) AWs Sysworks
c) AWS Opsworks
d) AWS Chefworks
Answer: c
4) You want to record logs of all AWS API calls. Which AWS service will you make use of for this purpose?
a) AWS CloudTrail
b) AWS EMR
c) AWS S3
d) AWS EC2
Answer: a
5) You are asked to deploy entire cloud environments via a JSON script. Which AWS service can you make use of?
a) CloudFormation
b) Cloudtrail
c) Cloudwatch
d) EBS
Answer: a
Explanation : CloudFormation is an automated provisioning engine designed to deploy entire cloud environments via templates described in JSON
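To make the idea concrete, here is a minimal CloudFormation template sketched as a Python dict and serialized to the JSON that would be handed to the service; "DemoBucket" is just an illustrative logical resource name.

```python
import json

# A minimal CloudFormation template: one S3 bucket, no parameters.
# "DemoBucket" is an illustrative logical resource name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"},
    },
}
body = json.dumps(template, indent=2)  # the JSON given to CloudFormation
```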
6) Your firm is undergoing its annual security audit. Your auditor is asking for logs showing who provisioned which resources on your AWS platform. Which AWS service will you make use of to collect these logs?
a) Cloudtrail
b) Cloudwatch
c) Cloudscape
d) Cloudxx
Answer: a
7) Which service is the global content delivery network CDN from AWS?
a) Cloudwatch
b) Cloudtrail
c) Cloudfront
d) Cloudformation
Answer: c
8) Do you know the different levels of support plans available in AWS?
a) Basic, developer, business, enterprise
b) developer, business
c) developer, enterprise
d) basic, developer, enterprise
Answer: a
9) Which is an AWS configuration management service that uses Chef ?
a) AWS OpsWorks
b) AWS Sysworks
c) AWS Chefworks
d) None
Answer: a
10) Which AWS service would best suit your need of configuration management service to allow your system administrators to configure and operate your web applications using Chef?
a) AWS OpsWorks
b) AWS Sysworks
c) AWS Chefworks
d) None
Answer: a
11) Is it true that amazon will not have root level SSH access to your EC2 instances?
a) Yes
b) No
Answer : a
12) How will you get details on API calls made to an elastic load balancer in an AWS environment for auditing purposes?
a) Enable cloud trail on the ELB
b) Enable cloud trail on the EC2 instance
c) Enable cloud watch on the ELB
d) Enable cloud audit on ELB
Answer : a
13) Does amazon S3 support website redirects?
a) Yes
b) No
answer : a
14) Why is the cluster placement group always designed to exist within 1 availability zone?
a) high latency
b) low latency
c) high availability
d) low availability
Answer : b
15) You turned on cloudtrail in AWS management console. Which AWS service logs can be accessed from cloud trail?
a) EC2
b) VPC
c) EBS
d) None of the above
Answer : a,b,c
16) You have a requirement to migrate objects stored in S3 onto another S3 storage class based on the age of the data. Which S3 feature will you make use of to achieve this?
a) Reduced redundancy
b) Frequent Access
c) Glacier
d) Lifecycle Management
Answer : d
17) Do classic ELB’s support IPV4 and IPv6?
a) Yes
b) No
Answer : a
18) You are looking for a hosting solution that is inexpensive, highly available, and scales automatically as website traffic grows. The website hosts static content. Which AWS service can you recommend as a solution to your customer?
a) EC2 with EBS attached to it, including an autoscaling group with a minimum configuration of 1
b) EC2 with EBS attached to it, including an autoscaling group with a minimum configuration of 4
c) S3 the object storage solution from AWS
d) Cloudwatch
Answer : c
Explanation : S3 is used to store static content and is the cheapest option from AWS
19) Does amazon S3 and S3 standard-IA (Infrequent Access) offer same latency and throughput performance?
a) Yes
b) No
Answer : a
Explanation : Amazon S3 Standard-IA offers the same high durability, throughput, and low latency as Amazon S3 Standard
20) What is the advantage of amazon S3 standard-IA over S3?
a) low per GB storage price
b) low per GB retrieval fee
c) high per GB storage price
d) high per GB retrieval fee
Answer : a,b
21) At what minimum time interval granularity does Amazon CloudWatch receive and aggregate data?
a) 30 minutes
b) 30 seconds
c) 1 second
d) 1 minute
Answer : d
22) What is the minimum time interval for CloudWatch?
a) 1 hour
b) 1 second
c) 1 minute
d) 30 minute
Answer : c
23) You have implemented multipart uploads on a client project. You notice that incomplete parts of objects are still being stored in S3 because some uploads did not complete successfully. These useless parts incur additional storage charges. Which Amazon S3 feature will you make use of to expire incomplete multipart uploads?
a) S3 bucket policy
b) Redundancy
c) S3 lifecycle policy
d) Cloud watch with alerts from data pipeline
Answer : c
Explanation : S3 lifecycle policies can be created to expire incomplete multipart uploads. This saves cost by limiting how long the parts of non-completed multipart uploads are stored
24) What is durability percentage of amazon S3 Infrequent Access IA?
a) 99.999999999%
b) 99.999999%
c) 99.999%
d) 99.9%
Answer : a
25) Does amazon glacier and amazon S3 standard offer same durability as S3 standard IA?
a) Yes
b) No
Answer : a
Explanation : 99.999999999% is durability offered by amazon S3 standard, S3-IA, glacier
26) In amazon S3 you want to define the lifecycle of your object with a predefined policy to reduce cost of storage. How can you accomplish that?
a) S3 lifecycle management
b) S3 cloudwatch
c) S3 EC2
d) S3 datapipeline
Answer : a
27) You have to migrate amazon S3 objects to standard-IA. How can you accomplish that?
a) S3 lifecycle transition policy
b) cloudwatch
c) datapipeline
d) none of the above
Answer : a
28) You have to migrate amazon S3 objects to Amazon Glacier. How can you accomplish that?
a) S3 lifecycle transition policy
b) cloudwatch
c) datapipeline
d) none of the above
Answer : a
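The transitions asked about in the last two questions can be expressed in a single lifecycle rule, again in the PutBucketLifecycleConfiguration shape; the prefix and day counts below are illustrative choices.

```python
# Lifecycle rule transitioning objects to Standard-IA after 30 days and
# to Glacier after 90 days. The "logs/" prefix and the day counts are
# illustrative values, not anything the questions mandate.
rule = {
    "ID": "tier-down-with-age",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"},
    ],
}
```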
29) Your client wants you to retain history of all the EC2 calls made on their account. This log detail is used for security audit and operational purposes. The operational requirement involves troubleshooting. Which AWS service will you make use of for this?
a) Cloudtrail
b) Cloudwatch
c) Cloudfront
d) Cloudtracker
Answer : a
30) Can you make use of your existing microsoft windows server licenses with amazon EC2 shared tenancy instance?
a) Yes
b) No
Answer : b
Explanation : Existing licenses can be used with a dedicated host, not with shared tenancy instances
31) What is the number of elastic IP addresses that a region can have?
a) 4
b) 5
c) 6
d) 7
Answer : b
32) Is each EBS snapshot given a unique identifier?
a) Yes
b) No
Answer : a
33) Can multiple EBS snapshots be retained, and can we read an older snapshot to do a point-in-time recovery?
a) Yes
b) No
Answer : a
34) Who can permanently delete a version in an Amazon S3 bucket?
a) Admin
b) Bucket owner
c) technologist
d) all
Answer : b
35) How can you move an EBS volume to a duplicate EBS volume in a separate region?
a) Move the EBS volume
b) Take a snapshot of the EBS volume and copy it to the other region
c) Use Cloudflare
d) None of the above
Answer : b
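As a sketch of the snapshot-copy step, the dict below holds parameters for EC2's CopySnapshot API; with boto3 it would be called from the destination region as `boto3.client("ec2", region_name="us-west-2").copy_snapshot(**params)`. The snapshot ID and regions are placeholders.

```python
# Parameters for EC2's CopySnapshot API, issued from the destination
# region. Snapshot ID and region names are placeholder values.
params = {
    "SourceRegion": "us-east-1",
    "SourceSnapshotId": "snap-0123456789abcdef0",
    "Description": "copy for a duplicate volume in us-west-2",
}
```

Once the copy completes, a new EBS volume can be created from the copied snapshot in the destination region.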
36) Which AWS service helps you enable governance, compliance, operational auditing, and risk auditing of your AWS account?
a) Cloudtrail
b) Cloudwatch
c) S3
d) Cloudformation
Answer : a
37) You want to start using database engines in your AWS account. Which AWS service will you make use of for this purpose?
a) Redshift
b) Relational database service RDS
c) MDS
d) Kinesis
Answer : b
38) You are making use of AWS VPC component. Which group of AWS service does this fall under?
a) Database service
b) Computing services
c) Networking services
d) command line services
Answer : c
39) You launch your media company and want your files to be viewable on a variety of devices. Which AWS service will you make use of?
a) Appstream
b) Elastic Transcoder
c) Simple Transcoder
d) RDS
Answer : b

AWS Certified Solutions Architect Associate exam dumps

1) What data transfer charge is incurred when replicating data from your primary RDS instance to your secondary RDS instance in an AWS environment?
a) The charge is two times the standard data transfer charge
b) The charge is the same as the standard data transfer charge
c) There is no charge associated with this action
d) The charge is half of the standard data transfer charge
Answer : c
2) When you add a rule to an RDS security group, you must specify a port number or protocol. Is it true or false?
a) True
b) False
Answer : b
3) What happens when ingress rules are configured in AWS RDS security group?
a) Same rules apply to all DB instances that are associated with that security group
b) only certain rules apply to specified db instances in security group
c) Security rules apply to DynamoDB instances
d) None of the above
Answer : a
4) When you have deployed an RDS database into multiple availability zones, can you use the secondary database as an independent read node?
a) No
b) Only in US-West-1 region
c) It depends on how you set it up
d) Yes
Answer : a
5) AWS ElastiCache uses which two cache engines?
a) Redis, Memory
b) Reddit, Memcrush
c) Redis, Memcached
d) MyISAM, InnoDB
Answer : c
Explanation : Amazon ElastiCache supports the Memcached and Redis cache engines. The choice of engine depends on analyzing the benefits of each and making the best decision for your use case
6) You have an Amazon ElastiCache implementation to be done in clustered mode. Which cache engine will you make use of?
a) Memcached
b) Redis
c) Rediff
d) Dynamodb
Answer : a,b
7) Your project uses SQL Server RDS. What is the maximum size for a Microsoft SQL Server DB with SQL Server Express edition in AWS?
a) 1GB per Database
b) 4TB per Database
c) 10GB per Database
d) 300GB per Database
Answer : c
Explanation : There are two different limits: the per-database size limit, 10GB for SQL Server Express edition, and the DB instance server storage limit of 300GB. A DB server instance can host several DBs, or a DB plus support files such as logs, dumps, and flat file backups. For Express edition the 10GB per-database limit applies
8) The AWS platform is certified PCI DSS Level 1 Certified. True or false?
a) True
b) False
Answer : a
Explanation : The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council, which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. PCI DSS applies to all entities that store, process or transmit cardholder data (CHD) and/or sensitive authentication data (SAD), including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. AWS has been PCI DSS certified since 2010. As of July 11, 2016, an external Qualified Security Assessor Company (QSAC), Coalfire Systems Inc., validated that Amazon Web Services (AWS) successfully completed the PCI Data Security Standards 3.2 Level 1 Service Provider assessment and was found compliant for all services in scope.
Service provider levels are defined as:
Level 1: Any service provider that stores, processes and/or transmits over 300,000 transactions annually
Level 2: Any service provider that stores, processes and/or transmits less than 300,000 transactions annually
9) When deploying databases on your EC2 instances, is it true that AWS recommends you deploy on magnetic storage rather than SSD storage for better performance?
a) True
b) False
Answer : b
Explanation : In general SSDs, the solid state devices, are the fastest and are utilized in high-throughput environments like SAP and highly transactional environments like OLTP
10) How does amazon SWF help users?
a) Manage user identification and authorization
b) Secure their VPCs
c) Store file based objects
d) Coordinate synchronous and asynchronous tasks
Answer : d
Explanation : The Simple Workflow Service (SWF) is used for task coordination in an AWS environment. The tasks can be synchronous as well as asynchronous
11) If an Amazon EBS volume is an additional partition and not the root volume, can I detach it without stopping the EC2 instance?
a) Yes, although it may take some time for detachment
b) No, you will need to stop the instance
c) Detaching will corrupt the instance
d) Detaching will be rapid
Answer : a
12) What kind of storage is amazon S3?
a) Object Based Storage
b) Block Based Storage
c) A Data Warehouse Solution suitable for data archival, not frequently used files
d) OLTP solution
Answer : a
Explanation : In simple terms, Amazon S3 stores information as objects inside buckets
13) In an RDS project, you are responsible for maintaining the OS, application security patching, antivirus, etc. in AWS. Is this true or false?
a) True
b) False
Answer : b
Explanation : With RDS, the underlying OS is a pre-built virtual image fully managed by AWS; OS maintenance, patching and antivirus are all handled by the RDS service, which is one of its major selling points. If you are a Linux system administrator or database administrator, start learning AWS to keep your skills relevant
14) MySQL installations default to what port number?
a) 1433
b) 3389
c) 3306
d) 8080
Answer : c
Explanation : In a typical MySQL implementation 3306 is the default port. This port should be secured and should not be accessible by untrusted hosts
15) What does EBS stand for?
a) Energetic Block Store
b) Elastic Based Store
c) Equal Block Store
d) Elastic Block Store
Answer : d
Explanation : EBS is the storage volume that is associated with EC2 instance in an AWS environment
16) Auditing user access/API calls etc across the entire AWS estate can be achieved by using which of the AWS service?
a) CloudFront
b) CloudWatch
c) CloudFlare
d) CloudTrail
Answer : d
17) Which is a document that provides a formal statement of one or more permissions?
a) Policy
b) User
c) Group
d) Role
Answer : a
18) If you want your application to check RDS for an error, have it look for which node in the response from the Amazon RDS API?
a) Incorrect
b) Error
c) Abort
d) Exit
Answer : b
19) With regards to RDS where should the standby be?
a) Same Availability Zone
b) Same Region
c) Same VPC
d) Same Subnet
Answer: b
Explanation : Your standby is automatically provisioned in a different Availability Zone of the same Region as your primary DB instance while using RDS service
20) Can you conduct your own vulnerability scans on your own VPC without alerting AWS first?
a) Yes
b) No
Answer : b
Explanation : Vulnerability scans and penetration tests require prior authorization from AWS. Amazon Inspector is a security vulnerability assessment service that helps improve the security and compliance of applications deployed on Amazon EC2; make use of it
21) I have a running EC2 instance with an EBS volume as the root device. Can I detach it without stopping the instance?
a) Yes
b) No
Answer: b
Explanation : As on normal Linux or UNIX machines, the root volume is the main volume that must be accessible for the machine to be up and running. The same applies to EC2 instances
22) Your project demands deploying production database to EC2 instance. You are tasked with choosing best storage type. You anticipate that at peak you will need 40,000 IOPS and an average of 14,000 – 18,000 IOPS. What storage medium should you choose?
a) Magnetic Storage
b) Provisioned IOPS
c) General Purpose SSD
d) amazon S3
Answer : b
23) What should you do to trouble shoot the issue related to your system status check has failure. This is time sensitive and can’t wait. Pick the best possible solution?
a) Communicate the failure to AWS support by creating support ticket
b) Restart the instance
c) Stop the instance and then start it again
d) Terminate the instance and then delete your VPC
Answer : c
Explanation : Once a system status check fails, we can have AWS take care of it or handle it ourselves. To handle it ourselves, stopping and starting the instance, or terminating and replacing it, are possible solutions
24) You have configured read replicas. Can read replicas span multiple availability zones for redundancy?
a) Yes
b) No
Answer : b
25) You are making use of the AWS Simple Queue Service. How many message queues can you create in SQS?
a) 200
b) 2000
c) unlimited
d) 2
Answer : c
Explanation : There is no limit on the number of message queues that can be created in SQS
26) What is default method of ingesting content into Amazon S3? Choose all that apply
a) Internet
b) Intranet
c) VPN
d) None of the above
Answer : a,c
27) You have decided to take advantage of AWS. You are part of a migration project meeting. The project manager asks how much time is needed for code changes to migrate onto S3 at full speed using the Transfer Acceleration service you recommended earlier. What is your recommendation?
a) 10% of total project time
b) 20% of time
c) 30% of time as major code change is needed
d) No code change needed to avail this feature
Answer : d
28) You are migrating data onto amazon S3. To take advantage of transfer acceleration to expedite transfer which software will you install at client/server end?
a) No additional software needed
b) Amazon EMR
c) Amazon Glacier
Answer : a
29) How much performance improvement is achieved using transfer acceleration from amazon S3?
a) 50% to 300%
b) 100% to 300%
c) 200% to 300%
d) 300%
Answer : a
30) Which Amazon S3 feature enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket?
a) Lightning Acceleration
b) Glacier Acceleration
c) Transfer Acceleration
d) Multipart Acceleration
Answer : c
Explanation : Transfer Acceleration is a network-level service that enables fast transfer of data onto S3 and reduces transfer latency between the client and the S3 bucket
31) You are moving data from amazon S3 to EC2 in the same region. How much do you need to pay for this?
a) $10
b) $20
c) $30
d) $0
Answer : d
Explanation : We don't need to pay anything, as there is no cost associated with moving data from S3 to EC2 in the same region
32) Which AWS technologies will you make use of to ease the database load?
a) RDS Multi availability zone
b) RDS read replica
c) ElastiCache
d) cloudformation
Answer : b,c
33) Access keys should never be stored on an AMI. Is this true or false?
a) True
b) False
Answer : a
34) You have project that involves non-critical and reproducible data that needs to be stored with cost efficiency. Which AWS S3 solution will you recommend?
a) Standard S3
b) S3 reduced redundancy storage RRS
c) amazon glacier
d) Amazon elastic cache
Answer : b
35) What is the main benefit of using an SWF workflow?
a) SWF workflow ensures that actions are executed only once
b) SWF workflow ensures that actions are executed multiple times
c) SWF workflow ensures that actions are not executed at all
d) SWF workflow is for managing actions related to storage
Answer : a
36) Amazon RDS enables automated backups of your DB Instance at no cost. What is the retention period?
a) 1 day
b) 1 week
c) 1 month
d) 1 year
answer : a
37) You run a shopping portal in which the order process involves EC2 instances processing messages from an SQS queue. You have to ensure that orders are not processed multiple times. How will you accomplish that?
a) Order processing workflow should be designed to make use of long-polling instead of SQS
b) Order processing workflow should be designed to make use of SWF instead of SQS
c) Increase visibility timeout on SQS
d) None of the above
Answer : b
Explanation : An SWF workflow action is executed only once and hence the order processing needs to be re-written using SWF
38) Which one is best recommended? Access keys (or) IAM roles?
a) Access keys
b) Assign roles using IAM roles, as this follows the least privilege model, granting each user a unique set of security credentials
c) Both are equally good
d) None of these
Answer : b
39) You are looking for an AWS implementation to operate production applications and databases that are more highly available, fault tolerant and scalable than would be possible from a single data center. What is the recommended solution?
a) Latency Zones
b) Availability zones
c) Comfort zone
d) RDS
Answer : b
40) Can you expedite uploads to S3 by writing directly to edge locations?
a) Yes
b) No
Answer : a
41) Can you attach multiple EC2 instances with single EBS block store?
a) Yes
b) No
Answer : b
Explanation : By design, an EBS volume can be attached to only one EC2 instance at a time. To have multiple instances access a single file system, make use of the Elastic File System (EFS)
42) Where does a bastion host sit?
a) public subnet
b) private subnet
c) hybrid cloud
d) VPN
Answer : a
43) Which among the following serves as secure gateway?
a) EC2 host
b) EBS host
c) RDS host
d) Bastion host
Answer : d
44) Is it true that a bastion host sits in a public subnet and serves as a secure gateway through which users SSH into instances in a private subnet?
a) Yes
b) No
Answer : a
45) Do we need to protect a bastion host by creating it in a private subnet?
a) Yes
b) No
Answer : b
Explanation : A bastion host sits in a public subnet, serving as a secure gateway to SSH onto instances in a private subnet
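The SSH-through-a-bastion pattern can be sketched with OpenSSH's ProxyJump (-J) option; all users and addresses below are hypothetical placeholders.

```python
# Reaching a private-subnet instance through a bastion host using
# OpenSSH's ProxyJump (-J) option. Users and addresses are hypothetical.
bastion = "ec2-user@203.0.113.10"   # bastion's public IP (public subnet)
target = "ec2-user@10.0.2.15"       # instance's private IP (private subnet)
cmd = ["ssh", "-J", bastion, target]
print(" ".join(cmd))  # prints: ssh -J ec2-user@203.0.113.10 ec2-user@10.0.2.15
```

The private instance's security group only needs to allow SSH from the bastion, not from the internet.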
46) For denying root access to EC2 instances can you make use of IAM policies that utilizes least privilege concept to restrict access via role assignment?
a) Yes
b) No
Answer : b
Explanation : The root user is the superuser of the system and has access to the entire system, including all of its resources and services. This can't be restricted using IAM policies
47) You have to build conversational interfaces using voice and text. Which AWS service will you make use of?
a) Polly
b) Lex
c) SQS
d) SWF
Answer : b
48) What is use of AWS Polly service?
a) build a conversational interfaces using voice and text
b) turns text into lifelike speech
c) message queue
d) database storage
Answer : b
49) Which AWS services are used to create conversational interfaces, text to speech and voice recognition systems in AWS?
a) Molly
b) Jolly
c) Fix
d) Lex
e) Polly
Answer : d,e
50) What is used to build elastic, self-healing applications in AWS?
a) Autoscaling groups
b) EFS groups
c) EBS groups
d) Polly
Answer : a
51) What is the maximum number of SWF domains allowed, as a combined total of both registered and deprecated domains?
a) 1000
b) 100
c) 30
d) 345
Answer : b
52) What is the maximum number of registered and deprecated workflow and activity types allowed per amazon SWF account?
a) 10000 each per domain
b) 1000 each per domain
c) 100 each per domain
d) 100000 each per domain
Answer : a
53) You are making use of EBS-optimized instances with provisioned IOPS volumes. What is latency in optimal scenario?
a) single digit millisecond
b) double digit millisecond
c) three digit millisecond
d) four digit millisecond
Answer : a

AWS associate architect exam questions

1) What is the total volume of data and number of objects that can be stored in Amazon S3 bucket?
a) 1TB
b) 2TB
c) 5TB
d) Unlimited
Answer : d
Explanation : An S3 bucket can store an unlimited number of objects and an unlimited total volume of data; individual objects can each be up to 5TB
2) You are uploading objects onto amazon S3 buckets using PUT. What is the largest object that can be uploaded onto S3 in single PUT operation?
a) 5TB
b) 5GB
c) 4MB
d) 5KB
Answer : b
3) What is the maximum size of amazon S3 objects that can be stored in S3?
a) 3TB
b) 5TB
c) 10TB
d) unlimited
Answer : b
4) You have requirement to upload object onto S3 bucket that is 4TB in size. Which capability will you make use of?
a) Multipart upload
b) Multipart PUT
c) Multipart update
d) Multipart caching
Answer : a
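Multipart upload has fixed limits that determine how a 4TB object must be split: at most 10,000 parts per upload, and each part (except the last) must be at least 5 MiB. A minimal sketch of the part-size arithmetic:

```python
import math

MAX_PARTS = 10_000              # S3 allows at most 10,000 parts per upload
MIN_PART = 5 * 1024 ** 2        # parts (except the last) must be >= 5 MiB

def min_part_size(object_bytes: int) -> int:
    """Smallest part size that fits the object within 10,000 parts."""
    return max(MIN_PART, math.ceil(object_bytes / MAX_PARTS))

four_tb = 4 * 1024 ** 4
part = min_part_size(four_tb)            # roughly 420 MiB per part
assert math.ceil(four_tb / part) <= MAX_PARTS
```

So a 4TB upload needs parts of roughly 420 MiB or larger; the default 5 MiB part size would exceed the 10,000-part cap.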
5) What are the different storage classes offered by Amazon S3? Choose all that apply
a) S3 IA
b) S3 RRS
c) Amazon Glacier
d) EBS volumes
Answer : a,b,c
6) Can I store a 0-byte file in Amazon S3?
a) Yes
b) No
Answer : a
Explanation : S3 can store unlimited data, with object sizes ranging from 0 bytes to 5TB
7) You want to delete multiple objects from S3. How can you accomplish that?
a) Multi-Object Delete operation
b) Multi-Object Purge operation
c) Multi-Object Drop operation
d) Multi-Object Truncate operation
Answer : a
8) When using a custom VPC and placing an EC2 instance into a public subnet, it will be automatically internet accessible and you do not need to apply an elastic IP address or ELB to the instance. Is it true or false?
a) True
b) False
Answer : b
Explanation : As part of this configuration we need to allocate an Elastic IP address and assign it to your instance after it’s launched
9) If an Amazon EBS volume is an additional partition and not the root volume, can we detach it without stopping the instance?
a) Yes, although it may take some time
b) No, you will need to stop the instance
Answer : a
Explanation : Yes. An EBS volume other than root volume can be detached from live running instance. AWS CLI commands can be used for this purpose as well
10) You just started using AWS. Do you know how many regions are there in AWS?
a) 11
b) 13
c) 16
d) 20
Answer : c
Explanation : By 2018, new regions in Stockholm, Sweden will be available. More expansion is underway owing to the growing popularity of AWS, so keep watching for this value to change
11) You are using the AWS migration service to migrate VMware VMs. What is the maximum number of VMs you can migrate concurrently?
a) 20
b) 30
c) 40
d) 50
Answer : d
12) What type of read consistency does S3 provide for PUTS of new objects?
a) write-after-read
b) write-after-write
c) read-after-write
d) read-after-read
Answer : c
Explanation : S3 provides read-after-write consistency for PUTS of new objects, and eventual consistency for overwrite PUTS and DELETES
13) You want to configure cross-regional replication of S3 bucket. Do you need to enable versioning on both source and target buckets?
a) Yes
b) No
Answer : a
Explanation : Versioning must be enabled in both source and target S3 buckets for cross-regional replication
14) You have to connect cloud resources to your IPSec VPN. Which AWS feature will you make use of to accomplish this?
a) S3
b) VPC
c) hybrid cloud
d) private cloud
Answer : b
15) Is multifactor authentication needed to delete objects from an S3 bucket?
a) Yes
b) No
Answer : b
Explanation : To delete objects from S3 buckets multifactor authentication is an optional feature and not mandatory
16) Where is the AMI ID used in autoscaling policy specified?
a) Autoscaling group
b) Autoscaling policy
c) Launch configuration
d) Security Group
Answer : c
17) Your project makes use of EBS-backed on-demand EC2 instances that are to be stopped immediately. You want to make sure you don't incur charges but still preserve data. Is this possible with on-demand instances?
a) Yes
b) No
Answer : a
Explanation : EBS-backed on-demand EC2 instances can be stopped while their data is preserved for later use. These instances will not incur instance charges once stopped
18) What is the minimum size of an S3 object?
a) 0 bytes
b) 1 byte
c) 10 bytes
d) 100 GB
Answer : a
Explanation : A file created with a simple touch command is 0 bytes. Files as small as 0 bytes can be stored in S3
19) You are performing a snapshot of an EBS volume. During this process, does the volume become unavailable, with the instance no longer able to communicate with the EBS volume until the snapshot is complete?
a) Yes
b) No
Answer : b

Free AWS certified solutions architect associate practice exam

The AWS Certified Solutions Architect Associate exam is the foundation once you want to start using the AWS cloud platform. If you are in an infrastructure career, say system administrator, network administrator, database administrator or storage administrator, upgrade your skills with AWS without any delay. In the next few years, AWS skills combined with your experience will be a sought-after qualification for getting hired and retaining your current job. This AWS Certified Solutions Architect Associate practice exam will help you prepare for the exam. These are not official exam questions and answers; we provide this for preparation help only
We have updated the questions to reflect the 2018 exam syllabus to help you pass the exam on the first attempt
1) You want to move your documents onto AWS for immediate availability. Which AWS component will you make use of?
a) EC2
b) Cloudfront
c) S3
d) Amazon Glacier
Answer : c
Explanation : Amazon S3 is where we need to upload the document onto for immediate access
2) What is a virtual machine image called in AWS?
a) EC2
b) Cloudwatch
c) Redshift
d) Kinmetrics
Answer : a
Explanation : AWS EC2 instances are launched from virtual machine images available by default in the AWS library. Depending on requirements, these pre-built templates with the proper OS, 32-bit/64-bit version, storage capacity etc. can be deployed as part of the free tier or on a paid basis
3) You have been asked to choose appropriate EBS storage volume that can also act as boot volume for your application with about 3000 IOPS. Which one will you use?
a) HDD
b) SSD
c) Flash drive
d) USB
Answer : b
Explanation : In AWS only SSD volumes can function as boot volumes; HDD volumes can't. A boot volume can be General Purpose SSD (or) Provisioned IOPS SSD
4) Amazon S3 storage classes can be which of the following?
a) Normal
b) standard
c) custom
d) reduced redundancy
Answer : b,d
Explanation : Once an object gets stored in AWS S3, a storage class is assigned to it depending on criticality. The default storage class is standard storage
5) Where are thumbnails stored in Amazon S3?
a) Reduced redundancy storage
b) standard storage
c) Elastic cache
d) Amazon glacier
Answer : a
Explanation : Reduced redundancy storage is used to store easily reproducible thumbnails owing to its cost effectiveness
6) You just uploaded your file onto AWS. You want this upload to trigger an associated job in hadoop ecosystem. Which AWS components can help with this requirement?
a) Amazon S3
b) SMS
c) SQS
d) SNS
e) Ec2
Answer : a,c,d
Explanation: In AWS a file is uploaded onto an Amazon S3 bucket. This upload action sends event notifications, which can be delivered via SQS or SNS, or sent directly to AWS Lambda. Once Lambda receives the event notification through one of these methods, it can trigger workflows, alerts or other automated processing, including the start of a job
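For reference, the S3 event notification wiring described above can be expressed as a bucket notification configuration like the following sketch; the queue name, ARN and account ID are made-up placeholders:

```json
{
  "QueueConfigurations": [
    {
      "Id": "TriggerHadoopJob",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:hadoop-job-queue",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

A consumer polling that queue (or a Lambda function subscribed to an SNS topic) can then kick off the Hadoop job.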
7) What does the CloudFormation init script do?
a) Fetch and parse metadata from the AWS::CloudFormation::Init key
b) Install packages
c) compress logs
d) Write files to disk
e) Enable/disable services
f) Start (or) stop services
Answer : a,b,d,e,f
Explanation : cfn-init is the helper script that reads template metadata from the AWS::CloudFormation::Init key and acts accordingly. The AWS::CloudFormation::Init key includes metadata for the Amazon EC2 instance
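A minimal sketch of the metadata cfn-init reads, assuming a simple Apache setup (the package name, file path and content here are illustrative):

```json
{
  "AWS::CloudFormation::Init": {
    "config": {
      "packages": { "yum": { "httpd": [] } },
      "files": {
        "/var/www/html/index.html": {
          "content": "<h1>Hello from cfn-init</h1>",
          "mode": "000644"
        }
      },
      "services": {
        "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } }
      }
    }
  }
}
```

This covers the fetch/parse, package install, file write and service enable/start actions listed in the options.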
8) What is the use of the AWS CloudFormation list-stacks command?
a) 90 days history on all activity on stacks
b) List of all stacks that you have created
c) List of all stacks that you have deleted
d) List of all stacks that you have created or deleted upto 90 days ago
Answer : d
Explanation : list-stacks returns the list of all stacks created or deleted by us in the last 90 days. There is a filtering option based on stack status such as CREATE_COMPLETE and DELETE_COMPLETE. The result includes the name, stack identifier, template and status of stacks currently running as well as stacks deleted in the last 90 days
9) What is amazon SWF?
a) Task management and task coordinator service
b) Storage service
c) Scheduling service
d) Provisioning service
Answer : a
Explanation : Amazon simple workflow service is a state tracker and task coordinator service in cloud
10) What are the considerations while creating amazon S3 bucket?
a) Bucket name should not already exist
b) Bucket name can already exist, suffix is automatically added internally
c) The bucket name should be all in lower case
d) Bucket name should be all in upper case
Answer : a,c
Explanation: While creating an Amazon S3 bucket, the bucket name should be all lower case and must be globally unique (it should not already exist)
11) I’m creating an amazon S3 bucket from US standard region. Can I make it mixed case?
a) Yes. Mixed case is permitted in US standard region
b) No. Mixed case is not permitted in US standard region
c) No. Mixed case is not permitted. It should be all lower case in any region
d) Yes. Mixed case is permitted in all regions of AWS
Answer : c
Explanation: Amazon S3 bucket names must be all lower case. This applies across all AWS regions and is not specific to the US standard region
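The naming rules in these two questions can be sketched as a quick validation helper; this is a simplified check, not the complete set of AWS rules (for example, it does not reject IP-address-style names):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Simplified S3 bucket-name check: 3-63 characters, lower case
    letters, digits, hyphens and dots, starting and ending with a
    letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("my-company-data"))  # True
print(is_valid_bucket_name("MyCompanyData"))    # False: mixed case is rejected
```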
12) Your employer is asking you to recommend AWS service to store infrequently accessed data and data archives. which one will you recommend?
a) S3
b) S3 redundant raid
c) Amazon glacier
d) S3 standard storage
Answer : c
13) Your organization stores critical data that needs to be protected in S3. An accidental deletion of this information leads to data loss. Your employer asks you to determine what could have been done to avoid accidental data loss in S3. What is your answer?
a) S3 data should be accessed from infrequently accessed storage
b) Make sure data on S3 can only be accessed using signed URL’s
c) IAM bucket policy to disable deletes could have been created and applied onto S3 buckets
d) S3 bucket versioning could have been enabled and multifactor authentication could have been enabled on bucket
Answer : d
14) You are the DBA in the AWS team. You have been asked to come up with an estimate of the data transfer charge from the primary RDS instance to the secondary RDS instance. Is this really expensive?
a) No. There is no charge associated with this action
b) Yes. This is two times the standard data transfer charge
c) No. The rate is same as standard data transfer charge
d) Yes. The rate is x times the number of secondary instances
Answer : a
15) DynamoDB is a relational database. Is this true or false
a) True
b) False
c) This is object oriented database
d) Hybrid database
Answer : b
Explanation : DynamoDB is a non-relational database
16) What is a QuickSight dashboard?
a) A QuickSight Dashboard is a read-only view into the data
b) A QuickSight Dashboard is a read-write view into the data
c) A QuickSight Dashboard is a write-only view into the data
d) None of the above
Answer: a
17) Can a Jupyter Notebook contain live code?
a) Yes
b) No
Answer: a
18) How will you accomplish extra capacity with additional CPU and RAM in EMR?
a) By adding more core nodes
b) By adding more task nodes
c) By adding more taco nodes
Answer: b
19) For large batch processing which is the recommended option?
a) Hive
b) Spark
c) Splunk
Answer: a
20) Your project makes use of batch processing. Can you make use of spark?
a) Yes
b) No
Answer: b
Explanation : For large batch jobs using Spark is not recommended as it consumes a lot of memory
21) When deploying databases on your EC2 instances, AWS recommends that you deploy on magnetic storage rather than SSD storage if you need better performance. Will this yield better performance?
a) True
b) False
Answer: b
Explanation : In general, SSDs (solid state drives) are the fastest and are utilized in high-throughput environments like SAP and highly transactional environments like OLTP
22) When you encrypt a Redshift cluster what happens to the data blocks and system metadata ?
a) They retain their original state and are not encrypted
b) They are encrypted for the cluster and its snapshots
c) None of the above
Answer : b
23) Does AWS Key Management Service support both symmetric and asymmetric encryption?
a) Yes
b) No
Answer : b
Explanation : Only symmetric encryption is supported in AWS key management service
24) You encounter an issue wherein one of the resources in a CloudFormation stack cannot be created. Automatic rollback on error, the default setting, is intact. What will happen in this scenario?
a) Resource creation continues
b) previously created resources are deleted
c) stack creation continues
d) stack creation terminates
Answer : b,d
25) Do you know what AWS products and features can be deployed by Elastic Beanstalk?
a) SNS service
b) SQS service
c) Elastic Load balancers aka ELB
d) Auto scaling groups
e) database migration tools
f) RDS instances
Answer : c,d,f
26) You have to restrict access to data in S3. How can you achieve this?
a) Using SSE-s3
b) Using ASE
c) S3 bucket policy can be set for security purposes
d) S3 ACL can be set on the bucket or the object
Answer : c,d
27) How can you enable client web applications that are loaded in one domain to interact with resources in a different domain?
a) Enabling multi-zone availability
b) By including IP address of both domains in .htaccess files of both of them
c) AWS S3 CORS configuration
d) All of the above
Answer : c
28) How will you verify that CORS configuration is set on the bucket?
a) AWS S3 console shows Edit CORS Configuration link in the Permissions section of the Properties bucket
b) AWS S3 console shows Show CORS Configuration link in the Permissions section of the Properties bucket
c) AWS glacier console
Answer : a
29) How will you enable cross origin resource sharing in S3?
a) AWS Management Console
b) AWS SDK for Java
c) AWS SDK for .NET
d) REST API
e) None of the above
Answer : a,b,c,d
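The kind of CORS rule set you would apply through the S3 console or the put_bucket_cors API call can be sketched as below; the origin, methods and max age are hypothetical values:

```python
# Hypothetical CORS rule set allowing a single web origin to GET and PUT
# objects in the bucket; "*" in AllowedHeaders permits any request header.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}
print(cors_configuration["CORSRules"][0]["AllowedOrigins"][0])
```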
30) Your project reads information from dynamodb tables that have provisioned throughput enabled by default. Is the provisioned throughput impacted by consistency model ?
a) Strongly consistent reads uses more provisioned read throughput than eventually consistent reads
b) Strongly consistent reads uses less provisioned read throughput than eventually consistent reads
c) Eventual consistent reads uses more provisioned throughput than strongly consistent reads
d) None of the above
Answer : a
Explanation : Strongly consistent reads consume twice the provisioned read throughput of eventually consistent reads
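The throughput difference can be worked out from standard DynamoDB sizing, where one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and eventually consistent reads cost half as much:

```python
import math

def read_capacity_units(item_size_bytes: int, reads_per_second: int,
                        strongly_consistent: bool) -> int:
    # One RCU = one strongly consistent read/sec of an item up to 4 KB;
    # eventually consistent reads consume half the throughput.
    units_per_read = math.ceil(item_size_bytes / 4096)
    rcu = units_per_read * reads_per_second
    return rcu if strongly_consistent else math.ceil(rcu / 2)

print(read_capacity_units(4096, 100, True))   # 100 RCU
print(read_capacity_units(4096, 100, False))  # 50 RCU
```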
31) Which among the following are valid SNS delivery transports?
a) HTTPS
b) HTTP
c) SMS
d) UDP and named pipes
Answer : a,b,c
Explanation : SNS supports HTTP, HTTPS, SMS and email transports; UDP and named pipes are not supported
31) You have been asked to transfer a reserved instance from one Availability Zone to another. Can you do that?
a) Yes
b) No
Answer : a
32) You have been asked to copy an Amazon Machine Image across regions. Can you accomplish that?
a) Yes
b) No
Answer: a
Explanation : It is possible to copy an AMI within as well as across AWS regions, using the AWS Management Console, the AWS CLI or the SDKs
33) You are utilizing tools like AWS management console to copy AMI across regions. Which action is internally made use of for this purpose?
a) MoveImage
b) CopyImage
c) MigrateImage
d) DumpImage
Answer: b
34) What types of AMIs can be copied as part of the CopyImage action?
a) EBS-backed AMIs
b) instance store-backed AMIs
c) Both a and b
d) None of the above
Answer: c
35) You have an AMI with encrypted snapshot. Will you be able to copy that using CopyImage action?
a) Yes
b) No
Answer: a
36) You are in the process of creating an AMI. Which API call is internally triggered during this process?
a) CreateImage
b) RegisterImage
c) ami-store-image
d) ami-deploy image
Answer: b
37) Can you have 1 subnet stretched across multiple availability zones?
a) Yes
b) No
Answer: b
38) You are trying to upload a 10GB file to S3. You keep getting the following error message “Your proposed upload exceeds the maximum allowed object size.”. How will you handle this?
a) Design your application to use the Multipart Upload API for all objects
b) Design your application to use small size objects
c) Modify s3 configuration parameter to upload larger value
d) Modify S3 bucket properties to 1TB
Answer: a
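As a rough sketch of how the Multipart Upload API would split such a file; the 100 MiB part size is an arbitrary choice here (S3 accepts parts from 5 MiB to 5 GiB, up to 10,000 parts per upload):

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

def multipart_part_count(object_size: int, part_size: int = 100 * MiB) -> int:
    # Number of parts needed to upload object_size bytes in fixed-size
    # chunks; the final part may be smaller than part_size.
    return math.ceil(object_size / part_size)

print(multipart_part_count(10 * GiB))  # 103 parts for a 10 GiB object
```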
39) How will you enable encryption at rest using EC2 and Elastic Block Store?
a) Configure encryption when creating the EC2
b) Configure encryption when creating the EBS volume
c) Configure encryption when installing OS
d) Configure encryption using X.509 certificates
Answer: b
40) From which IP address will you retrieve instance metadata or user data?
a) http://169.254.169.254
b) http://127.0.0.1
c) http://10.0.0.1
d) http://192.168.0.254
Answer : a
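A tiny helper showing how the metadata and user-data paths hang off that link-local address; note the HTTP request itself only resolves from inside a running EC2 instance:

```python
METADATA_BASE = "http://169.254.169.254/latest"

def metadata_url(path: str) -> str:
    # Build the instance-metadata URL for a key such as
    # 'meta-data/instance-id' or 'user-data'.
    return f"{METADATA_BASE}/{path.lstrip('/')}"

print(metadata_url("meta-data/instance-id"))
```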
41) You have an application that makes use of an automated name tagging mechanism. Your app runs on a fleet of EC2 instances, and the digital products to be name tagged are sent via an SQS queue. During the Thanksgiving season your sales grow more than expected, and with that comes a backlog of products in the queue. You need to add EC2 instances in a cost-efficient way to clear the name tagging backlog without delay until the festival sale ends. Which type of instance will you go for in this case?
a) Dedicated instances
b) Reserved instances
c) Spot instances
d) on-demand instances
Answer: c
42) You are designing an application that should serve digital e-books only to paid users of a website. Upon testing you see that, instead of going through the front end, users access the books directly from the S3 bucket via URL. How can you fix this today?
a) Make use of cloudformation to serve this static content
b) Store the content in EBS instead of S3
c) Make use of glacier
d) Remove the ability for books to be served publicly to the site and then use signed URLs with expiry dates
Answer: d
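To make the signed-URL idea concrete, here is a toy HMAC-based sketch; real S3 presigned URLs use a different, Signature Version 4 based scheme, so treat this purely as a conceptual illustration:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key, never exposed to clients

def sign_url(path: str, expires_at: int) -> str:
    # The signature covers both the path and the expiry timestamp, so
    # tampering with either one invalidates the URL.
    sig = hmac.new(SECRET, f"{path}:{expires_at}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires_at}&sig={sig}"

def is_valid(path: str, expires_at: int, sig: str, now: int) -> bool:
    expected = hmac.new(SECRET, f"{path}:{expires_at}".encode(), hashlib.sha256).hexdigest()
    return now < expires_at and hmac.compare_digest(sig, expected)

url = sign_url("/books/ebook1.pdf", expires_at=1000)
print(url.split("?")[0])  # /books/ebook1.pdf
```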
43) How will you deliver event notifications to AWS Lambda?
a) AWS S3, Amazon SQS, Amazon SNS
b) AWS S3, Amazon SQS
c) Amazon SQS, Amazon SNS
d) Amazon SNS
Answer: a
44) Which metric provides a count of the total number of requests that are pending submission (ie) queued for a registered instance?
a) SurgeQueueLength
b) HTTPCode_BackEnd_3XX
c) HTTPCode_BackEnd_4XX
d) HTTPCode_BackEnd_5XX
Answer: a
45) I’ve saved my files in S3. How durable are they?
a) 99.999999999%
b) 99.99999999%
c) 99.9999999%
d) 99%
Answer : a
Explanation: As an easy memory trick, remember 99 followed by nine more 9's (eleven 9's of durability). Count them for double confirmation 🙂
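Those eleven 9's can be made concrete with a quick expected-loss calculation; the object count below is an arbitrary example:

```python
# With 99.999999999% design durability, storing ten million objects
# gives an expected loss of roughly 0.0001 objects per year, i.e. you
# would expect to lose a single object about once every 10,000 years.
durability = 0.99999999999
objects = 10_000_000
expected_losses_per_year = objects * (1 - durability)
print(expected_losses_per_year)  # roughly 1e-4
```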
46) You are making use of oracle database in AWS RDS. Your performance tuning team recommended change of parallel_processes parameter followed by instance reboot to bring down CPU spike in production environment. Currently you have multi-AZ deployment in place. Can you reboot this oracle instance now?
a) Yes
b) No. Reboot not allowed in RDS
Answer : a
Explanation : It is possible, as Multi-AZ will fail the instance over onto the standby database and update DNS; reboot with failover is allowed by Multi-AZ
47) Your manager asked you to test oracle RDS high availability. Is it possible to force a failover of oracle RDS configured in multi-AZ?
a) Yes
b) No
Answer : a
48) You have access to the AWS CLI and have been asked to reboot the RDS instance with forced failover. You have an Oracle RDS to work with now. How will you accomplish that?
a) reboot-db-instance --db-instance-identifier ORACLE_SID --force-failover
b) restart-db-instance --db-instance-identifier ORACLE_SID --force-failover
c) shutdown-db-instance --db-instance-identifier ORACLE_SID --force-failover
d) switch-db-instance --db-instance-identifier ORACLE_SID --force-failover
Answer : a
49) You have created a new security group. Is all outbound traffic allowed by default?
a) Yes
b) No
Answer : a
Explanation : By default, a security group includes an outbound rule that allows all outbound traffic
50) You have been asked to choose an instance that are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload. Which one will you choose?
a) T2 instances
b) Compute Optimized Instances
c) Memory Optimized Instances
d) Storage Optimized Instances
Answer : a
51) You are looking for an instance that offers small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. Which one will you choose?
a) T2 instances
b) Compute Optimized Instances
c) Memory Optimized Instances
d) T1 Micro Instances
Answer : d
52) Where are individual instances provisioned?
a) Regions
b) Availability Zones
c) Globally
Answer : b
53) You have to assign your own metadata that will help you manage your Amazon EC2 instances. Which form will you make use of?
a) Tags
b) Wildcards
c) Certificates
d) Notes
Answer :a
54) To save administration headaches, Amazon recommends that you leave all security groups in web facing subnets open on port 22 to the 0.0.0.0/0 CIDR. That way, you can connect wherever you are in the world. Is this correct?
a) True
b) False
Answer : b
Explanation : This would be a security issue; SSH access should be restricted to known IP ranges rather than opened to the world
55) Your client asks you to create a bucket in the name of their company. Your client is based out of Korea and you try to create a bucket with the same name in the Korean region. You come to know that this bucket name has already been taken. What can you do?
a) Create this bucket in the next closest region
b) Create this bucket in an American region as bucket names can be duplicated internally
c) Bucket names are global, not regional. It is not possible to create one. Notify the client and create a new name
d) Create the bucket in the Paris region
Answer : c
56) Which AWS service is designed for long term data archival?
a) S3 buckets
b) EBS storage
c) Glacier
d) Stackbox
Answer : c
57) Under what circumstances will you choose S3 RRS over S3 standard storage?
a) store critical, reproducible data at higher levels of redundancy
b) store critical, reproducible data at lower levels of redundancy
c) store non-critical, reproducible data at higher levels of redundancy
d) store noncritical, reproducible data at lower levels of redundancy
Answer : d
58) Under which SLA does Amazon S3 RRS work?
a) No SLA
b) Amazon glacier SLA
c) Amazon RRS SLA
d) Amazon S3 SLA
Answer : d
59) Your project makes use of RRS. You have made a decision to expand business into multiple zones. Can RRS sustain this data requirement?
a) Yes
b) No
Answer : b
Explanation : RRS is designed to sustain the loss of data at a single facility
60) What are the durability and availability percentages associated within design of S3 RRS?
a) 99.99%, 99.99%
b) 99.90%,99.90%
c) 99%,99%
d) 99.09%,99.09%
Answer : a


AWS big data certification exam questions


1) Can we launch an EMR cluster in a public subnet?
a) Yes
b) No
Answer: a
Explanation: EMR clusters launch in a public subnet by default; owing to compliance or security requirements they can also run in a private subnet
2) Can you run an EMR cluster in a private subnet with no public IP addresses or attached Internet Gateway?
a) Yes
b) No
Answer: a
3) You are running your EMR cluster in a private subnet. You will have to access S3. How can you do that?
a) TCP/IP
b) VPC
c) hybrid cloud
d) public cloud
Answer: b
Explanation: From a private subnet, S3 is reached within the VPC through a VPC endpoint for S3
4) You have EMR clusters running in a private subnet. You will have to connect to AWS services that do not currently support endpoints in VPC. How can you connect to those services?
a) By making use of a NAT instance
b) By making use of a WAT instance
c) By making use of a SAT instance
d) By making use of a BAT instance
Answer: a
5) You have queries that scan a local secondary index. Does this consume read capacity units from the base table?
a) Yes
b) No
Answer: a
6) Does Kinesis Firehose buffer incoming data before delivering the data to your S3 bucket?
a) Yes
b) No
c) For specific S3 buckets
d) Buffers are available only during redshift load operation
Answer: a
7) Kinesis Firehose buffers incoming data before delivering the data to your S3 bucket. Could you tell what the buffer size range is like?
a) 1 MB to 128 MB
b) 1 KB to 128 MB
c) 1 GB to 128 MB
d) 89 MB to 128 MB
Answer: a
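A toy model of size-based buffering of this kind; the 1 MB hint and 0.5 MB records are arbitrary, and the real Firehose also flushes on a configurable time interval:

```python
class Buffer:
    """Flush accumulated records once the size hint is reached,
    mimicking Firehose's size-based buffering (1-128 MB)."""

    def __init__(self, size_hint_mb: int = 5):
        assert 1 <= size_hint_mb <= 128  # Firehose buffer size range
        self.size_hint = size_hint_mb * 1024 * 1024
        self.pending = 0
        self.flushes = 0

    def put(self, record_bytes: int) -> None:
        self.pending += record_bytes
        if self.pending >= self.size_hint:
            self.flushes += 1
            self.pending = 0

buf = Buffer(size_hint_mb=1)
for _ in range(3):
    buf.put(512 * 1024)  # three 0.5 MB records
print(buf.flushes)  # 1 flush after the first full megabyte
```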
8) How long does each Kinesis Firehose delivery stream store data records in case the delivery destination is unavailable?
a) 12 hours
b) 24 hours
c) 48 hours
d) 72 hours
Answer: b
9) Which AWS IoT service transforms messages using a SQL-based syntax?
a) Rule Actions
b) Rules Engine
c) Kinesis Firehose
d) Data Pipeline
Answer : b
10) How is fault tolerance possible in amazon redshift clusters when there is a drive failure?
a) Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary
b) RAID 5
c) RAID 1
d) iSCSI mirror
Answer : a


AWS big data specialty certification exam dumps


1) If Kinesis Firehose experiences data delivery issues to S3, how long will it retry delivery to S3?
a) 7 hours
b) 7 days
c) 24 hours
d) 3 hours
Answer : c
2) You have to create a visual that depicts one or two measures for a dimension. Which one will you choose?
a) Heat Map
b) Tree Map
c) Pivot Table
d) Scatter Plot
Answer: b
3) You are looking for a way to reduce the amount of data stored in a Redshift cluster. How will you achieve that?
a) Compression algorithms
b) Encryption algorithms
c) Decryption algorithms
d) SPLUNK algorithms
Answer: a
4) How does UNLOAD automatically encrypt data files while writing the resulting file onto S3 from Redshift?
a) CSE-S3 client side encryption S3
b) SSE-S3 server side encryption S3
c) ASE
d) SSH
Answer: b
5) What does an area under curve (AUC) value of 0.5 mean?
a) This model is accurate
b) This model is not accurate
c) This creates lots of confidence
d) This creates less confidence beyond a guess
Answer: b,d
6) What is AUC?
a) Average unit computation
b) Average universal compulsion
c) Area under curve
d) None of the above
Answer: c
7) What does lower AUC mean?
a) improves accuracy of the prediction
b) reduces accuracy of prediction
c) mean of all predicted values
d) Mode of all predicted values
Answer: b
8) You have an AUC value of 0.5. Does that mean that the guess is accurate and perfect?
a) Yes
b) No
c) Partially yes
Answer: c
Explanation: A value of 0.5 means the guess is no better than a random guess; it is not a perfect model
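AUC can be computed directly from its definition, the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half):

```python
def auc(labels, scores):
    # Pairwise reading of ROC AUC: fraction of positive/negative pairs
    # where the positive example gets the higher score.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0, perfect ranking
print(auc([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5, no better than random
```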
9) Can you make use of redshift manifest to automatically check files in S3 for data issues?
a) Yes
b) No
Answer: b
10) Can you control the encryption keys and cryptographic operations performed by the hardware security module using CloudHSM?
a) Yes
b) No
Answer: a
11) You are in the process of creating a table in AWS DynamoDB. Which among the following are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer: a,b,c,f
12) Can you run an EMR cluster in a public subnet?
a) Yes
b) No
Answer: a
Explanation: EMR clusters launch in a public subnet by default. Owing to compliance or security requirements we can also run an EMR cluster in a private subnet with no public IP addresses or attached Internet Gateway
13) Your project makes use of Redshift clusters. For security purposes you created a cluster with encryption enabled and loaded data into it. Now you have been asked to present a cluster without encryption for the final release. What can you do?
a) Remove security keys from configuration folder
b) Remove encryption from live redshift cluster
c) Create a new redshift cluster without encryption, unload data onto S3 and reload onto new cluster
Answer: c
14) You are using an on-prem HSM or CloudHSM as the hardware security module with Redshift. In addition to security what else is provided with this?
a) High availability
b) Scaling
c) Replication
d) Provisioning
Answer: a
15) CloudHSM or on-prem HSM are the options that can be used while using a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: a
16) CloudHSM is the only option that can be used while using a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: b
17) You are making use of the AWS Key Management Service for encryption purposes. Will you make use of the same keys, different keys or hybrid keys on a case by case basis?
a) same keys
b) different keys
c) hybrid keys
Answer: a
Explanation : AWS Key Management Service supports symmetric encryption where same keys are used to perform encryption and decryption
18) How is AWS key management service different than cloudHSM?
a) Both symmetric and asymmetric encryptions are supported in cloudHSM, only symmetric encryption is supported in key management service
b) CloudHSM is used for security, key management service is for replication
c) The statement is wrong. Both are same
Answer: a
19) Which among the following are characteristics of cloudHSM?
a) High availability and durability
b) Single Tenancy
c) Usage based pricing
d) Both symmetric and asymmetric encryption support
e) Customer managed root of trust
Answer: b,d,e
20) In your hadoop ecosystem you are in shuffle phase. You want to secure the data that are in-transit between nodes within the cluster. How will you encrypt the data?
a) Data node encrypted shuffle
b) Hadoop encrypted shuffle
c) HDFS encrypted shuffle
d) All of the above
Answer: b
21) Your security team has made it a mandate to encrypt all data before sending it to S3 and you will have to maintain the keys. Which encryption option will you choose?
a) SSE-KMS
b) SSE-S3
c) CSE-Custom
d) CSE-KMS
Answer : c
22) Is UPSERT supported in redshift?
a) Yes
b) No
Answer: b
23) Is a single-line insert the fastest and most efficient way to load data into Redshift?
a) Yes
b) No
Answer : b
24) Which command is the most efficient and fastest way to load data into Redshift?
a) Copy command
b) UPSERT
c) Update
d) Insert
Answer : a
25) How many concurrent queries can you run on a Redshift cluster?
a) 50
b) 100
c) 150
d) 500
Answer : a
26) Will primary and foreign key integrity constraints in a Redshift project help with query optimization?
a) Yes. They provide information to optimizer to come up with optimal query plan
b) No . They degrade performance
Answer : a
27) Is primary key and foreign key relationship definition mandatory while designing Redshift?
a) Yes
b) No
Answer : b
28) Redshift, the AWS managed service, is used for OLAP and BI. Are the queries used simple or complex queries?
a) Simple queries
b) Complex queries
c) Moderate queries
Answer : b
29) You are looking to choose a managed service in AWS that is specifically designed for online analytic processing and business intelligence. What will be your choice?
a) Redshift
b) Oracle 12c
c) amazon Aurora
d) Dynamodb
Answer : a
30) Can Kinesis streams be integrated with Redshift using the COPY command?
a) Yes
b) No
Answer : b
31) Will Machine Learning integrate directly with Redshift using the COPY command?
a) Yes
b) No
c) On case by case basis
Answer : b
32) Will Data Pipeline integrate directly with Redshift using the COPY command?
a) Yes
b) No
Answer : a
33) Which AWS services directly integrate with Redshift using the COPY command?
a) Amazon Aurora
b) S3
c) DynamoDB
d) EC2 instances
e) EMR
Answer : b,c,d,e
34) Are columnar databases like Redshift ideal for small amounts of data?
a) Yes
b) No
Answer : b
Explanation : Columnar databases are ideal for OLAP workloads that process heavy data loads, i.e. data warehouses
35) Which databases are best for online analytical processing applications OLAP?
a) Normalized RDBMS databases
b) NoSQL database
c) Column based database like redshift
d) Cloud databases
Answer : c
36) What is determined using F1 score?
a) Quality of the model
b) Accuracy of the input data
c) The compute ratio of Machine Learning overhead required to complete the analysis
d) Model types
Answer : a
Explanation : F1 score can range from 0 to 1. If the F1 score is 1, the model is of the best quality
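Since F1 is the harmonic mean of precision and recall, the score is easy to compute directly:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; ranges from 0 to 1,
    # where 1 indicates the best-quality model.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 1.0))  # 1.0
print(f1_score(0.5, 0.5))  # 0.5
```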
37) Which JavaScript library lets you produce dynamic, interactive data visualizations in web browsers?
a) Node.js
b) D3.js
c) JSON
d) BSON
Answer : b
38) How many transactions are supported per second for reads by each Kinesis shard?
a) 500 transactions per second for reads
b) 5 transactions per second for reads
c) 5000 transactions per second for reads
d) 50 transactions per second for reads
Answer : b
39) Where does Amazon Redshift automatically and continuously back up new data to?
a) Amazon redshift datafiles
b) Amazon glacier
c) Amazon S3
d) EBS
Answer : c
40) Which one acts as an intermediary between record processing logic and Kinesis Streams?
a) JCL
b) KCL
c) BPL
d) BCL
Answer : b
41) What to do when Amazon Kinesis Streams application receives provisioned-throughput exceptions?
a) increase the provisioned throughput for the DynamoDB table
b) increase the provisioned ram for the DynamoDB table
c) increase the provisioned cpu for the DynamoDB table
d) increase the provisioned storage for the DynamoDB table
Answer : a
42) How many records are supported per second for write in a shard?
a) 1000 records per second for writes
b) 10000 records per second for writes
c) 100 records per second for writes
d) 100000 records per second for writes
Answer : a
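Using the per-shard limits quoted in these questions, stream sizing is simple arithmetic:

```python
import math

WRITES_PER_SHARD = 1000  # records/sec per shard for writes
READ_TX_PER_SHARD = 5    # read transactions/sec per shard

def shards_for_writes(records_per_second: int) -> int:
    # Minimum shard count needed to absorb the given write rate.
    return math.ceil(records_per_second / WRITES_PER_SHARD)

print(shards_for_writes(4500))  # 5 shards
print(shards_for_writes(1000))  # 1 shard
```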
43) You own an amazon kinesis streams application that operates on a stream that is composed of many shards. Will default provisioned throughput suffice?
a) Yes
b) No
Answer : b
44) You have an Amazon Kinesis Streams application does frequent checkpointing . Will default provisioned throughput suffice ?
a) Yes
b) No
Answer : b
45) What is the default provisioned throughput in a table created with KCL?
a) 10 reads per second and 10 writes per second
b) 100 reads per second and 10 writes per second
c) 10 reads per second and 1000 writes per second
Answer : a
46) You have configured Amazon Kinesis Firehose streams to deliver data onto a Redshift cluster. After some time you see a manifest file in an errors folder in your Amazon S3 bucket. What could have caused this?
a) Data delivery from Kinesis Firehose to your Redshift cluster has failed and retry did not succeed
b) Data delivery from Kinesis Firehose to your Redshift cluster has failed and retry did succeed
c) This is a warning alerting the user to add additional resources
d) Buffer size in kinesis firehose needs to be manually increased
Answer : a
47) Is it true that if Amazon Kinesis Firehose fails to deliver to the destination because the buffer size is insufficient, manual intervention is mandated to fix the issue?
a) Yes
b) No
Answer : b
48) What does amazon kinesis firehose do when data delivery to the destination is falling behind data ingestion into the delivery stream?
a) system is halted
b) firehose will wait until buffer size is increased manually
c) Amazon Kinesis Firehose raises the buffer size automatically to catch up and make sure that all data is delivered to the destination
d) none of the above
Answer : c
49) Your Amazon Kinesis Firehose data delivery onto an Amazon S3 bucket fails. Automated retries have been happening every 5 seconds for 1 day, and the issue has not been resolved. What happens once this goes past 24 hours?
a) retry continues
b) retry does not happen and data is discarded
c) s3 initiates a trigger to lambda
d) All of the above
Answer : b
50) Amazon Kinesis Firehose has been constantly delivering data onto Amazon S3 buckets. Kinesis Firehose retries every five seconds. Is there a maximum duration for which Kinesis keeps on retrying to deliver data onto the S3 bucket?
a) 24 hours
b) 48 hours
c) 72 hours
d) 12 hours
Answer : a
51) Amazon Kinesis Firehose is delivering data to S3 buckets. All of a sudden, data delivery to the Amazon S3 bucket fails. At what interval does a retry happen from Amazon Kinesis Firehose?
a) 50 seconds
b) 500 seconds
c) 5000 seconds
d) 5 seconds
Answer : d
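Putting the 5-second retry interval and the 24-hour retention window together gives the maximum number of delivery attempts before data is discarded:

```python
RETRY_INTERVAL_SECONDS = 5
MAX_RETRY_WINDOW_HOURS = 24

# Retries every 5 seconds for up to 24 hours before the data is discarded.
max_attempts = MAX_RETRY_WINDOW_HOURS * 3600 // RETRY_INTERVAL_SECONDS
print(max_attempts)  # 17280
```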
52) How is data pipeline integrated with on-premise servers?
a) Task runner package
b) there is no integration
c) amazon kinesis firehose
d) all the above
Answer : a
53) Is it true that Data Pipeline does not integrate with on-premise servers?
a) True
b) False
Answer : b
54) Kinesis Firehose can capture, transform, and load streaming data into which of the amazon services?
a) Amazon S3
b) Amazon Kinesis Analytics
c) Amazon Redshift
d) Amazon Elasticsearch Service
e) None of the above
Answer : a,b,c,d
55) Which AWS service does Kinesis Firehose not load streaming data into?
a) S3
b) Redshift
c) DynamoDB
d) All of the above
Answer : c
56) You perform a write to a table that contains local secondary indexes as part of an update statement. Does this consume write capacity units from the base table?
a) Yes
b) No
Answer : a
Explanation : Yes because its local secondary indexes are also updated
57) You are working on a project wherein EMR makes use of EMRFS. What types of amazon S3 encryptions are supported?
a) server-side and client-side encryption
b) server-side encryption
c) client-side encryption
d) EMR encryption
Answer : a
58) Do you know which among the following is an implementation of HDFS which allows clusters to store data on Amazon S3?
a) HDFS
b) EMRFS
c) Both EMRFS and HDFS
d) NFS
Answer : b
59) Is EMRFS installed as a component with release in AWS EMR?
a) Yes
b) No
Answer : a
60) An EMR cluster is connected to a private subnet. What needs to be done for this cluster to interact with your local network that is connected to the VPC?
a) VPC
b) VPN
c) Direct Connect
d) Disconnect
Answer : b,c


AWS certification cost


AWS certification cost is something that needs to be considered while choosing your certification track in AWS.
To start with, is an AWS certification worth the cost?
In our opinion it is worth much more than what it is offered for right now. Always look at your career growth, stability, and future negotiation prospects in the long run, say at least 3 years from now
Is AWS a stable company?
There is a plethora of certification tracks from lots of vendors. Before spending money, all of us want to know whether AWS will survive in the long run. Here are some facts that we've gathered from our regular news updates on technology stocks and the tech industry:
1) A recent Amazon stock news item cited $4.1 billion in revenue that Amazon has made this year from AWS alone. It looks like the e-commerce giant will pivot its businesses around revenue from AWS
2) With such money-spinning potential, expect AWS to sustain and survive for a minimum of another 20 years. AWS is also developing a solid cloud computing platform aimed at outcompeting Microsoft Azure
3) If you are starting your career as a new IT professional, or are an experienced infrastructure professional (system administrator, database administrator, security administrator, network administrator), start preparing for AWS and learn to migrate systems onto it via trial runs, so that you can rest assured you will be among the last to get a pink slip in your department
4) AWS will be expanding its regions across China, Sweden and many more countries in the coming days
What is the cost of each AWS certification?
Here is the certification roadmap based on AWS website
AWS Architecting Certification Track
AWS Certified Solutions Architect Associate Exam Details:
Multiple choice and multiple answer questions
80 minutes to complete the exam. This includes regular exam Q&A as well as feedback type questions
Available in different languages like English, Japanese, Simplified Chinese, Traditional Chinese, Korean, German, Russian, Spanish, and Brazilian Portuguese
Certification Exam cost is $150
AWS Certified Solutions Architect Professional – This is available after completing any one of the associate certification tracks
Certification exam cost is $300
AWS Developer Certification Track
AWS Certified Developer Associate – Certification exam cost is $150
AWS Certified DevOps Engineer Professional – This is available after completing any one of the associate certification tracks
Certification exam cost is $300
AWS Operations Certification Track
AWS Certified SysOps Administrator Associate – Certification exam cost is $150
AWS Certified DevOps Engineer Professional – This is available after completing any one of the associate certification tracks
Certification exam cost is $300
Specialty certifications that require one current associate or professional certification
Certification exam cost is $300 for both of the specialty certification tracks
AWS Certified Big Data Specialty
AWS Certified Advanced Networking Specialty
To summarize, associate certs are priced at $150, while professional-level and specialty certifications cost $300


Posted on

AWS certified developer – associate dumps



1. Your project makes use of DynamoDB. Users purchase products. Your table has four id columns, namely user_id, product_id, department_id, order_id. There are lots of users currently registered in the system to test this product on a trial basis. There are only 3 products to start with. Which one can be the primary key?
a) user_id
b) product_id
c) order_id
d) department_id
Answer: a
Explanation : While designating a primary key, prefer an attribute with many distinct values (a many-to-few relationship between values and items); here user_id has many values while there are only 3 products and few departments
2. Which operation in DynamoDB lets you support atomic counters?
a) IncrementItem
b) MoreItem
c) UpdateItem
d) UploadItem
Answer: c
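The UpdateItem answer can be illustrated without calling AWS. The helper below builds the request parameters for an atomic counter using UpdateItem's ADD action; the table, key, and attribute names are hypothetical, and you would pass the resulting dict to a low-level DynamoDB client:

```python
def atomic_increment_params(table_name, key, counter_attr, by=1):
    """UpdateItem request sketch for an atomic counter: the ADD action
    increments the attribute server-side, so concurrent writers never
    read-modify-write a stale value."""
    return {
        "TableName": table_name,
        "Key": key,
        "UpdateExpression": f"ADD {counter_attr} :inc",
        "ExpressionAttributeValues": {":inc": {"N": str(by)}},
        "ReturnValues": "UPDATED_NEW",
    }

# Hypothetical products table keyed by product_id; a real call would be
# boto3.client("dynamodb").update_item(**params)
params = atomic_increment_params(
    "products", {"product_id": {"S": "p-1"}}, "purchase_count"
)
```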
3) You are making use of DynamoDB for your products table. The application is so busy that the provisioned throughput settings have been exceeded. What will happen next?
a) It is subject to failure
b) table needs to be recreated
c) It is subject to request throttling
d) None of the above
Answer: c
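When DynamoDB starts throttling requests (surfaced as ProvisionedThroughputExceededException in the SDKs), the standard client response is to retry with exponential backoff. A minimal sketch of the delay schedule, with made-up base and cap values (production clients usually add random jitter as well):

```python
def backoff_delays(attempts, base_s=0.05, cap_s=20.0):
    """Exponential backoff schedule for retrying throttled requests:
    the delay doubles on each attempt and is capped so a retry never
    waits longer than cap_s seconds."""
    return [min(cap_s, base_s * (2 ** i)) for i in range(attempts)]
```

Sleeping for each delay in turn between retries spreads the retried load out, giving the table's provisioned capacity time to recover.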
4) While making use of DynamoDB you have chosen provisioned throughput settings. Do you commit to paying a one-time fee in advance?
a) Yes
b) No
Answer: b
5) You have reserved capacity in DynamoDB. Do you need to pay a one-time fee in advance?
a) Yes
b) No
Answer: a
6) You newly created an IAM user in AWS. What is the default level of access granted to this newly created IAM user?
a) All access
b) No access
c) Partial access
d) Role based access
Answer: b
7) You have hosted your website in S3. You have uploaded index.html. You get a 403 Forbidden error. What is the reason?
a) index.html is corrupt
b) index.html cant function without hello.html
c) index.html should be given proper access permission to fix the issue
d) None of the above
Answer: c
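A common fix for that 403 is to attach a bucket policy granting anonymous read access to the website objects. The sketch below builds the standard public-read policy document; the bucket name is a placeholder:

```python
import json

def public_read_policy(bucket_name):
    """S3 bucket policy allowing anonymous s3:GetObject on every object,
    the usual permission fix for a static website returning 403."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    })
```

The resulting JSON string is what you would paste into the bucket's permissions page (or pass to `put_bucket_policy`).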
8) Your project makes use of S3 to store files. If you write a new key to S3, you are able to retrieve the object immediately afterwards, and any newly created objects or files are visible immediately, without any delay. What is the reason behind this?
a) S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects
b) S3 buckets in all Regions provide eventual consistency for PUTS of new objects
c) S3 buckets in all Regions provide write-after-write consistency for PUTS of new objects
d) S3 buckets in all Regions provide write-after-read consistency for PUTS of new objects
Answer: a
9) Do Amazon S3 buckets in US regions alone provide eventual consistency for overwrite PUTS and DELETES?
a) Yes
b) No
Answer: b
Explanation: As per the Amazon S3 documentation, eventual consistency for overwrite PUTS and DELETES applies in all regions, not just US regions


Posted on

AWS certified sysops administrator free practice exam


1) What are the different types of EMR nodes?
a) core nodes
b) task nodes
c) block nodes
d) map nodes
Answer: a,b
2) You have a bigdata interactive analysis project. Can you make use of Spark?
a) Yes
b) No
Answer: a
Explanation: Spark's in-memory engine makes it well suited to interactive analysis on EMR
3) Can you control the encryption keys and cryptographic operations performed by the hardware security module using CloudHSM?
a) Yes
b) No
Answer: a
4) Is data in transit between nodes encrypted in an AWS Hadoop environment?
a) yes, using Hadoop encrypted shuffle
b) yes, using CloudHSM
c) yes, using an on-premises HSM
d) None
Answer: a
5) Is it true that you can have read replicas of read replicas?
a) true
b) false
Answer: a
6) You are working on a business continuity project, typically disaster recovery configuration design. Do you know what RTO means?
a) Recovery Time Objective
b) Read Time objective
c) Remote time objective
d) Redo time objective
Answer: a
7) Your project makes use of a MySQL and PHP application. You have configured read replicas. What is the maximum number of read replicas that you can have in place?
a) 1
b) 3
c) 5
d) 7
Answer : c
8) You have configured read replicas. Can read replicas span multiple availability zones for redundancy?
a) Yes
b) No
Answer: b
9) What does ICMP protocol translate to?
a) Information Control Message Protocol
b) Internet Control Message Protocol
c) Internal Control Message Protocol
d) Intuit Control Message Protocol
Answer : b
10) Which ELB metric provides the total number of requests that are queued for a registered instance?
a) SurgeQueueLength
b) SetQueueLength
c) SumQueueLength
d) PendingQueueLength
Answer : a
11) You have been asked to copy an Amazon Machine Image (AMI) across regions. Can you accomplish that?
a) Yes
b) No
Answer: a
Explanation: It is possible to copy an AMI within as well as across AWS regions. This is possible using the AWS management console, the AWS CLI or the SDKs
12) You are utilizing tools like the AWS management console to copy an AMI across regions. Which action is internally made use of for this purpose?
a) CopyImage
b) MoveImage
c) MigrateImage
d) DumpImage
Answer: a
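For questions 11 and 12, the cross-region copy maps to a single API call. The helper below assembles the parameters for EC2's CopyImage action (boto3's `copy_image`); the region names and image id are placeholders, and note that the call is issued against a client bound to the *destination* region:

```python
def copy_image_params(source_region, source_image_id, name):
    """Parameter sketch for the EC2 CopyImage action, which copies an
    AMI from source_region into whichever region the client is bound to."""
    return {
        "SourceRegion": source_region,
        "SourceImageId": source_image_id,
        "Name": name,
    }

# Hypothetical usage (placeholder ids):
# ec2_west = boto3.client("ec2", region_name="us-west-2")
# ec2_west.copy_image(**copy_image_params("us-east-1", "ami-0123456789abcdef0", "web-ami-copy"))
params = copy_image_params("us-east-1", "ami-0123456789abcdef0", "web-ami-copy")
```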
13) What types of AMIs can be copied as part of the CopyImage action?
a) EBS-backed AMIs
b) instance store-backed AMIs
c) Both a and b
d) None of the above
Answer: c
14) You have an AMI with an encrypted snapshot. Will you be able to copy it using the CopyImage action?
a) Yes
b) No
Answer: a
15) What are the virtualization types supported by Linux AMIs?
a) paravirtual
b) hardware virtual machine
c) software virtual machine
d) Firmware virtual machine
Answer: a,b
16) You have an HVM AMI. How will this AMI boot?
a) Executing the master boot record of the root block device of the image
b) Executing the master boot record of the EBS
c) Both a and b
d) None of the above
Answer: a
17) You are connecting to AWS using the AWS CLI. Do you need to create an IAM user for this?
a) Yes
b) No
Answer : a
18) You create an Amazon S3 bucket. You upload a file called file1, then a second file called file2. Now you enable versioning and upload file1 and file2 again. What will be the version id of the file1 and file2 uploaded in the first attempt, before versioning was enabled?
a) zero
b) one
c) null
d) two
Answer : c
Explanation : An object uploaded prior to versioning will have version id as null
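The "null" version id behaviour can be demonstrated without touching S3. Below is an in-memory stand-in (not the real S3 API) that mimics the rule: objects that exist before versioning is enabled carry the literal version id "null", while later uploads get generated ids:

```python
class VersionedBucketSim:
    """Toy model of S3 versioning semantics (not the real API)."""

    def __init__(self):
        self.versioning_enabled = False
        self.versions = {}      # key -> list of (version_id, body)
        self._next_id = 0

    def put(self, key, body):
        if self.versioning_enabled:
            # Each versioned upload gets a fresh generated id.
            self._next_id += 1
            version_id = f"v{self._next_id}"
            self.versions.setdefault(key, []).append((version_id, body))
        else:
            # Pre-versioning uploads overwrite in place with id "null".
            version_id = "null"
            self.versions[key] = [(version_id, body)]
        return version_id
```

Replaying the question's scenario against this model shows the first-attempt uploads keeping their "null" id alongside the new versions.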
19) Does Amazon S3 support SOAP?
a) Yes
b) No
Answer : b
Explanation : SOAP support is available only over HTTPS for legacy operations; it is not available over HTTP, and new Amazon S3 features do not support SOAP
20) You are trying to create a new bucket in Amazon S3 and end up getting a 409 Conflict error. What is the reason behind this?
a) You don't have access permissions to create the bucket
b) You don't have space allocated to create buckets
c) The bucket name already exists
d) BadDigest owing to a wrong MD5 specification
Answer: c
21) What is the best practice recommendation while making use of IAM role?
a) Create an IAM role that has specific access to AWS service. Grant access to the specified AWS service via the role
b) Grant access to service directly
c) Create IAM role for users
d) None of the above
Answer: a
22) How does Amazon ElastiCache improve system performance?
a) Information is retrieved from an in-memory system instead of disk-based traditional systems
b) Information retrieved from SSD
c) Information retrieved from flash drive
d) Information retrieved from RAID
Answer: a
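The in-memory answer reflects the cache-aside pattern typically built around ElastiCache. A minimal local sketch, with a Python dict standing in for the cache cluster and a callback standing in for the slow database read:

```python
import time

class CacheAside:
    """Cache-aside read path: return cached values while they are fresh,
    otherwise load from the backing store and populate the cache."""

    def __init__(self, ttl_s=300.0):
        self.ttl_s = ttl_s
        self._cache = {}        # key -> (value, expires_at)
        self.hits = 0
        self.misses = 0

    def get(self, key, load_from_db):
        entry = self._cache.get(key)
        if entry is not None and entry[1] > time.monotonic():
            self.hits += 1      # fast in-memory path
            return entry[0]
        self.misses += 1
        value = load_from_db(key)   # the slow, disk-based path
        self._cache[key] = (value, time.monotonic() + self.ttl_s)
        return value
```

Repeated reads of the same key skip the backing store entirely until the TTL expires, which is the performance win the question is pointing at.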
23) You are in the process of creating an AMI. Which API call is internally triggered during this process?
a) CreateImage
b) RegisterImage
c) ami-store-image
d) ami-deploy image
Answer: b
24) You have been asked to configure an access policy that allows anonymous access to a message queue. How will you accomplish this?
a) via AWS CLI
b) Via IAM policies
c) Via roles
d) None of the above
Answer: b
25) You are making use of the AWS Simple Queue Service. How many message queues can you create in SQS?
a) 200
b) 2000
c) unlimited
d) 2
Answer: c
Explanation: There is no limit on the number of message queues that can be created in SQS
26) What is the maximum visibility timeout in SQS?
a) 24 hours
b) 36 hours
c) 12 hours
d) 72 hours
Answer: c
Explanation: The maximum visibility timeout for an Amazon SQS message is 12 hours
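Visibility timeout semantics can also be shown with a local stand-in (not the real SQS client): once a message is received, it stays invisible to other consumers until the timeout, capped at 12 hours, elapses:

```python
class VisibilityQueueSim:
    """Toy model of the SQS visibility timeout (time is passed in
    explicitly so the behaviour is easy to step through)."""

    MAX_VISIBILITY_TIMEOUT_S = 12 * 60 * 60   # SQS cap: 12 hours

    def __init__(self):
        self._messages = []   # list of [body, visible_at]

    def send(self, body):
        self._messages.append([body, 0.0])

    def receive(self, now_s, visibility_timeout_s=30.0):
        timeout = min(visibility_timeout_s, self.MAX_VISIBILITY_TIMEOUT_S)
        for message in self._messages:
            if message[1] <= now_s:
                message[1] = now_s + timeout   # hide from other consumers
                return message[0]
        return None   # nothing currently visible
```

A second receive inside the timeout window returns nothing; once the window passes without a delete, the message becomes visible again, which is exactly the redelivery behaviour the timeout controls.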
