AWS cloud support engineer interview questions

AWS is an Amazon company with lots of openings for fresh talent and an openness to fresh ideas and innovation. Amazon Web Services, the cloud service that has moved infrastructure from the physical data center onto the online cloud, has been hiring engineers in various capacities including cloud support associate, cloud support engineer, senior cloud support engineer, cloud architect and support manager. For a fresh graduate out of college this is a lucrative career option to eye. Here we have proposed some interview questions that will help you crack the AWS interview, including AWS cloud support engineer interview questions. The questions overlap across the AWS cloud support associate, AWS cloud support engineer and AWS cloud architect roles, as all these positions demand good knowledge, skill and expertise in Linux/UNIX operating systems and networking basics to start with.
Note that these are not actual interview questions and have nothing to do with them. This is an aid prepared from an analysis of the AWS technology stack, current job openings and the job role responsibilities AWS advertises on popular websites, covering the AWS cloud support engineer, AWS cloud support associate and AWS cloud support manager roles.
1) Why should we consider AWS? How would you convince a customer to start using AWS?
The primary advantage is going to be cost savings. As a cloud support engineer your job role involves talking to current and prospective customers to help them determine whether they really should move to AWS from their current infrastructure. In addition to providing a convincing answer in terms of cost savings, it is better to give them a really simple explanation of the flexibility, the elastic capacity planning that offers pay-as-you-use infrastructure, the easy-to-manage AWS console and so on.
2) What is your current job profile? How would you add value to customers?
Though AWS is looking to hire fresh talent for cloud support engineer openings, if you have some work experience on the infrastructure side of the business, say as a system administrator, network administrator, database administrator, firewall administrator, security administrator or storage administrator, you are still a candidate to be considered for the interview.
All they are looking for is overall infrastructure knowledge: a little knowledge about the different tech stacks, how they inter-operate, and what it will be like once the infrastructure is on the web rather than in a physical data center.
If you don't have experience with AWS, don't worry. Try to leverage the ways and means you adopted to solve customer support calls, both internal and external, to let them know how you can bring value to the table.
Have some overview of how the different components of an infrastructure interact.
AWS wants to see your proactive measures towards the customer relationship. Say you are going to discuss a project or an issue with a customer: it is better to have some preparatory work that comes in handy rather than being reactive. Value addition comes in terms of recommending the best solution and utilization of AWS services, helping customers make decisions easily and fast.
3) Do you know networking?
You can be from many different backgrounds, say development, infrastructure, QA, customer support, network administration, system administration or firewall administration, but you should know networking. The cloud is network based, and networking knowledge is very important to fix the application issues that get escalated.
4) What networking commands do you make use of on daily basis to fix issues?
When we work with servers, be they physical or virtual, the first command that comes in handy to locate the request/response path taken is traceroute. On Windows systems the equivalent command is tracert.
There are some more important and interesting commands – ping, ipconfig and ifconfig – that deal with network communication, network addresses and interface configuration.
DNS commands – nslookup, and a lookup of the /etc/resolv.conf file in Linux systems to get details on DNS
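A minimal diagnostic sequence, assuming a Linux host (example.com is a placeholder target):
traceroute example.com   # show the hop-by-hop path requests take
ping -c 4 example.com    # confirm reachability and measure latency
nslookup example.com     # confirm the name resolves via DNS
cat /etc/resolv.conf     # verify which DNS servers are configured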
5) What is the advantage of using TCP protocol?
TCP is used to exchange data reliably. It uses mechanisms of sequencing and acknowledgment, error detection and error recovery. This comes with the advantage of reliable applications, but at the cost of a hit in transmission time.
6) What is UDP?
User Datagram Protocol, called UDP, is a connectionless protocol that can be used for fast, efficient applications that need less transmission time compared to TCP
7) Do you know how the internet works in your environment?
This can be your home or office. Learn more about the modem and its role in establishing the connection
8) What is a process? How do you manage processes in Linux:-
In Linux/UNIX based operating systems a process is started or created when a command is issued. In simple terms, while a program is running in an OS an instance of the program is created. This is the process. To manage processes in Linux, process management commands come in handy:
ps – the commonly used process management command to start with. The ps command provides details on the currently running active processes
top – provides details on all processes in real time, including processor activity and memory usage. The ps command lists a snapshot of active processes, whereas top continuously displays them
kill – to kill a process using its process id (which the ps command provides), issue
kill pid
killall proc – similar to the kill command, but kills all processes matching the name proc
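For example, a typical sequence to find and stop a misbehaving process might look like this (the process name myapp and PID 1234 are placeholders):
ps aux | grep myapp   # locate the process and note its PID
kill 1234             # request graceful termination (SIGTERM)
kill -9 1234          # force kill, only if the process ignores SIGTERM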
9) Give details on foreground and background jobs command in Linux:-
fg – brings the most recent job to the foreground. Typing the command fg will resume the most recently suspended job
fg n – brings job n to the foreground. For example, a job that was recently backgrounded can be brought to the foreground by typing fg 1
bg – resumes a suspended program in the background without bringing it to the foreground. Use the jobs command for details on the list of stopped jobs as well as current background jobs
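A quick illustrative session, assuming a bash shell:
sleep 600    # start a long-running job in the foreground
# press Ctrl+Z to suspend it
bg           # resume the suspended job in the background
jobs         # list stopped and background jobs
fg %1        # bring job number 1 back to the foreground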
10) How to get details on current date and time in Linux?
Make use of the date command, which shows the current date and time. To get the current month's calendar use the cal command
uptime – shows current uptime
11) What is difference between command df and du?
In Linux both df and du are space related commands showing file system space information:
df – provides details on disk space usage of mounted file systems
du – provides details on directory space usage
free – this command shows details on memory and swap usage
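The human readable flags are worth knowing; a quick check might be:
df -h              # disk space usage per file system, human readable
du -sh /var/log    # total space used by a directory (path is an example)
free -m            # memory and swap usage in megabytes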
12) What are the different commands and options to compress files in Linux?
Let's start with creating a tar named test.tar containing the needed files:
tar cf test.tar files
Once the tar is available and uploaded on AWS, there is a need to untar the files. Use the command as follows:
tar xf test.tar
We can create a tar with gzip compression that minimizes the size of the files to be transferred, creating test.tar.gz at the end:
tar czf test.tar.gz files
To extract the gzipped tar compressed files use the command:
tar xzf test.tar.gz
Bzip2 compression can be used to create a tar as follows:
tar cjf test.tar.bz2 files
To extract bzip2 compressed files use
tar xjf test.tar.bz2
To simply make use of gzip compression use
gzip testfile – This creates testfile.gz
To decompress testfile.gz use gzip -d testfile.gz
13) Give examples on some common networking commands you have made use of?
Note that the AWS stack is primarily Linux based, and the over-the-cloud architecture makes it heavily network dependent. As a result, an AWS interview can include networking questions irrespective of your system admin, database admin or bigdata admin background. Learn these simple networking commands:
When a system is unreachable, the first step is to ping the host and make sure it is up and running
ping host – pings the host and outputs the results
Domain related commands, as AWS has become the preferred hosting for major internet based companies and SaaS firms:
To get DNS information of the domain use – dig domain
To get whois information on domain use – whois domain
Host reverse lookup – dig -x host
Download file – wget file
To continue stopped download – wget -c file


14) What is your understanding of SSH?
SSH, the Secure Shell, is widely used for safe communication. It is a cryptographic network protocol used for operating network services securely over an unsecured network. Some of the commonly used ssh commands include:
To connect to a host as a specified user using ssh use this command:
ssh username@hostname
To connect to a host on a specified port make use of this command
ssh -p portnumber username@hostname
To enable a keyed or passwordless login into specified host using ssh use
ssh-copy-id username@hostname
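A typical passwordless login setup with OpenSSH (username and hostname are placeholders):
ssh-keygen -t rsa               # generate a key pair, accepting the defaults
ssh-copy-id username@hostname   # install the public key on the remote host
ssh username@hostname           # subsequent logins no longer prompt for a password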
15) How do you perform search in Linux environment?
Searching and pattern matching are some common functions that typically happens in Linux environment. Here are the Linux commands:
grep – the first and foremost command when it comes to searching files for a pattern. Here is the usage:
grep pattern_match test_file – this will search for pattern_match in test_file
To search for a pattern in a directory that has a set of files, use the recursive option as follows – grep -r pattern dir – searches for pattern in the directory recursively
A pattern can also be searched in concatenation with another command (i.e. the output of a command is used as input for the pattern search and match) – command | grep pattern
To find all instances of a file use locate command – locate file
16) Give details on some user related commands in Linux:-
Here are some user related Linux commands:
w – displays details on who is online
whoami – to know whom you are logged in as
finger user – displays information about the user
17) How to get details on kernel information in Linux?
uname -a command provides details on kernel information
18) How to get CPU and memory info in Linux machine?
Issue the following commands:
cat /proc/cpuinfo for cpu information
cat /proc/meminfo for memory information
19) What are the file system hierarchy related commands in linux?
The file system hierarchy, starting with raw disks, the way disks are formatted into file systems, and files grouped together as directories, is important for cracking the AWS interview. Here are some file system related commands that come in handy:
touch filename – creates a file named filename. This command can also be used to update a file's timestamps
ls – lists the files and directories
ls -al – All files including hidden files are listed with proper formatting
cd dir – change to specified directory
cd – Changes to home directory
pwd – print working directory; shows details on the current directory
Make a new directory using mkdir command as follows – mkdir directory_name
Remove file using rm command – rm file – removes file
To delete directory use -r option – rm -r directory_name
Remove a file forcefully using -f option – rm -f filename
To force remove a directory and its contents use – rm -rf directory_name
Copy the contents from one file to another – cp file1 file2
To copy across directories use – cp -r dir1 new_dir – if new_dir does not exist it is created by the copy command
Move or rename a file using the mv command – mv file1 new_file
If new_file is a directory that already exists, file1 will be moved into that directory
more filename – outputs the contents of the file one screen at a time
head file – output the first 10 lines of the file
tail file – output the last 10 lines of the file
tail -f filename – outputs the contents of the file as it grows, starting with the last 10 lines
Create symbolic link to a file using ln command – ln -s file link – called soft link
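Putting a few of these together, a short session might look like this (names are placeholders):
mkdir demo && cd demo     # create a working directory and enter it
touch notes.txt           # create an empty file
cp notes.txt backup.txt   # copy it
mv backup.txt old.txt     # rename the copy
ls -al                    # list everything, including hidden files
rm old.txt                # remove the copy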
20) What command is used for displaying manual of a command?
Make use of the command man command_name
21) Give details on app related commands in linux:-
which app – shows details on which app will be run by default
whereis app – shows possible locations of application
22) What are the default port numbers of http and https?
Questions on the http and https port numbers are a first step, since launching the webapp is involved when a customer reports an issue
The default port number of http is 80 (8080 is a common alternate)
Default port number of https is 443
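On a Linux server you can verify what is actually listening on these ports (localhost checks shown; -k skips certificate verification for self-signed certs):
ss -tlnp                     # list all listening TCP sockets and the owning processes
curl -I http://localhost     # expect HTTP response headers if a web server is on port 80
curl -kI https://localhost   # the same check over https (port 443)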
23) What is use of load balancer?
A load balancer is used to increase the capacity and reliability of applications, where capacity means the number of users connecting to the applications. The load balancer distributes network and application traffic across many different servers, increasing application capacity
24) What is sysprep tool?
The System Preparation tool comes as a free tool with Windows and can be accessed from the %systemroot%\system32\sysprep folder. It is used to duplicate, test and deliver new installations of Windows based on an established installation
25) User is not able to RDP into server. What could be the reason?
The probable reason is that the user is not part of the Remote Desktop Users local group of the terminal servers
26) How would you approach a customer issue?
Most of the work of an AWS support engineer involves dealing with customer issues. As with any other support engineer, an AWS engineer should follow the approach of questioning the customer, listening to them, and confirming what has been collected. This is called the QLC approach, a much needed step to cover the issue description and confirm it
27) What types of questions can you ask a customer?
A support engineer can ask two types of questions:
1) Open ended questions – in this case your question will be a single statement, and the answer you expect from the customer is detailed
2) Closed questions – in this case your question will have yes or no, true or false type answers, or a single word answer in some cases
28) How do you consider the customer from an AWS technology perspective?
Even though the customer can be a long standing customer of AWS, always think of the customer as a common person with no knowledge of AWS; talk more to them and explain more details to them to get a correct issue description statement
29) Give details on redirection operators in Linux?
> – the greater-than symbol is the output redirection operator, used to write the output of a command into a file. Typically this is used to redirect the output of a command into a logfile. If the file already exists the contents are overwritten, and only the most recent content is retained
>> – this is the same as output redirection, except that it appends to the file if the file already exists
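A quick illustration (the file name is a placeholder):
echo "first run" > app.log    # creates app.log, or overwrites it if it exists
echo "second run" >> app.log  # appends instead of overwriting
cat app.log                   # shows both lines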
30) Explain the difference between a hardlink and a softlink in simple terms?
A hardlink is a link to the inode, which holds the file contents; a softlink is a link to the filename. If the filename changes, the softlink is not updated and breaks. The ln command creates both: for a hardlink it is simply ln, for a softlink the ln -s option is used
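A quick way to see the difference (file names are placeholders):
echo data > original.txt
ln original.txt hard.txt                  # hard link: shares the inode of original.txt
ln -s original.txt soft.txt               # soft link: points to the name original.txt
ls -li original.txt hard.txt soft.txt     # the first two show the same inode number
rm original.txt                           # hard.txt still holds the data; soft.txt is now broken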
31) What are some common linux commands AWS engineer should be aware of?
1) cat – a plain and simple command to access a file in UNIX
2) ls – provides details on the list of files and directories
3) ps – the process command provides details on the list of processes in the system
4) vmstat – virtual memory statistics; comes in handy during performance tuning
5) iostat – command to determine I/O issues
6) top – provides details on the top resource consuming processes
7) sar – a UNIX utility mainly used for tuning purposes
8) rm – used to remove files
9) mv – moves files and directories
cd – Enables us to change directories
date – gives us the time and date
echo – we can display text on our screen
grep – a pattern matching command. It enables us to see if a certain word or set of words occurs in a file or in the output of any other command.
history – gives us the commands entered previously by us or by other users
passwd – this command enables us to change our password
pwd – to find out our present working directory or to simply confirm our current location in the file system
uname – gives details of the system when used with options, including the system name, kernel version etc.
whereis – gives us exact location of the executable file for the utility in the question
which – enables us to find out which version (of possibly multiple versions) of the command the shell is using
who – this command provides us with a list of all the users currently logged into the system
whoami – this command indicates who you are logged in as. If a user logs in as userA and does an su to userB, whoami displays userB as the output (who am i still displays userA).
man – this command displays a great deal of information about the command in question
find – this command gives us the location of the file in a given path
more – this command shows the contents of a file, one screen at a time
ps – this command gives the list of all processes currently running on our system
cat – this command lets us to read a file
vi – a text editor that enables us to read a file and write to it
emacs- this is a text editor that enables us to read a file and write to it
gedit – this editor enables us to read a file and write to it
diff – this command compares two files, returns the lines that are different, and tells us how to make the files the same
export – we can make a variable's value available to child processes by exporting the variable. This command is valid in bash and ksh.
setenv – this is the same as the export command; used in csh and tcsh
env – displays the set of environment variables at the prompt
echo $variablename – displays the current value of the variable
source – whenever an environment variable is changed, we need to export the changes. The source command puts the environment variable changes into immediate effect. It is used in csh and tcsh
.profile – in ksh and bash, use the command '. .profile' to get the same result as using source
set noclobber – avoids accidental overwriting of an existing file when we redirect output to a file. It is a good idea to include this command in a shell startup file such as .cshrc
32) What are the considerations while creating username/user logins for Security Administration purpose?
It is a good practice to follow certain rules while creating usernames/user logins
1) User name/user login must be unique
2) User name/user login must contain a combination of letters, numerals, underscores (_), hyphens (-) and periods (.)
3) There should not be any spaces/tab spaces while creating user names/user logins
4) User name must begin with a letter and must have at least one lowercase letter
5) Traditionally user names were limited to eight characters; most modern systems accept 2 to 32 characters
6) It is a best practice to have alphanumeric user names/user logins. It can be a combination of lower case letters, upper case letters, numerals, punctuation
33) Give details on /etc/profile the system profile file and its usage in linux environment:-
This is another important UNIX system administration file that has much to do with user administration. /etc/profile, the system profile file, is run when we first log into the system. After this the user profile file is run. The user profile is the file wherein we define the user's environment details. The following are the different forms of user profile files:
.profile
.bash_profile
.login
.cshrc
/home/username is the default home directory. The user's profile file resides in the user's home directory.
34) How to perform core file configuration in Linux environment?
Let's consider a UNIX flavor, say Solaris. Core file configuration involves the steps given below.
1) As a root user, use the coreadm command to display the current coreadm configuration :
# coreadm
2) As a root user, issue the following command to change the core file setup :
# coreadm -i /cores/core_new.%n.%f
3) Run the coreadm command again to verify that the changes have been made permanent :
# coreadm
The output line "init core file pattern :" will reflect the new changes made to the core file configuration.
From Solaris 10 onwards, the coreadm process is configured by the Service Management Facility (SMF) at system boot time. We can use the svcs command to check the status. The service name for the coreadm process is :
svc:/system/coreadm:default
35) How do you configure or help with customer printer configuration?
Administering printers involves a recurring set of steps. Once the printer server and printer client are set up, we may need to perform the following administrative tasks frequently :
1) Check the status of printers
2) Restart the print scheduler
3) Delete remote printer access
4) Delete a printer
36) How is zombie process recognized in linux and its flavors? How do you handle zombie process in linux environment?
A zombie process in UNIX/Linux/Sun Solaris/IBM AIX is recognized by the state Z. It doesn't use CPU resources, but it still uses space in the process table.
It is a dead process whose parent did not clean up after it, and it is still occupying space in the process table.
Zombies are defunct processes that are automatically removed when a system reboots.
Keeping the OS and applications up to date with the latest patches helps prevent zombie processes.
Properly using the wait() call in the parent process prevents zombie processes.
SIGCHLD is the signal sent by the child to the parent upon task completion, after which the parent reaps the child (proper termination).
kill -s SIGCHLD ppid – prompts the parent process to reap its zombie children; if the parent does not, killing the parent re-parents the zombie to init, which cleans it up
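To spot zombies on a Linux box, filter the process list on state Z (the parent PID 1234 below is a placeholder):
ps -eo pid,ppid,state,comm | awk '$3=="Z"'   # list zombie PIDs along with their parent PIDs
kill -s SIGCHLD 1234                         # ask the parent found above to reap its children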
37) What is the use of /etc/ftpd/ftpusers in Linux?
/etc/ftpd/ftpusers is used to restrict which users can use FTP (File Transfer Protocol). Ftp is a security threat as the password is not encrypted while using ftp. Ftp must not be used by sensitive user accounts such as root, snmp, uucp, bin and admin (default system user accounts).
As a security measure we have a file called /etc/ftpd/ftpusers created by default. The users listed in this file are not allowed to do ftp. The ftp server in.ftpd reads this file before allowing users to perform ftp. If we want to restrict a user from doing ftp, we have to include their name in this file.
38) Have you ever helped a customer restore a root file system in their environment?
Restoring the root file system (/) involves the steps below, which apply to both SPARC and x86 (Intel) machines.
1) Log in as the root user. It is a security practice to log in as a normal user and perform an su to take on the root user (super user) role.
2) Appearance of the # prompt is an indication that the user is root
3) Use the who -a command to get information about the current user
4) When the root filesystem (/) is lost because of disk failure, we boot from CD or from the network.
5) Add a new system disk to the system on which we want to restore the root (/) file system
6) Create a file system using the command :
newfs /dev/rdsk/partitionname
7) Check the new file system with the fsck command :
fsck /dev/rdsk/partitionname
8) Mount the filesystem on a temporary mount point :
mount /dev/dsk/devicename /mnt
9) Change to the mount directory :
cd /mnt
10) Write protect the tape so that we can't accidentally overwrite it. This is an optional but important step
11) Restore the root file system (/) by loading the first volume of the appropriate dump level tape into the tape drive. The appropriate dump level is the lowest dump level of all the tapes that need to be restored. Use the following command :
ufsrestore -rf /dev/rmt/n
12) Remove the tape and repeat step 11 if there is more than one tape for the same level
13) Repeat steps 11 and 12 with the next dump levels. Always begin with the lowest dump level and end with the highest dump level tape
14) Verify that the file system has been restored :
ls
15) Delete the restoresymtable file which is created and used by the ufsrestore utility :
rm restoresymtable
16) Change to the root directory (/) and unmount the newly restored file system
cd /
umount /mnt
17) Check the newly restored file system for consistency :
fsck /dev/rdsk/devicename
18) Create the boot blocks to restore the root file system :
installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/devicename — SPARC system
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/devicename — x86 system
19) Remove the last backup tape, and insert a new tape onto which we can write. Make a dump level 0 backup of the newly restored system by issuing the following command :
ufsdump 0ucf /dev/rmt/n /dev/rdsk/deviceName
This step is needed because ufsrestore repositions the files and changes the inode allocations – the old backup will not truly represent the newly restored file system
20) Reboot the system :
#reboot (or)
# init 6
System gets rebooted and newly restored file systems are ready to be used.
39) What is the monitoring and reporting tool that comes as part of the AWS console?
CloudWatch, the tool listed under the management section of the AWS console, helps with monitoring and reporting metrics in an AWS environment. The following metrics can be monitored as part of CloudWatch:
1) CPU
2) Disk utilization
3) Network
4) Status Check
In addition to the above mentioned metrics, RAM can be monitored as a custom metric using CloudWatch
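As an illustration, CPU utilization for a single instance can be pulled with the AWS CLI (the instance id and time range are placeholders):
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average --period 300 \
  --start-time 2020-01-01T00:00:00Z --end-time 2020-01-01T01:00:00Z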
40) Give details on status checks in CloudWatch?
In an AWS environment the status of both the instance and the system needs to be monitored. As such, there are system status check and instance status check sections associated with each and every EC2 instance. As the name implies, the system status check makes sure that the physical machine on which the instance is hosted is in good shape. The instance status check is at the level of the EC2 instance, which literally translates to the virtual machine in an AWS environment
41) What happens if a failure is reported in the status check section of AWS?
Depending on what type of failure has been reported, the following actions can be taken:
In case of system failure – Restart the virtual machine; in AWS terms, restart the EC2 instance. This will automatically bring up the virtual machine on physical hardware that is issue free
Instance failure – Depending on the type of failure reported in the EC2 instance, this can be a stop and start of the virtual machine to fix the issue. In case of disk failure, appropriate action can be taken at the operating system level to fix the issues
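The same checks and the stop/start remediation can be driven from the AWS CLI (the instance id is a placeholder):
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0   # SystemStatus covers the host, InstanceStatus covers the VM
aws ec2 stop-instances --instance-ids i-0123456789abcdef0             # a stop followed by a start
aws ec2 start-instances --instance-ids i-0123456789abcdef0            # relaunches the VM on healthy hardware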
42) What is an EC2 instance in AWS?
This is the basic component of AWS infrastructure. EC2 translates to Elastic Compute Cloud. In practice this is a virtual machine, created from a pre-built template hosted in AWS, that can be chosen and customized to fit the application needs
This is the prime AWS service that eliminates a business's necessity to own a data center to maintain their servers, hosts etc
43) What is an ephemeral storage?
An ephemeral storage is a storage that is temporary (or) non-persistent
44) What is the difference between an instance and a system status check in CloudWatch?
An instance status check checks the EC2 instance in an AWS environment, whereas a system status check checks the underlying host
45) What is the meaning of an EBS volume status check warning?
An EBS volume is degraded or severely degraded. Hence, a warning in an EBS environment is something that can't be ignored, as with other systems
46) What is the use of the ReplicaLag metric in AWS?
ReplicaLag is a metric used to monitor the lag between the primary RDS instance (Relational Database Service, the database equivalent in an AWS environment) and the read replica, the secondary database system that is in read-only mode
47) What is the minimum granularity level that CloudWatch can monitor?
The minimum granularity that CloudWatch can monitor is 1 minute. In most real-time cases 5 minute metric monitoring is configured
48) What is the meaning of EBS volume impaired?
EBS volume impaired means that the volume is stalled or not available
49) Where is ELB latency reported?
The latency reported by the Elastic Load Balancer (ELB) is available in CloudWatch
50) What is included in the EC2 instance launch log?
Once the EC2 instance is created, configured and launched, the following details are recorded in the instance launch log:
Creating security groups – The result needs to be Successful. In case of issues the status will be different
Authorizing inbound rules – For proper authorization this should show Successful
Initiating launches – Again this has to be Successful
At the end we see a message that says Launch initiation complete
51) What will happen once an EC2 instance is launched?
After the EC2 instance has been launched it will be in the running state. Once an instance is in the running state it is ready for use. At this point usage hours, which are typically billable resource usage, start accruing. This continues until we stop or terminate the instance. The next immediate step is to view the instance
52) What is maximum segment size (MSS)? How is this relevant to AWS?
The maximum segment size is the important factor that determines the size of an unfragmented data segment. AWS is cloud based and the products hosted are accessed via an internet connection. For data segments to successfully pass through all the routers during transit, their size should be acceptable across the routers. If they grow too big, the data segments get fragmented. This eventually leads to network slowness
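On Linux you can probe the largest unfragmented payload with ping's don't-fragment option (the host is a placeholder; 1472 bytes = 1500 byte Ethernet MTU minus 28 bytes of IP and ICMP headers):
ping -M do -s 1472 example.com   # succeeds if the path carries full 1500 byte frames
ping -M do -s 1473 example.com   # reports 'message too long' if the path MTU is 1500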
53) How does a load balancer check EC2 instance availability in an AWS environment?
Periodically the load balancer sends pings, attempts connections and sends requests to EC2 instances to check their availability. Often these tests are referred to as health checks in an AWS environment
54) Give details on health checks and the status of instances in an AWS environment :-
In an AWS environment, to check the status of EC2 instances the load balancer periodically sends pings, attempts connections and sends requests to the EC2 instances. This process is referred to as a health check in an AWS environment
If an EC2 instance is healthy and functioning normally at the time of the health check, the status will be InService
If an instance does not respond it is unhealthy and its status will be OutOfService
55) Which instances are candidates to be part of a health check?
If an instance is registered with a load balancer it is a candidate for the health check process in AWS. This covers instances in both healthy and unhealthy statuses, which are typically InService and OutOfService respectively
56) What happens when an instance in an AWS environment has been found to be in an unhealthy state?
Requests will not be routed to unhealthy instances by the load balancer. Once the instance's health is restored to healthy status, requests are routed there again
57) What is IPSec?
IPSec refers to Internet Protocol Security, which is used to securely exchange data over a public network such that no one can view or read it except the intended parties. IPSec makes use of two mechanisms that work together to exchange data in a secure manner over public networks. Neither of these mechanisms is mandatory; we can use just one or both together. The two mechanisms of IPSec are
Authentication header – Used to digitally sign the entire contents of each packet that protects against tampering, spoofing, replay attacks. The major disadvantage of authentication header is that though this protects data packets against tampering the data is still visible to hackers. To overcome this ESP can be used
Encapsulating Security Payload – ESP provides authentication, replay-proofing and integrity checking by making use of 3 components namely ESP header, ESP trailer, ESP authentication block
58) What are the different types of IPSec modes?
Tunnel mode and transport mode are the two modes that we can configure IPSec to operate in. Tunnel mode is the default mode and is used for communication between gateways like routers and ASA firewalls, or from an end-station to a gateway. Transport mode is used for end-to-end communication between a client and a server, or between a workstation and a gateway, like a telnet connection or a remote desktop connection between a workstation and a server over VPN
59) In a class B network give the relationship between the network /count (prefix length) and the number of hosts possible :-
/count   No of hosts possible
/16      65536
/17      32768
/18      16384
/19      8192
/20      4096
/21      2048
/22      1024
/23      512
/24      256
/25      128
/26      64
/27      32
/28      16
/29      8
/30      4
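The pattern is simply 2 raised to the number of host bits, i.e. 2^(32 − prefix length); subtract 2 (the network and broadcast addresses) for usable hosts. A quick check in the shell:
echo $((2 ** (32 - 22)))   # prints 1024, matching the /22 row above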
60) In a class C network give the relationship between the network /count (prefix length) and the number of hosts possible :-
/count   No of hosts possible
/24      256
/25      128
/26      64
/27      32
/28      16
/29      8
/30      4
61) You are a DBA and have been assigned the task of migrating an Oracle database to AWS with minimal to no impact on the source database. How will you achieve this?
Make use of the Database Migration Service. This will help you migrate databases securely and easily. This tool enables live migration of data, making sure the source database is up and running during the migration
62) Which AWS service will you make use of to monitor CPU utilization of an EC2 resource in an AWS environment?
AWS CloudWatch is a monitoring service that can be used for management as well in an AWS environment. We can get data insights to monitor system performance and optimize resource utilization in an AWS environment
63) Give details on some AWS terminologies you need to be aware of as a support engineer :-
Here are some common terminologies that you will come across in your daily job:
EC2 instance – this is how the virtual machine is referred to in an AWS environment
Region – the physical geographical locations that host AWS datacenters are referred to as regions; these keep expanding with the growth of AWS
RDS – the database related service, commonly called Relational Database Service
S3 – the storage service from AWS
EBS – Elastic Block Storage, another storage option from AWS
Availability zone – commonly referred to as AZ
Virtual private cloud – commonly called VPC; a data center in virtual format in AWS
64) What is the use of Wireshark?
This is an open-source packet analyzer tool commonly used to monitor network traffic coming in and out of the servers hosting applications. At times it is used to monitor and make sure there are no security threats in the system
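Captures are often taken on the server with tcpdump and analyzed later in Wireshark (the interface and filter below are examples):
sudo tcpdump -i eth0 -w capture.pcap port 443   # capture HTTPS traffic to a file
# copy capture.pcap to a workstation and open it in Wireshark for analysis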


AWS big data certification practice tests

1) Can you configure your system to permit users to create user credentials and log on to the database based on their IAM credentials?
a) Yes
b) No
Answer: a
Explanation : Users in an Amazon Redshift database can log on using their normal database account credentials as well as their IAM account credentials
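A sketch of the IAM based flow using the AWS CLI (the cluster name, user and database are placeholders); the call returns temporary database credentials derived from the caller's IAM identity:
aws redshift get-cluster-credentials \
  --cluster-identifier my-cluster --db-user temp_user --db-name dev --auto-create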
2) You want to secure IAM credentials for a JDBC or ODBC connection. How can you accomplish this?
a) Encrypt the credentials
b) Make use of AWS profile creating named profiles
c) Not possible
Answer: b
3) How will you directly run SQL queries against exabytes of unstructured data in Amazon S3?
a) Kinesis UI
b) SQL Developer
c) Hue
d) Redshift Spectrum
Answer: d
4) In terms of the data write rate for data input onto a Kinesis stream, what is the capacity of a shard in a Kinesis stream?
a) 9 MB/s
b) 6 MB/s
c) 4 MB/s
d) 1 MB/s
Answer: d
5) An EMR cluster is connected to a private subnet. What needs to be done for it to interact with your local network that is connected to the VPC?
a) VPC
b) VPN
c) Directconnect
d) Disconnect
Answer : b,c
6) Which host will you make use of to have your local network connect to EMR cluster in a private subnet?
a) bastion host
b) station host
c) vision host
d) passion host
Answer : a
7) An EMR cluster must be launched in a private subnet. Can it be used with S3 or other AWS public endpoints if it is launched in a private subnet?
a) Yes it is possible
b) No not possible
c) Need to configure NAT to make it possible
d) Need VPC for this connection
Answer: a,d
8) Your organization is going to use EMR with EMRFS. However, your security policy requires that you encrypt all data before sending it to S3 and that you maintain the keys. Which encryption option will you recommend?
a) server side encryption S3
b) client side encryption custom
c) server side encryption key management system
d) client side encryption key management system
Answer: b
9) In EMR, are core nodes optional?
a) Yes
b) No
Answer: b
Explanation : In EMR, task nodes are optional and core nodes are mandatory
10) Do EMR task nodes include HDFS?
a) Yes
b) No
Answer: b
11) You created a Redshift cluster and enabled encryption on it. You have completed loading 9TB of data into this cluster. Your security team then decides not to encrypt this cluster. You have been asked to make the necessary changes and make sure the cluster is not encrypted. What will you do?
a) Decrypt the existing cluster with redshift modify options
b) Remove the on-prem HSM module
c) Create a new cluster that is not encrypted and reload the 9TB data
d) Remove the encryption keys file and the cluster is automatically decrypted
Answer: c
12) Does AWS Key Management Service support both symmetric and asymmetric encryption?
a) Yes
b) No
Answer : b
Explanation : Only symmetric encryption is supported in the AWS Key Management Service
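A minimal symmetric round trip with the AWS CLI (the key alias and file names are placeholders); note that the same customer master key serves both operations:
aws kms encrypt --key-id alias/my-key --plaintext fileb://secret.txt \
  --query CiphertextBlob --output text | base64 --decode > secret.enc
aws kms decrypt --ciphertext-blob fileb://secret.enc \
  --query Plaintext --output text | base64 --decode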
13) How will you encrypt EC2 ephemeral volumes?
a) Using WICKS
b) Using KICKS
c) Using LUKS
d) Using BUCKS
Answer : c
14) You will have to encrypt data at rest on instance store volumes and EBS volumes. How will you accomplish this?
a) KMS
b) LUKS
c) Open source HDFS encryption
Answer : b,c
15) You want to automatically set up Hadoop encrypted shuffle upon cluster launch. How can you achieve that?
a) Select the in-transit encryption checkbox in the EMR security configuration
b) Select the KMS encryption checkbox in the EMR security configuration
c) Select the on-prem HSM encryption checkbox in the EMR security configuration
d) Select the CloudHSM encryption checkbox in the EMR security configuration
Answer : a
16) Do you know what a Hadoop Encrypted Shuffle means?
a) HDFS is encrypted using cloudHSM
b) AWS KMS is used for enrypting data at rest
c) Data intransit between nodes is encrypted
d) The files in S3 are encrypted and shuffled before being read by EMR
Answer : c
17) Your security team has made it a mandate to encrypt all data before sending it to S3 and S3 will manage keys for you. Which encryption option will you choose?
a) SSE-S3
b) CSE-Custom
c) SSE-KMS
Answer : a
18) You have been asked to handle a project that has lots of Python development resources. As this is totally new, you have the responsibility to choose the open-source tools that integrate well with Python. This is a project that does not make use of Spark. Which one is recommended?
a) Jupyter Notebook
b) D3.js
c) Node.js
d) Apache Zeppelin
Answer : a
19) You have been asked to handle a project that has lots of Python development resources. As this is totally new, you have the responsibility to choose the open-source tools that integrate well with Python. This is a project that does make use of Spark. Which one is recommended?
a) Apache Zeppelin
b) Hue
c) Jupyter Notebook
d) Kinesis
Answer : a
20) Why are there no backup data files accessible for file restores while using Redshift?
a) Redshift is an ephemeral storage
b) Redshift is a NoSQL database
c) Redshift is a managed service
d) Redshift is a column based database that does not support backups
Answer : c

AWS big data certification exam questions

1) Can we launch an EMR cluster in a public subnet?
a) Yes
b) No
Answer: b
2) Can you run an EMR cluster in a private subnet with no public IP addresses or attached Internet Gateway?
a) Yes
b) No
Answer: a
3) You are running your EMR cluster in a private subnet. You will have to access your S3. How can you do that?
a) TCP/IP
b) VPC
c) hybrid cloud
d) public cloud
Answer: b
4) You have EMR clusters running in a private subnet. You will have to connect to AWS services that do not currently support endpoints in the VPC. How can you connect to those services?
a) By making use of a NAT instance
b) By making use of a WAT instance
c) By making use of a SAT instance
d) By making use of a BAT instance
Answer: a
5) You have queries that scan a local secondary index. Does this consume read capacity units from the base table?
a) Yes
b) No
Answer: a
6) Does Kinesis Firehose buffer incoming data before delivering it to your S3 bucket?
a) Yes
b) No
c) For specific S3 buckets
d) Buffers are available only during redshift load operation
Answer: a
7) Kinesis Firehose buffers incoming data before delivering it to your S3 bucket. Could you tell what the buffer size range is like?
a) 1 MB to 128 MB
b) 1 KB to 128 MB
c) 1 GB to 128 MB
d) 89 MB to 128 MB
Answer: a
8) How long does each Kinesis Firehose delivery stream store data records in case the delivery destination is unavailable?
a) 12 hours
b) 24 hours
c) 48 hours
d) 72 hours
Answer: b
9) Which AWS IoT service transforms messages using a SQL-based syntax?
a) Rule Actions
b) Rules Engine
c) Kinesis Firehose
d) Data Pipeline
Answer : b
10) How is fault tolerance possible in Amazon Redshift clusters when there is a drive failure?
a) Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary
b) RAID 5
c) RAID 1
d) iSCSI mirror
Answer : a

AWS big data specialty certification

1) You have planned to come up with a Redshift cluster to support an upcoming project. You are looking for a way to reduce the total cost of ownership. How can you achieve this?
a) Ephemeral S3 buckets
b) Encryption algorithms
c) Compression algorithms
d) All of the above
Answer: b,c
2) You are making use of the COPY command to load files onto Redshift. Will a Redshift manifest allow you to load files that do not share the same prefix?
a) Yes
b) No
Answer: a
3) Why are single-line inserts slower with Redshift?
a) Owing to the row nature of Redshift
b) Columnar nature of Redshift
c) Tabular nature of Redshift
d) All of the above
Answer: b
4) You are in the process of creating a table. Which among the following must be defined while creating a table in AWS DynamoDB? What are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer: a,b,c,f
5) Does Amazon Redshift offer enhanced support for viewing external Redshift Spectrum tables?
a) Yes
b) No
Answer: a
6) Will Machine Learning integrate directly with Redshift using the COPY command?
a) Yes
b) No
Answer: b
7) You have been asked to build custom applications that process or analyze streaming data for specialized needs. Which AWS Service will you make use of to accomplish this?
a) Amazon Kinesis Streams
b) Amazon Kinesis Analytics
c) Amazon LAMBDA
d) Amazon Spark
Answer: a
8) Can Federated Authentication with Single Sign-On be used with Amazon Redshift?
a) Yes
b) No
Answer: a
Explanation : This is a new feature possible with Redshift, based on a press release from AWS on August 11, 2017
9) How will you isolate amazon redshift clusters and secure them?
a) Amazon VPC
b) Amazon KMS
c) Server side encryption
d) all of the above
Answer: a
10) How long does each Kinesis firehose delivery stream stores data records in case the delivery destination is unavailable?
a) 12 hours
b) 24 hours
c) 48 hours
d) 72 hours
Answer: b
11) What is the shuffle phase in the hadoop ecosystem?
a) Process of transferring data from reducers back to mappers
b) Process of transferring data from mappers to reducers
c) None of the above
Answer: b
12) Your security team has made it a mandate to encrypt all data before sending it to S3 and you will have to maintain the keys. Which encryption option will you choose?
a) SSE-KMS
b) SSE-S3
c) CSE-Custom
d) CSE-KMS
Answer: c
13) Client-Side Encryption with KMS-Managed Keys, aka CSE-KMS, is used by an EMR cluster. How is the key managed in this case?
a) S3 uses a customer master key that is managed in the Key Management Service to encrypt and decrypt the data before saving it to an S3 bucket
b) S3 uses a server generated key that is managed in the Key Management Service to encrypt and decrypt the data before saving it to an S3 bucket
c) EMR cluster uses a customer master key to encrypt data before sending it to Amazon S3 for storage and to decrypt the data after it is downloaded
d) All of the above
Answer: c
14) You have to create a visual that depicts one or two measures for a dimension. Which one will you choose?
a) Heat Map
b) Tree Map
c) Pivot Table
d) Scatter Plot
Answer: b
15) Your developers are fluent in Python and are comfortable with tools that integrate with Python. Which open-source tool will you, as a business analyst, recommend to be used for this project?
a) Jupyter Notebook
b) Hue
c) Ambari
d) Apache Zeppelin
Answer: a
16) What is SPICE in QuickSight?
a) SPICE is QuickSight’s Super-fast, Parallel, In-memory Calculation Engine
b) SPICE is QuickSight’s Super-fast, Parallel, In-memory analytical Engine
c) Not related to QuickSight
Answer: a
17) You have set up Hadoop encrypted shuffle. Which protocol makes the MapReduce shuffle possible?
a) TCP/IP
b) HTTP
c) HTTPS
d) VPN
Answer: c
18) You own an apparel business that supplies and sells across lots of global regions. At the start of the fiscal year, revenue goals are set at the region level. You work in the finance and marketing department. Your manager asks you to get a visual that uses rectangle sizes and colors to show which regions have the highest revenue goals. Which visual type will you go for to satisfy this requirement?
a) Scatter Plot
b) Pivot Table
c) Tree Map
d) Heat Map
Answer : c
19) What is the most effective way to merge data into an existing table?
a) Use a staging table to replace existing rows or update specific rows
b) Execute an UPSERT
c) Execute an UPSERT without index
d) Execute an UPSERT with index
Answer: a
20) What does F1 score signify?
a) better concurrency
b) better predictive accuracy
c) better analytical accuracy
d) None of the above
Answer: b

AWS big data specialty certification exam dumps

1) If Kinesis Firehose experiences data delivery issues to S3, how long will it retry delivery to S3?
a) 7 hours
b) 7 days
c) 24 hours
d) 3 hours
Answer : c
2) You have to create a visual that depicts one or two measures for a dimension. Which one will you choose?
a) Heat Map
b) Tree Map
c) Pivot Table
d) Scatter Plot
Answer: b
3) You are looking for a way to reduce the amount of data stored in a Redshift cluster. How will you achieve that?
a) Compression algorithms
b) Encryption algorithms
c) Decryption algorithms
d) SPLUNK algorithms
Answer: a
4) How does UNLOAD automatically encrypt data files while writing the resulting file onto S3 from Redshift?
a) CSE-S3 client side encryption S3
b) SSE-S3 server side encryption S3
c) ASE
d) SSH
Answer: b
5) What does an area under curve (AUC) value of 0.5 mean?
a) This model is accurate
b) This model is not accurate
c) This creates lots of confidence
d) This creates less confidence beyond a guess
Answer: b,d
6) What is AUC?
a) Average unit computation
b) Average universal compulsion
c) Area under curve
d) None of the above
Answer: c
7) What does lower AUC mean?
a) improves accuracy of the prediction
b) reduces accuracy of the prediction
c) mean of all predicted values
d) mode of all predicted values
Answer: b
8) You have an auc value of 0.5. Does that mean that the guess is accurate and perfect?
a) Yes
b) No
c) Partially yes
Answer: c
Explanation: An AUC value of 0.5 means the prediction is no better than a random guess, not a perfect one
9) Can you make use of a Redshift manifest to automatically check files in S3 for data issues?
a) Yes
b) No
Answer: b
10) Can you control the encryption keys and cryptographic operations performed by the hardware security module using CloudHSM?
a) Yes
b) No
Answer: a
11) You are in the process of creating a table. Which among the following must be defined while creating a table in AWS DynamoDB? What are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer: a,b,c,f
12) Can you run an EMR cluster in a public subnet?
a) Yes
b) No
Answer: b
Explanation: Owing to compliance or security requirements, we can run an EMR cluster in a private subnet with no public IP addresses or attached Internet Gateway
13) Your project makes use of Redshift clusters. For security purposes you created a cluster with encryption enabled and loaded data into it. Now you have been asked to present a cluster without encryption for the final release. What can you do?
a) Remove the security keys from the configuration folder
b) Remove encryption from the live Redshift cluster
c) Create a new Redshift cluster without encryption, unload the data onto S3 and reload it onto the new cluster
Answer: c
14) You are using an on-prem HSM or CloudHSM as the security module with Redshift. In addition to security, what else is provided with this?
a) High availability
b) Scaling
c) Replication
d) Provisioning
Answer: a
15) CloudHSM or an on-prem HSM are the options that can be used as a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: a
16) CloudHSM is the only option that can be used as a hardware security module with Redshift. Is it true or false?
a) True
b) False
Answer: b
17) You are making use of the AWS Key Management Service for encryption purposes. Will you make use of the same keys, different keys, or hybrid keys on a case by case basis?
a) same keys
b) different keys
c) hybrid keys
Answer: a
Explanation : AWS Key Management Service supports symmetric encryption, where the same key is used to perform encryption and decryption
18) How is the AWS Key Management Service different from CloudHSM?
a) Both symmetric and asymmetric encryption are supported in CloudHSM; only symmetric encryption is supported in the Key Management Service
b) CloudHSM is used for security, the Key Management Service is for replication
c) The statement is wrong. Both are the same
Answer: a
19) Which among the following are characteristics of cloudHSM?
a) High availability and durability
b) Single Tenancy
c) Usage based pricing
d) Both symmetric and asymmetric encryption support
e) Customer managed root of trust
Answer: b,d,e
20) In your hadoop ecosystem you are in the shuffle phase. You want to secure the data in transit between nodes within the cluster. How will you encrypt the data?
a) Data node encrypted shuffle
b) Hadoop encrypted shuffle
c) HDFS encrypted shuffle
d) All of the above
Answer: b
21) Your security team has made it a mandate to encrypt all data before sending it to S3 and you will have to maintain the keys. Which encryption option will you choose?
a) SSE-KMS
b) SSE-S3
c) CSE-Custom
d) CSE-KMS
Answer : c
22) Is UPSERT supported in redshift?
a) Yes
b) No
Answer: b
23) Is a single-line insert the fastest and most efficient way to load data into Redshift?
a) Yes
b) No
Answer : b
24) Which command is the most efficient and fastest way to load data into Redshift?
a) Copy command
b) UPSERT
c) Update
d) Insert
Answer : a
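A sketch of such a COPY run from S3, issued through psql (the cluster endpoint, table, bucket and IAM role are placeholders):
psql -h my-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -d dev -U admin \
  -c "copy mytable from 's3://my-bucket/data/' iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole' format as csv;"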
25) How many concurrent queries can you run on a Redshift cluster?
a) 50
b) 100
c) 150
d) 500
Answer : a
26) Will primary and foreign key integrity constraints in a Redshift project help with query optimization?
a) Yes. They provide information to the optimizer to come up with an optimal query plan
b) No. They degrade performance
Answer : a
27) Is a primary key and foreign key relationship definition mandatory while designing Redshift?
a) Yes
b) No
Answer : b
28) Redshift, the AWS managed service, is used for OLAP and BI. Are the queries used simple or complex queries?
a) Simple queries
b) Complex queries
c) Moderate queries
Answer : b
29) You are looking to choose a managed service in AWS that is specifically designed for online analytic processing and business intelligence. What will be your choice?
a) Redshift
b) Oracle 12c
c) amazon Aurora
d) Dynamodb
Answer : a
30) Can Kinesis streams be integrated with Redshift using the COPY command?
a) Yes
b) No
Answer : b
31) Will Machine Learning integrate directly with Redshift using the COPY command?
a) Yes
b) No
c) On case by case basis
Answer : b
32) Will Data Pipeline integrate directly with Redshift using the COPY command?
a) Yes
b) No
Answer : a
33) Which AWS services directly integrate with Redshift using the COPY command?
a) Amazon Aurora
b) S3
c) DynamoDB
d) EC2 instances
e) EMR
Answer : b,c,d,e
34) Are columnar databases like Redshift ideal for small amounts of data?
a) Yes
b) No
Answer : b
Explanation : They are ideal for OLAP data warehouses that process heavy data loads
35) Which databases are best for online analytical processing applications OLAP?
a) Normalized RDBMS databases
b) NoSQL database
c) Column based database like redshift
d) Cloud databases
Answer : c
36) What is determined using the F1 score?
a) Quality of the model
b) Accuracy of the input data
c) The compute ratio of Machine Learning overhead required to complete the analysis
d) Model types
Answer : a
Explanation : The F1 score can range from 0 to 1. An F1 score of 1 means the model is of the best quality
37) Which JavaScript library lets you produce dynamic, interactive data visualizations in web browsers?
a) Node.js
b) D3.js
c) JSON
d) BSON
Answer : b
38) How many transactions are supported per second for reads by each shard?
a) 500 transactions per second for reads
b) 5 transactions per second for reads
c) 5000 transactions per second for reads
d) 50 transactions per second for reads
Answer : b
39) Where does Amazon Redshift automatically and continuously back up new data to?
a) Amazon redshift datafiles
b) Amazon glacier
c) Amazon S3
d) EBS
Answer : c
40) Which one acts as an intermediary between record processing logic and Kinesis Streams?
a) JCL
b) KCL
c) BPL
d) BCL
Answer : b
41) What should you do when an Amazon Kinesis Streams application receives provisioned-throughput exceptions?
a) increase the provisioned throughput for the DynamoDB table
b) increase the provisioned ram for the DynamoDB table
c) increase the provisioned cpu for the DynamoDB table
d) increase the provisioned storage for the DynamoDB table
Answer : a
42) How many records are supported per second for writes in a shard?
a) 1000 records per second for writes
b) 10000 records per second for writes
c) 100 records per second for writes
d) 100000 records per second for writes
Answer : a
43) You own an Amazon Kinesis Streams application that operates on a stream composed of many shards. Will the default provisioned throughput suffice?
a) Yes
b) No
Answer : b
44) You have an Amazon Kinesis Streams application that does frequent checkpointing. Will the default provisioned throughput suffice?
a) Yes
b) No
Answer : b
45) What is the default provisioned throughput in a table created with KCL?
a) 10 reads per second and 10 writes per second
b) 100 reads per second and 10 writes per second
c) 10 reads per second and 1000 writes per second
Answer : a
46) You have configured Amazon Kinesis Firehose streams to deliver data onto a Redshift cluster. After some time you see a manifest file in an errors folder in your Amazon S3 bucket. What could have caused this?
a) Data delivery from Kinesis Firehose to your Redshift cluster has failed and retries did not succeed
b) Data delivery from Kinesis Firehose to your Redshift cluster has failed and a retry succeeded
c) This is a warning alerting the user to add additional resources
d) The buffer size in Kinesis Firehose needs to be manually increased
Answer : a
47) Is it true that if Amazon Kinesis Firehose fails to deliver to the destination because the buffer size is insufficient, manual intervention is mandatory to fix the issue?
a) Yes
b) No
Answer : b
48) What does Amazon Kinesis Firehose do when data delivery to the destination falls behind data ingestion into the delivery stream?
a) system is halted
b) firehose will wait until buffer size is increased manually
c) Amazon Kinesis Firehose raises the buffer size automatically to catch up and make sure that all data is delivered to the destination
d) none of the above
Answer : c
49) Your Amazon Kinesis Firehose data delivery onto an Amazon S3 bucket fails. An automated retry has been happening every 5 seconds for 1 day. The issue has not been resolved. What happens once this goes past 24 hours?
a) retry continues
b) retry does not happen and data is discarded
c) s3 initiates a trigger to lambda
d) All of the above
Answer : b
50) Amazon kinesis firehose has been cosntantly delivering data onto amazon S3 buckets. Kinesis firehose retires every five seconds. Is there a maximum duration until which kinesis keeps on retrying to deliver data onto S3 bucket?
a) 24 hours
b) 48 hours
c) 72 hours
d) 12 hours
Answer : a
51) Amazon Kinesis Firehose is delivering data to S3 buckets. All of a sudden, data delivery to the Amazon S3 bucket fails. At what interval does a retry happen from Amazon Kinesis Firehose?
a) 50 seconds
b) 500 seconds
c) 5000 seconds
d) 5 seconds
Answer : d
52) How is data pipeline integrated with on-premise servers?
a) Task runner package
b) there is no integration
c) amazon kinesis firehose
d) all the above
Answer : a
53) Is it true that Data Pipeline does not integrate with on-premise servers?
a) True
b) False
Answer : b
54) Kinesis Firehose can capture, transform, and load streaming data into which of the following Amazon services?
a) Amazon S3
b) Amazon Kinesis Analytics
c) Amazon Redshift
d) Amazon Elasticsearch Service
e) None of the above
Answer : a,b,c,d
55) Which AWS service does Kinesis Firehose not load streaming data into?
a) S3
b) Redshift
c) DynamoDB
d) All of the above
Answer : c
56) You perform a write to a table that contains local secondary indexes, as part of an update statement. Does this consume write capacity units from the base table?
a) Yes
b) No
Answer : a
Explanation : Yes because its local secondary indexes are also updated
57) You are working on a project wherein EMR makes use of EMRFS. What types of amazon S3 encryptions are supported?
a) server-side and client-side encryption
b) server-side encryption
c) client-side encryption
d) EMR encryption
Answer : a
58) Do you know which among the following is an implementation of HDFS which allows clusters to store data on Amazon S3?
a) HDFS
b) EMRFS
c) Both EMRFS and HDFS
d) NFS
Answer : b
59) Is EMRFS installed as a component with each release of AWS EMR?
a) Yes
b) No
Answer : a
60) Your EMR cluster is connected to a private subnet. What needs to be in place for it to interact with your local network that is connected to the VPC?
a) VPC
b) VPN
c) Direct Connect
d) Disconnect
Answer : b,c

AWS support engineer interview for database administrators


1) What is the difference between primary key and unique key?
Primary key uniquely identifies a row. Unique keys are created on columns that hold unique row values. A primary key is unique and not null, however a unique key can have null values
2) Why are you considering AWS while you have current job as DBA?
This is a very common interpersonal question. You can answer it in your own words, but provide an answer that discusses the advantages of cloud over on-premises systems. Also, mention that an ever-growing share of databases will be hosted in the cloud in the future
3) Why AWS instead of oracle cloud or azure environment?
If your current database profile is Oracle DBA or SQL Server DBA, the companies that designed, developed, and marketed your databases have their own proprietary platforms for hosting those databases in the cloud. The major reason to choose AWS over the other vendors is that AWS is a decade-old firm, growing extremely fast yet stable, which is much needed for the sustainability of enterprise businesses. Customers can't choose a vendor purely for price reasons and then waste money on migration projects later
4) What is cloud computing?
This is a very common question that will be part of not only cloud DBA interviews but also other cloud support roles. Cloud computing in plain terms is the on-demand delivery of compute, storage, and other IT resources over the internet; in effect, your data center is migrated to and taken care of by Amazon
5) What are the many different AWS services supporting databases?
AWS RDS, the relational database service, supports many database engines like MySQL, PostgreSQL, Oracle, SQL Server and more. Amazon DynamoDB is a popular NoSQL database. Amazon Aurora comes in MySQL-compatible as well as PostgreSQL-compatible flavors and is developed and heavily marketed by Amazon
6) What is DMS?
DMS, the AWS Database Migration Service, is a service from Amazon that supports migration of existing databases onto AWS, including the Aurora database. Interestingly, the feasibility of the migration and problems that could be encountered during it, including packages and objects that could be impacted post-migration, are reported while making use of DMS. This tool is used for migration of databases
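For a feel of what a migration task looks like programmatically, here is a hedged boto3 (Python) sketch; all ARNs, the task identifier, and the table-mapping rule are hypothetical placeholders, and the endpoints and replication instance must already exist in your account:

import boto3, json

dms = boto3.client("dms")
# Placeholders throughout; replace with real endpoint and instance ARNs.
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",  # or "cdc" / "full-load-and-cdc"
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)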

AWS certified solutions architect associate practice tests


1) Your application runs in a production environment that has 4 identical web servers that make use of auto scaling. All of these web servers use the same public subnet and belong to the same security group, and all of them sit behind the same elastic load balancer. Now you add a 5th instance into the same subnet and the same security group. This instance does not have internet connectivity. Why is that?
a) This instance has not been assigned elastic IP address
b) Route table has not been updated
c) NAT is not configured properly
d) none of the above
Answer : a
2) Amazon’s Elasticache uses two caching engines. What are those two engines?
a) Redis & Memcached
b) Memcached and RDS
c) Reddit & Memcrush
d) Redis & Memory
Answer : a
3) Which AWS service is used for collating large amounts of data streamed from multiple sources?
a) Cloudwatch
b) Kinesis
c) SNS
d) Cloud Capture
Answer : b
4) Which AWS computing service is specifically designed to process large data sets?
a) Cloudfront
b) EC2
c) Elasticache
d) Elastic MapReduce aka EMR
Answer : d
5) Do you know about Amazon's Glacier service? Which of the following best describes the use cases for Glacier?
a) Infrequently accessed data & data archives
b) Hosting active databases
c) Replicating Files across multiple availability zones and regions
d) Frequently Accessed Data
Answer : a
6) When you have a heavy OLTP environment with autoscaling in place, is there a way to limit the number of instances launched within a given time slot?
a) Yes with autoscaling cooldowns
b) Nope
Answer : a
Explanation : The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect
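If you want to see how the cooldown is set programmatically, a minimal boto3 (Python) sketch follows; the group name is a hypothetical placeholder, and 300 seconds happens to be the default value:

import boto3

autoscaling = boto3.client("autoscaling")
# "web-asg" is a placeholder Auto Scaling group name.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=300,  # seconds between scaling activities
)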
7) You have a web application that must be able to call the S3 API in order to function. Where should you store your API credentials while maintaining the maximum level of security?
a) For safety purposes create a role in IAM and assign this role to the EC2 instance when it is first created
b) Save API credentials in a public github repository
c) Get the API credentials using the EC2 instances User Data
d) None of the above
Answer : a
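The point of answer a is that no credentials ever appear in your code or on disk. A small boto3 (Python) sketch, assuming it runs on an EC2 instance launched with an IAM role that allows S3 access:

import boto3

# No access keys are configured anywhere. On an instance with an attached
# IAM role, boto3 automatically obtains temporary credentials from the
# instance metadata service.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])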
8) Which of the AWS services can receive data emitted from Kinesis stream? Choose all that apply
a) RDS
b) Lambda
c) Elasticsearch
d) Redshift
e) DynamoDB
f) S3
Answer : c,d,e,f
9) Are Kinesis streams appropriate for persistent storage of your streaming data?
a) Yes
b) No
Answer : b
10) How long can a kinesis stream data be stored by default?
a) 10 hours
b) 24 hours
c) 48 hours
d) 72 hours
Answer : b
11) What is the maximum number of days that a kinesis stream data can be stored?
a) 7 days
b) 14 days
c) 21 days
d) 30 days
Answer : a
12) What allows emitting of data from streams to various AWS services?
a) Lambda connector library
b) Kinesis Connector Library
c) S3 connector library
d) SNS connector library
Answer : b
13) Can you add a local secondary index to a DynamoDB table after it has been created?
a) Yes
b) No
Answer : b
14) What is the capacity of a shard in a Kinesis stream in terms of data read-rate for data output?
a) 2 MB/s
b) 4 MB/s
c) 6 MB/s
d) 8 MB/s
Answer: a
15) Is it true that Route53 is Amazon DNS Service?
a) Yes
b) No
Answer : a
16) Does Route53 support MX (mail) records?
a) Yes
b) No
c) Only in Us-East Virginia region
d) In all regions except virginia
Answer : a
17) What is the reason behind Route53 naming convention?
a) The DNS Port is on Port 53 and Route53 is a DNS Service
b) It was invented in 1853
c) None of the above
Answer : a
18) SQS can have duplicate messages in queue. True or false
a) True
b) False
Answer: a
Explanation : Simple Queue Service offers the standard queue as its default queue type, which allows duplicate messages
19) What is the maximum number of SWF domains allowed in a typical AWS account?
a) 50
b) 100
c) 150
d) 200
Answer : b
Explanation : Amazon Simple Workflow Service allows a total of 100 registered domains per account, counting both registered and deprecated domains
20) You have configured a custom VPC. How many internet gateways can be attached to a custom VPC?
a) 1
b) 2
c) 3
d) 4
Answer : a
21) Is it true that amazon SQS keeps track of all tasks and events in an application?
a) True
b) False
Answer: b
Explanation : We must implement our own application level tracking while making use of SQS
22) Is it true that amazon SWF keeps track of all tasks and events in an application?
a) True
b) False
Answer : a
Explanation : It is true that this is tracked by AWS simple workflow service
23) Who is an owner in the AWS permission model?
a) User identity
b) email address used to create AWS account
c) Phone number of user
d) Both user identity and email address used to create AWS account
Answer : d
24) What is the maximum VisibilityTimeout of an SQS message in a FIFO queue?
a) 1 hour
b) 12 hours
c) 24 hours
d) 48 hours
Answer : b
25) Is it true that visibility timeout controls how long a message is invisible in the queue while it is being worked on by a processing instance?
a) True
b) False
Answer : a
26) Is it true that visibility timeout controls how long the message can remain in the queue?
a) True
b) False
Answer : b
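To make the visibility timeout of questions 25 and 26 concrete, here is a minimal boto3 (Python) sketch; the queue URL and the 60-second timeout are hypothetical placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# A received message stays invisible to other consumers for VisibilityTimeout
# seconds while it is worked on; it leaves the queue only when deleted.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, VisibilityTimeout=60)
for msg in resp.get("Messages", []):
    # ... process the message here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])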
27) You have been asked to make use of an AWS tool that is fault-tolerant and cost-effective while implementing AWS architectures. Which tool will you use?
a) autoscaling
b) autosharding
c) autodeploy
d) none of the above
Answer: a
28) Your project makes use of DynamoDB. Do you need to provision this across multiple availability zones?
a) Yes
b) No
Answer : b
29) Your project makes use of S3 buckets as storage container. Do you need to provision this across multiple availability zones?
a) Yes
b) No
Answer : b
30) You are making use of SQS as your queuing solution. Do you need to provision this across multiple availability zones?
a) Yes
b) No
Answer : b
31) Which among these AWS services are already built in a fault-tolerant fashion and do not need to be provisioned across multiple availability zones?
a) S3
b) SWF
c) SQS
d) Dynamodb
e) RDS
Answer: a,c,d
32) Is organizational unit a component of IAM?
a) yes
b) No
Answer : b
33) Do you know which language is made use of while creating IAM policy documents?
a) javascript
b) JSON
c) BSON
d) python
Answer : b
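Here is what such a JSON policy document looks like, sketched in Python with boto3; the policy name and bucket ARN are hypothetical placeholders:

import boto3, json

# A minimal read-only policy document in the standard IAM JSON format.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-bucket/*",  # placeholder bucket
    }],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="ReadMyBucket", PolicyDocument=json.dumps(policy_document))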
34) Is power user same as root user?
a) Yes
b) No
Answer : b
Explanation : The root user is the account superuser with supreme privileges; a power user has broad permissions but cannot manage IAM users and groups
35) You have deployed RDS in multiple availability zones. You have primary and secondary databases in your configuration. You want to configure the secondary database for reading reports. Can this be an independent read node?
a) Yes this is possible to offload work
b) Nope not possible
c) Possible if active replication is in place
d) Possible in East-1 zone
Answer: a
36) You are in the process of setting up an RDS security group and are now adding a rule to it. In this step, is it mandatory to specify a port number or a protocol?
a) Yes
b) No
Answer: b
37) Which two engines form part of Amazon's ElastiCache?
a) Redis, memcrush
b) Redis, memcached
c) Redis, MyISAM
d) Redis, InnoDB
Answer: b
38) You are involved in business intelligence and data warehouse projects. Which AWS service will you make use of?
a) InnoDB
b) DynamoDB
c) Redshift
d) Elasticcache
Answer: c
39) Your project makes use of Amazon RDS with provisioned IOPS storage. The database engine used is Oracle or MySQL. In this case, what is the maximum RDS volume size you can have by default?
a) 3TB
b) 1TB
c) 6TB
d) 5TB
Answer: c
40) Which among the following AWS service is a non-relational database service?
a) Redshift
b) MySQL
c) DynamoDB
d) Elasticcache
Answer: c

AWS solutions architect exam questions


1) You have designed a CloudFormation script to automatically deploy a database server running on EC2 with an attached database volume. A predefined event triggers an automated run of the CloudFormation script. The database volume must have provisioned IOPS and cannot have any kind of performance degradation after being deployed. What should you do to achieve this?
a) Design the CloudFormation script to attach the database volume using S3, rather than EBS even though the EC2 has provision to make use of its own EBS
b) Design the CloudFormation script to use MongoDB. MongoDB is designed for performance and is much better than any other database engine out there
c) Using a combination of CloudFormation and perl scripting, pre-warm the EBS volumes after the EBS volume has been deployed
d) Test run the CloudFormation script several times and load test it to a value matching the anticipated maximum peak load
e) You should not be using CloudFormation. Instead it would be better to script this using CodeDeploy.
Answer : d
2) You have deployed your media website in AWS. This website has lots of images and thumbnails. The thumbnails are stored in AWS S3 reduced redundancy storage. What is the durability of RRS?
a) 99.90%
b) 99%
c) 99.99%
d) 100%
Answer : c
Explanation: As per AWS documentation, AWS S3 reduced redundancy storage is designed to provide 99.99% durability and 99.99% availability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects.
3) Your manager asked you about the importance of AWS Glacier to see if it can be utilized in your architecture design. What is Amazon Glacier all about?
a) An AWS service designed for long term data archival
b) A tool that allows to freeze an EBS volume of EC2 instance
c) A highly secure firewall designed to keep everything out
d) It is a tool used to resurrect deleted EC2 snapshots
Answer : a
Explanation: AWS Glacier is a long-term data archival solution. Amazon Glacier is an extremely low-cost storage service, priced as low as $0.004 per gigabyte per month, while keeping data secure, durable, and flexible for backup and archival
4) You are working for department of defense. DoD has made a determination to store their old information onto AWS glacier. All these information are compliance sensitive. What will you do to achieve this?
a) Glacier security lock feature can be used
b) Glacier vault lock can be used
c) There is no security provision in glacier
d) Secure data in glacier with roles from IAM
Answer : b
Explanation : Vault lock is used to meet regulatory and compliance requirement when data is archived in glacier
5) Is the regulatory and compliance data stored in AWS Glacier immutable?
a) True
b) False
Answer : a
Explanation : In glacier with vault lock feature the regulatory and compliance data can be stored in immutable format
6) With vault lock, the data stored in Glacier is write once read once (WORO).
a) True
b) False
Answer : b
Explanation: Data of any type can be stored in glacier. Regulatory and compliance data is stored in immutable format utilizing vault lock feature. This data is write once read many called WORM format
7) How does Glacier achieve 99.999999999% durability?
a) Data is stored in multiple facilities
b) Data is stored in multiple devices
c) Data is stored in multiple facilities and multiple devices within each facility
d) Glacier utilizes replication to make data durable
Answer : c
Explanation : Information stored in Glacier is designed for 99.999999999% durability. This is achieved by storing data in multiple facilities and on multiple devices within each facility
8) What is the availability percentage supported by AWS S3?
a) 99.99%
b) 99.90%
c) 99%
d) 100%
Answer : a
Explanation : AWS S3 is designed for 99.99% availability, along with 99.999999999% durability achieved through redundant storage of data across multiple facilities and multiple devices
9) What is the minimum file size that I can store on S3?
a) 1KB
b) 1MB
c) 0 bytes
d) 1 byte
Answer : c
10) S3 has eventual consistency for which HTTP Methods?
a) Overwrite PUTS and DELETES
b) PUTS of new Objects and DELETES
c) PUTS of new objects and UPDATES
d) UPDATES and DELETES
Answer : a
Explanation : AWS S3 provides eventual consistency for overwrite puts and deletes
11) What is S3 consistency for PUTS of new objects?
a) Read-after-read
b) Write-after-read
c) Read-after-write
d) Write-after-write
Answer : c
Explanation: For PUTS of new objects, AWS S3 offers read-after-write consistency; new objects are not subject to eventual consistency
12) What is the container for objects stored in amazon S3?
a) regions
b) domains
c) buckets
d) objects
Answer : c
Explanation : Buckets are containers for objects stored in amazon S3
13) What is AWS Storage Gateway?
a) It's an on-premise virtual appliance that can be used to cache S3 locally at a customer's site
b) It allows large scale import/exports in to the AWS cloud without the use of an internet connection
c) It allows a direct MPLS connection in to AWS
d) None of the above
Answer : a
Explanation: AWS Storage Gateway integrates an on-premise software appliance with storage in the cloud, providing scalable, cost-effective storage while maintaining data security
14) You work for a media company. They have just released a new mobile app that allows users to post their photos in real time, similar to Instagram. Your organization expects this app to grow very quickly, essentially doubling its user base each month. The app uses AWS S3 to store the images, and you are expecting sudden and sizeable increases in traffic to S3 when a major news event takes place. You need to keep your storage costs to a minimum, and it does not matter if some objects are lost. With these factors in mind, which storage media should you use to keep costs as low as possible?
a) S3 Infrequently Accessed Storage S3-IA
b) S3 Reduced Redundancy Storage
c) Glacier
d) S3 Provisioned IOPS
Answer : b
15) You run a popular photo sharing website that depends on S3 to store content. Paid advertising is your primary source of revenue. However, you have discovered that other websites are linking directly to the images in your buckets, not to the HTML pages that serve the content. This means that people are not seeing the paid advertising, and you are paying AWS unnecessarily to serve content directly from S3. How might you resolve this issue?
a) Use CloudFront to serve the static content
b) Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates
c) Use security groups to blacklist the IP addresses of the sites that link directly to your S3 bucket
d) Use EBS rather than S3 to store the content
Answer : b
16) Can you change the permissions to a role, even if that role is already assigned to an existing EC2 instance, and these changes will take effect immediately?
a) Yes
b) No
Answer : a
17) How are EBS snapshots backed up onto S3?
a) Incrementally
b) Exponentially
c) Decreasingly
d) EBS snapshots are not stored on S3
Answer : a
Explanation: Snapshots are incremental backups in EBS. They are point-in-time backups
18) Can you use the AWS Console to add a role to an EC2 instance after that instance has been created and powered up?
a) Yes possible
b) Nope not possible
Answer : b
19) Can I delete a snapshot of an EBS Volume that is used as the root device of a registered AMI?
a) No
b) Yes
c) Not directly but only via the Command Line
d) Only using the AWS API
Answer : a
20) Which AWS CLI command should be used to create a snapshot of an EBS volume?
a) aws ec2 create-snapshot
b) aws ec2 fresh-snapshot
c) aws ec2 deploy-snapshot
d) aws ec2 new-snapshot
Answer : a
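The boto3 (Python) equivalent of that CLI command looks like this; the volume id and description are placeholders:

import boto3

ec2 = boto3.client("ec2")
# EBS snapshots are incremental, point-in-time backups stored in S3.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume id
    Description="nightly backup",
)
print(snap["SnapshotId"])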
21) Is it possible for you to add multiple volumes to an EC2 instance and then create your own RAID configurations such as RAID 5, RAID 10, or RAID 0?
a) Yes
b) No
Answer : a
22) Can new subnets in a custom VPC communicate with each other across Availability Zones?
a) Yes
b) No
Answer : a
23) Amazon S3 provides how much storage for storing data objects?
a) Limited size for objects
b) Unlimited storage
c) No storage
d) 500 GB
Answer : b
24) Can you create placement groups across 2 or more availability zones?
a) Yes
b) No
Answer : b
25) Does amazon redshift make use of byte size for columnar storage?
a) Yes
b) No
Answer : b
Explanation : Amazon Redshift makes use of block size, not byte size, for columnar storage
26) You are configuring encryption when creating the EBS volume. Does this enable encryption at rest ?
a) Yes
b) No
Answer : a
27) You have been asked to retrieve instance user data or metadata from within an EC2 instance. Which IP address will you make use of?
a) 169.254.169.254
b) 169.254.169.253
c) 169.254.169.252
d) 169.254.169.251
Answer : a
28) You have an application with temporary backlog. Which EC2 instance will you make use of to reduce backlogs?
a) Spot instances
b) Dedicated instances
c) On-demand instances
d) Reserved instances
Answer : a
29) You have your amazon S3 buckets in us-east-1 region. Amazon S3 buckets in the us-east-1 region do not provide eventual consistency. Is this true or false?
a) True
b) False
Answer : b
Explanation : Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES
30) What is the main advantage of S3 Transfer Acceleration?
a) enables fast, easy, and secure transfers of files over long distances between your client and Amazon S3 bucket
b) Monitors files transferred over long distances between your client and Amazon S3 bucket
c) Both a and b
d) None of the above
Answer : a
31) How can you enable transfer acceleration in S3 bucket?
a) Amazon S3 console
b) AWS CLI
c) S3 API
d) All of the above
Answer : d
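Programmatically, the CLI and API routes boil down to one call; a minimal boto3 (Python) sketch, with a placeholder bucket name (the name must be DNS-compliant for acceleration to be enabled):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",  # placeholder
    AccelerateConfiguration={"Status": "Enabled"},
)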
32) You have a multi-az enabled RDS instance and you decide to create a read replica. AWS will take a snapshot of your database. This snapshot will be from your primary database.
a) true
b) false
Answer: b

Free AWS Solutions Architect Practice Test Questions


1) Which AWS service is ideal for BI tools and data warehousing?
a) dynamoDB
b) RDS
c) Elasticache
d) Redshift
Answer : d
Explanation : Redshift is the data warehouse, BI, and big data solution from AWS
2) What are all the payment options associated with reserved instances?
a) All upfront
b) Partial upfront
c) No upfront
d) Two year upfront
Answer : a,b,c
Explanation : Reserved instances is the term given to the discount option that comes as part of EC2 instance pricing. The discount can be availed by paying all upfront, partial upfront, or no upfront cost. This reserves computing capacity for an EC2 instance for 1 year or 3 years. Think of it as akin to a bulk discount on shared hosting with providers like SiteGround, starting at a 1 year minimum and going up to 3 years
3) My project got terminated unexpectedly. I bought a reserved instance thinking that it would last for 3 years. I have 2 years of the reservation remaining. What should I do?
a) You are going to lose money
b) Sell this in reserved instance market place
c) Sell this directly
d) None of the above
Answer : b
Explanation : There is an option to resell reserved instances in case of migration of an EC2 instance to a different availability zone, unexpected project termination, or project completion before the anticipated date. This will help you save money
4) You have a project wherein you are importing files from local storage to AWS Simple Storage Service S3. You perform the import every day using an automated batch file. Your desktop crashes and you lose some of the project files. You forget about the batch file, and the remaining files are uploaded onto S3 automatically, leaving the system in an unstable state. But the project is not impacted. Why is that?
a) You have enabled S3 versioning that retained older version of missing files in AWS
b) The batch script skipped missing files
c) S3 takes care of this automatically
d) all of above
Answer : a
Explanation : Once versioning is enabled on an S3 bucket, it is like retaining many copies of the same file. Versioning does consume space but can be a savior in critical projects
5) You have enabled versioning in your S3 environment. Your employer asked you to disable versioning. What will you do?
a) In S3 versioning can’t be disabled. We need to copy the existing bucket to new bucket without versioning
b) Disable versioning in S3 on the fly
c) Copy existing files to new bucket, disable versioning copy back
d) None of the above
Answer : a
Explanation : In S3, versioning once enabled can't be disabled; it can only be suspended. Alternatively, objects can be copied to a new bucket without versioning and the original bucket deleted
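The suspend operation mentioned above is a single call; a minimal boto3 (Python) sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")
# Versioning has only two post-enable states: Enabled and Suspended.
# There is no way back to the never-versioned state.
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder
    VersioningConfiguration={"Status": "Suspended"},
)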
6) Which amazon S3 resource stores data as objects?
a) key pair values
b) Buckets
c) Folders
d) Tablets
Answer : b
Explanation : When we start using Amazon S3 to store files, the first thing to do is create AWS S3 buckets. This is the resource used to store data in the form of objects. In plain terms, once you upload your simple notepad file onto S3 it is stored in an AWS S3 bucket as a bucket object
7) You own a media website with lots of images and videos to support daily media news. After looking at the hosting upgrade bill, you make a choice to migrate from your current dedicated server plan to AWS. All these big data objects need to be migrated to AWS. Which AWS service will help you save cost, scale the system to meet growing demand, and increase website speed?
a) Amazon redshift
b) Amazon RDS
c) Amazon glacier
d) Amazon S3
Answer : d
Explanation : Major industries including media, healthcare, finance, pharmaceutical, and entertainment make use of S3 for scaling systems that support big data, analytics, transcoding, and archive applications. Amazon Redshift is a data warehouse solution and should not be confused with big data storage
8) What is primary use of elastic load balancer ELB in AWS environment?
a) Traffic distribution of incoming traffic among EC2 instances
b) Incoming traffic should be channelized to specific EC2 instance
c) Reduction of incoming traffic by filtering noise signals
d) Amplifying incoming traffic
Answer : a
Explanation: Elastic load balancer ELB as it is popularly called helps with distribution of incoming traffic across EC2 instances
9) ELB can distribute traffic across regions. True or false?
a) True
b) False
Answer : b
Explanation: ELB can distribute traffic across EC2 instances. The EC2 instances can be in the same availability zone or in different availability zones within the same region. ELB does not distribute traffic across regions
10) Your company has made a decision to migrate some of its projects to AWS. As the manager is not 100% certain, he wants to know if there is a provision to maintain your existing data center infrastructure for the next few days before all the existing projects are migrated onto AWS completely. He is looking for an active-active DR solution. Is there a provision to do so?
a) Nope. Not possible to make the AWS work with existing external data center infrastructure
b) Active Passive solution can be used
c) Multi-site solution can be used
d) Warm standby solution can be used.
Answer : c
Explanation : Multi-site offers active-active DR solution with AWs and external data center infrastructure
11) You have been assigned to gather requirements on your payments application development project in AWS. What AWS services can you make use of?
a) EBay payment service
b) Authorize.net payment service
c) Paypal Payment service
d) Amazon AWS FPS
e) Amazon AWS DevPay
Answer : d,e
Explanation: Amazon's payments infrastructure is made use of by the services Amazon DevPay and Amazon FPS. Amazon Flexible Payments Service is used by developers to accept payments on websites, and this includes micropayments support as well. Amazon DevPay has a different scope: say you have applications built on Amazon S3 or Amazon EC2 that you want to resell. The customers pay Amazon, Amazon deducts a small fee plus commission and pays the rest to you. It is tightly integrated with AWS infrastructure
12) Your manager asked use to choose highly scalable, fast, reliable, inexpensive data storage infrastructure in AWS for your upcoming media file upload projects. Which AWS option will you recommend?
a) Amazon glacier
b) Multi instances stores
c) Amazon S3
d) Amazon EBS volumes
Answer : c
Explanation : Amazon S3 is a scalable, fast, reliable, and inexpensive storage infrastructure providing 99.999999999% durability
13) What is the primary difference between Amazon EBS and S3?
a) Amazon EBS is attached to EC2, S3 is a standalone storage options
b) EBS is temporary storage, S3 is permanent storage
c) EBS is always used with glacier, S3 can always be used only in combination with EC2
d) EBS uses SSD, S3 uses HDD
Answer : a
Explanation : Amazon EBS, the Elastic Block Store, provides persistent block storage volumes, whereas S3 is a standalone object storage service. A key difference is that EBS is not a standalone service and can only be used attached to EC2
14) You have been asked to map an Amazon EBS volume to an Amazon EC2 instance among AWS CloudFormation resources. What is used for the reference?
a) Reference the logical ids of blockstore
b) Reference the logical Ids of EC2 instance
c) Reference the logical IDs of both EC2 and EBS
d) Reference the physical IDs of both storage and instances
Answer : c
Explanation: While creating a CloudFormation template, JSON needs to be built with all required attributes. These attributes include the logical IDs of the EBS volumes and EC2 instances. A CloudFormation template is a JSON file with a Resources section that declares the AWS resources to include in the stack, including EC2 and block storage such as an Amazon S3 bucket or EBS. Resources of the same type can be declared together, but each resource must be declared separately. The logical ID is alphanumeric and unique within the template
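To illustrate the logical-ID referencing, here is a minimal template sketched as a Python dictionary (it would be serialized to JSON before use); the AMI id, sizes, and logical names are hypothetical placeholders:

import json

# "WebServer", "DataVolume" and "MountPoint" are logical IDs; the attachment
# references BOTH the instance and the volume via Ref, as described above.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-12345678", "InstanceType": "t2.micro"},
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "Properties": {
                "Size": 100,
                "AvailabilityZone": {"Fn::GetAtt": ["WebServer", "AvailabilityZone"]},
            },
        },
        "MountPoint": {
            "Type": "AWS::EC2::VolumeAttachment",
            "Properties": {
                "InstanceId": {"Ref": "WebServer"},
                "VolumeId": {"Ref": "DataVolume"},
                "Device": "/dev/sdf",
            },
        },
    },
}
print(json.dumps(template, indent=2))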
15) Your project database is currently hosted in Amazon RDS in primary standby replica implementation. Now comes a planned or unplanned outage of your primary DB. What should you have enabled to switch amazon RDS from primary DB to standby replica that is hosted in another availability zone?
a) Multi region deployment
b) More than one write replica
c) More than one read replica
d) Multiple availability zones
Answer : d
Explanation : Multiple availability zones also called multi-availability zones takes care of disaster recovery at RDS as well as at the instance level
16) What are AWS point-in-time snapshots?
a) Backup data on Amazon S3 to Amazon EBS
b) Backup data on amazon EBS volumes to Amazon S3
c) Backup data on amazon EBS to amazon glacier
d) Backup data on amazon EBS to RDS
Answer : b
Explanation : The blocks on Amazon EBS can be backed up onto Amazon S3 by taking point-in-time snapshots. These are incremental backups that store just the changed blocks, similar to archived redo logs in an Oracle database environment, yet each snapshot contains all the information needed to restore the volume
17) You are in the process of copying an EBS volume snapshot that has been encrypted. There is a requirement to re-encrypt it with a different key during the copy process. Is it possible?
a) Yes
b) No
Answer : a
Explanation : Yes. It is possible to copy an encrypted snapshot. During this process it is possible to re-encrypt with a different key
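A hedged boto3 (Python) sketch of that copy-and-re-encrypt step; the snapshot id and KMS key ARN are placeholders, and the region of the client acts as the destination region:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
# Supplying a different KmsKeyId during the copy re-encrypts the new
# snapshot with that key. Both identifiers below are placeholders.
ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    Description="re-encrypted copy",
)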
18) You have EBS volumes. You start creating snapshots of these EBS volumes. Do you need to encrypt these snapshots for safety purposes?
a) No need. Snapshots of encrypted EBS volumes are automatically encrypted
b) Yes. The snapshots of encrypted EBS volumes are not encrypted. They need to be encrypted separately
c) The snapshots of EBS encrypted volumes are encrypted with corruption. Need to fix it
d) None of the above
Answer : a
Explanation: When an EBS volume is encrypted, snapshots made of that volume are automatically encrypted. There is no need to encrypt them separately
19) What is a EBS snapshot?
a) State of EBS volume is captured at regular intervals
b) State of EBS volume is captured at a point-in-time
c) State of EBS volume is corrupted
d) Broken EBS volume is repaired
Answer : b
Explanation : In Amazon EBS terms, an EBS snapshot is the state of an EBS volume at a point-in-time
20) What is AWS cloud infrastructure built upon?
a) Regions
b) Availability zones
c) Domains
d) Subdomains
Answer : a,b
Explanation: AWS cloud infrastructure is built on top of regions, where a region is a physical location in the world. Some popular regions include US West, US East, Canada, South America, Europe, Asia Pacific, and China. Recently, at the AWS summit of 2017, a proposal to open a new region at Stockholm, Sweden by 2018 was announced. As per the AWS website two more regions will be coming up soon in Paris and Ningxia. An availability zone is similar to a data center, and there can be one or many within a region. Typically, while designing an AWS application at the architecture stage, the choice of regions and availability zones plays an important role with respect to project scope. Note that some features and services have region limitations, meaning they can't be configured to work across regions; typically a DR setup using RDS with certain features has this kind of limitation. So, have a good understanding of this
21) Can EBS volume snapshot information be viewed from AWS console front page?
a) Yes
b) No
Answer : b
Explanation : The AWS console provides a link to EC2, and Amazon EBS is tightly integrated with EC2. Once you navigate to the EC2 page in the AWS console, pick and choose Snapshots in the navigation pane
22) Which one acts as optional layer of security for VPC more likely acting as firewall controlling traffic in and out of one or more subnets?
a) Network ACLS(Access control lists)
b) Security group
c) Roles
d) Groups
Answer : a
23) What are components of ELB?
a) load balancer, subnet
b) load balancer, load enhancer
c) Load balancer, controller
d) Load balancer, S3
Answer : c
Explanation: Elastic Load Balancer, ELB as it is popularly called, is composed of two components: the load balancer and the controller service
24) What is the use of controller service in ELB?
a) Monitor the traffic and handle requests that come in through internet
b) Monitor the traffic and handle requests that come in through subnet
c) Monitor the traffic and handle requests that come in through intranet, monitor load balancers
d) Monitors the load balancers, performs addition and removal of load balancers as needed, and more related to load balancers
Answer : d
Explanation : The controller service is associated with monitoring and administration of load balancers. This includes addition or deletion of load balancers on an as-needed basis and making sure the existing load balancers are functioning properly
25) The IP address mechanisms supported by ELB includes which one of the following?
a) IPv2
b) IPv3
c) IPv4
d) IPv5
e) IPV6
Answer : c,e
Explanation : The IP address mechanisms IPv4 and IPv6 are both supported by ELB. IPv6 is the latest internet protocol, designed to address IP address exhaustion. Using it, IP addresses are assigned to devices that communicate via the internet, also called internet-enabled devices. In contrast to IPv4, IPv6 utilizes a 128-bit addressing mechanism
Also, with lots of devices over and beyond computer systems becoming internet enabled, say IoT devices like mobiles, each needing its own unique IP, IPv4 is quickly running out of addresses. IPv6 is slowly replacing it to meet this demand
26) In Amazon VPC an instance does NOT retain its private IP. Is it true or false?
a) True
b) False
Answer : b
27) Is it possible to have private subnets in amazon VPC?
a) Yes
b) No
Answer : a
28) Is it true that in Amazon VPC an instance retains its private IP?
a) True
b) False
Answer : a
29) Can you have more than 1 internet gateway per VPC?
a) Yes
b) No
Answer : b
30) How many VPC’s am I allowed in each AWS Region by default?
a) 1
b) 2
c) 3
d) 5
Answer : d
31) What does VPC stand for?
a) Virtual private cloud
b) Virtual public cloud
c) Virtual pungent cloud
d) Very private cloud
Answer : a
32) Which ones act like firewall at AWS instance level?
a) IAM
b) security group
c) Roles
d) Domains
Answer : b
Explanation : Security groups act like a firewall at the instance level
33) In AWS VPC what is additional layer of security that act at the subnet level?
a) Network ACLs
b) Route tables
c) Subnet
d) Firewall
Answer : a
34) What is a naked domain name?
a) Domain name with www
b) Domain name with https
c) Domain name without www
d) Domain name with http
Answer : c
35) Is naked domain name also called zone apex records?
a) Yes
b) No
Answer : a
36) Route53 does support zone apex records. Is it true or false?
a) True
b) False
Answer : a
Explanation : Route53 supports zone apex records through its alias record feature
37) What is the default limit of number of domains that can be managed by Route53?
a) 10
b) 50
c) 100
d) 200
Answer : b
38) The limit on number of domains supported by Route53 can be modified with help from AWS Team. Say true or false?
a) True
b) False
Answer : a
39) You have installed your database software onto the root volume of an EC2 instance. This is an OLTP database with a lot of traffic. There comes a requirement to increase the number of IOPS available to it. What steps are you going to take to accomplish this?
a) Add additional EBS SSD volumes and create a RAID 10 using these volumes
b) Migrate the database to S3 RRS
c) Migrate DB to S3-IA
d) Use caching mechanism
Answer : a
Explanation : SSDs, the solid state devices, are known to deliver high IOPS and are preferred even in a normal environment
40) You have been asked to transfer a reserved instance from one Availability Zone to another. Can you do that?
a) Yes
b) No
Answer : a

AWS big data certification


AWS big data certification is a specialty certification from AWS. If you are a database administrator in Oracle, SQL Server, MySQL, MongoDB etc., it is high time to upgrade your skill set to support database and data warehouse environments in AWS to retain your job
1) You have to locate all items in a table with a particular sort key value. What operation, feature, or service can you make use of to accomplish this?
a) PutItem
b) Query
c) Query with a local secondary index
d) Query with a global secondary index
e) Scan against a table with filters
Answer : d,e
2) You are in the process of creating a table. Which among the following must be defined at table creation in AWS DynamoDB? What are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer : a,b,c,f
3) How many read transactions per second are supported by a shard?
a) 2
b) 5
c) 7
d) 9
Answer : b
4) How many write records per second are supported by a shard?
a) 1000
b) 2000
c) 3000
d) 4000
Answer : a
5) In Kinesis you are requested to send data into a stream for data ingestion and processing. You have to write multiple data records into an Amazon Kinesis stream in a single call. Which command will you make use of?
a) PutRecords
b) GetRecords
c) InsertRecords
d) UpsertRecords
Answer : a
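A minimal boto3 (Python) sketch of PutRecords; the stream name and payloads are hypothetical placeholders:

import boto3

kinesis = boto3.client("kinesis")
# PutRecords writes up to 500 records in one call. Always check
# FailedRecordCount and retry any records that failed.
resp = kinesis.put_records(
    StreamName="clickstream",  # placeholder
    Records=[
        {"Data": b'{"page": "/home"}', "PartitionKey": "user-1"},
        {"Data": b'{"page": "/cart"}', "PartitionKey": "user-2"},
    ],
)
print(resp["FailedRecordCount"])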
6) What is Amazon Kinesis Streams?
a) Managed service that scales elastically for real time processing of streaming big data
b) Managed service that scales elastically for online transaction processing of big data
c) Managed service that scales elastically for provisioning of big data
d) None of the above
Answer : a
7) What is the maximum number of tags an Amazon Kinesis stream can have?
a) 100
b) 30
c) 10
d) 40
Answer : c
8) You are creating a stream name for an Amazon Kinesis stream. What is the maximum length of the stream name string?
a) 110
b) 128
c) 139
d) 140
Answer : b
9) You are making use of the Amazon Kinesis API to add tags to a stream. How can you accomplish this action?
a) UpdateTagsToStream
b) AddTagsToStream
c) AddedTagsToStream
d) UpdatedTagsToStream
Answer : b
10) You are adding tags to an Amazon Kinesis stream using the AddTagsToStream API. If there are some pre-existing tags, what can happen?
a) Existing tags are overwritten
b) The action fails
Answer : a
11) You want to delete an Amazon Kinesis stream and all its shards and data. Which amazon kinesis API will you make use of to accomplish this action?
a) DropStream
b) PurgeStream
c) DeleteStream
d) TruncateStream
Answer : c
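Both of the APIs from questions 9 through 11 map directly onto boto3 (Python) calls; the stream name and tag values here are hypothetical placeholders:

import boto3

kinesis = boto3.client("kinesis")
# AddTagsToStream overwrites tags whose keys already exist (question 10).
kinesis.add_tags_to_stream(StreamName="clickstream", Tags={"env": "prod"})
# DeleteStream removes the stream together with all its shards and data.
kinesis.delete_stream(StreamName="clickstream")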
12) You are making use of the Kinesis UI to perform a search from the Dev console. In the search query you have specified size as 0. What does that mean?
a) When no results matching the search query are found, get a result that is an aggregate based on the data
b) Return all results
c) return no results
d) None of the above
Answer : a
13) You have been asked to build a system to analyse customer behaviour. The data used for analysis comes from many different data sources including sales reports, tweets, and customer order logs from databases. What will you build as the basic framework for this project?
a) Datalake
b) Information lake
c) Loglake
d) Aggregator Lake
Answer : a
14) You are making use of Amazon Elasticsearch for your analytics project. How can you achieve high availability?
a) Region awareness
b) Instance awareness
c) Zone awareness
d) Storage awareness
Answer : c
15) In an Amazon Elasticsearch project, which is the highest-level structure used for the data catalog?
a) index
b) shard
c) documents
d) blocks
Answer : a
16) For your Amazon Elasticsearch project you have to configure a shard. What is the preferred maximum shard size?
a) 30GB
b) 50GB
c) 100GB
d) 1TB
Answer : b
17) You are building an Amazon Elasticsearch domain in development. What instance type should you make use of?
a) R4 instance
b) M4 instance
c) A2 instance
d) D2 instance
Answer : b
18) You are building an Amazon Elasticsearch domain in production. What instance type should you make use of?
a) R4 instance
b) M4 instance
c) A2 instance
d) D2 instance
Answer : a
19) How is an index broken down in an Elasticsearch environment?
a) shard
b) document
c) block
d) bytes
Answer : a
20) You are looking for a robust delivery solution to transfer data from an AWS Lambda function onto Amazon Elasticsearch. Which service will you make use of?
a) Kinesis firehose
b) S3
c) EC2 instance
d) IAM role
Answer : a
