AWS certified sysops administrator associate practice exam


This AWS Certified SysOps Administrator Associate practice exam will help you pass the SysOps exam. If you are a system administrator, now is the time to prepare and upgrade your AWS skills to stay relevant in your job.


1) What is another name for an ELB sticky session?
a) Session dismissal
b) Session convergence
c) Session affinity
d) Duration dismissal
Answer : c
Explanation : An ELB sticky session is also called session affinity
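As a sketch of how stickiness is switched on in practice, the snippet below builds the attribute list for duration-based (load-balancer-cookie) stickiness on a modern ALB target group. The function name and the target group ARN in the commented call are illustrative, not from the exam.

```python
def stickiness_attributes(duration_seconds=300):
    """Build the Attributes list for modify_target_group_attributes."""
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds",
         "Value": str(duration_seconds)},
    ]

# With real credentials this would be applied roughly like:
# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/example",
#     Attributes=stickiness_attributes(300),
# )
```

The cookie duration bounds how long a user stays pinned to one instance before the load balancer may rebalance them.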
2) What is the advantage of session affinity in ELB?
a) The user's session is bound to a specific instance
b) The user's session is routed to the instance with the smallest load
c) The user's session is routed to the instance with the largest load
d) The user's session is bound to a specific EBS
Answer : a
Explanation : The ELB sticky session feature, also called session affinity, enables a user's session to be bound to a specific instance
3) Which AWS services allow access to the underlying OS?
a) EC2, Elastic MapReduce (EMR), Elastic Beanstalk, OpsWorks
b) EC2, Elastic MapReduce (EMR), Elastic Beanstalk, CloudWatch
c) Elastic MapReduce (EMR), Elastic Beanstalk, OpsWorks, CloudWatch
d) EC2, Elastic MapReduce (EMR), Redis, OpsWorks
Answer : a
4) Does RDS allow access to the underlying OS in AWS?
a) Yes
b) No
Answer : b
Explanation : Only EC2, EMR, Elastic Beanstalk, and OpsWorks allow access to the underlying OS
5) What are the two types of Elastic Load Balancer Sticky Sessions?
a) Duration based session stickiness and server side session stickiness
b) Server side session stickiness and client side session stickiness
c) Duration based session stickiness and application-controlled session stickiness
d) Application-controlled session stickiness and server side session stickiness
Answer : c
6) What will you do when your instance status check shows a failure and you are unable to connect to your instance?
a) Pass it on to AWS by raising a ticket
b) Restart the instance
c) Terminate the instance to delete your VPC
d) Stop the instance
Answer : b
7) How long does Amazon EC2 wait before a hard reboot is performed?
a) 1 minute
b) 3 minutes
c) 4 minutes
d) 10 minutes
Answer : c
Explanation : Amazon EC2 performs a hard reboot if an instance does not shut down cleanly within 4 minutes
8) You are rebooting an Amazon EC2 instance. Will this start a new billing hour?
a) Yes. Every time an EC2 instance is rebooted, a new billing hour starts
b) No
c) It depends on the EC2 instance type
d) It depends on the EC2 configuration
Answer : b
Explanation : Rebooting an EC2 instance does not start a new billing hour. If an instance is stopped and restarted, a new billing hour starts
9) Your system status check is showing a failure. The issue is time sensitive and cannot wait. What is the best way to troubleshoot it?
a) Communicate the failure to AWS support by creating support ticket
b) Restart the instance
c) Stop the instance and then start it again
d) Terminate the instance and then delete your VPC
Answer : c
Explanation: Once a system status check fails, we can either have AWS take care of it or take care of it ourselves. To handle it ourselves, stopping and starting the instance (which moves it to new underlying hardware) or terminating and replacing the instance are possible solutions
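The decision rule in the status-check questions above can be sketched as a small helper: an instance status check failure points at the guest OS (reboot or repair it), while a system status check failure points at the underlying host, so a stop/start migrates the instance to healthy hardware. The function name and return labels are made up for illustration.

```python
def recovery_action(system_check_ok, instance_check_ok):
    """Pick a first-response action from the two EC2 status checks."""
    if not system_check_ok:
        # Host-level problem: stop/start (not reboot) moves the instance
        # to a new physical host.
        return "stop-then-start"
    if not instance_check_ok:
        # Problem is inside the guest: reboot or repair the OS/network config.
        return "reboot"
    return "no-action"
```

A reboot keeps the instance on the same host, which is why it does not help when the host itself has failed.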
10) What are types of AWS status checks?
a) System status checks
b) OS status checks
c) Instance status checks
d) Database status checks
Answer : a,c
11) What is checked as part of an EC2 system status check?
a) Checks the firewall
b) Checks the host
c) Checks the virtual machine
d) Checks the VPC
Answer : b
Explanation: A system status check on an AWS EC2 instance checks for loss of network connectivity on the host, loss of system power on the host, software issues on the physical host, and hardware issues on the physical host that impact the network, such as NIC or network slot issues
12) What is monitored as part of cloudwatch EC2 as part of default monitoring?
a) CPU
b) Disk
c) Network
d) Status Checks
Answer : a,b,c,d
13) What does it mean when an EBS volume status check shows impaired?
a) The volume is degraded or severely degraded
b) The volume is stalled or not available
c) There is sufficient data
d) The instance status must be impaired. You should stop and start the instance again using the reboot process
Answer : b
14) Your EBS volume status check is showing a warning. Why is this happening?
a) Your volume is degraded or severely degraded
b) Your volume is stalled or not available
c) There is insufficient data
d) Your volume is performing as normal, but may need pre-warming
Answer : a
Explanation : The volume is degraded, which triggers the EBS volume status check warning
15) What metric is used to monitor the lag between the primary RDS instance and the read replica instance?
a) DatabaseConnections
b) ReadReplicaLag
c) ReplicaQueueLength
d) ReplicaLag
Answer : d
16) What is checked as part of EC2 instance status check?
a) Checks the VPC
b) Checks the EC2 instance
c) Checks the EC2 Host
d) Checks the weather
Answer : b
17) Your project demands deploying a production database to an EC2 instance, and you are tasked with choosing the best storage type. You anticipate a peak of 40,000 IOPS and an average of 14,000 – 18,000 IOPS. What storage medium should you choose?
a) Magnetic Storage
b) General Purpose SSD
c) Provisioned IOPS
d) Amazon S3
Answer : c
18) What is the minimum granularity, in terms of time, that CloudWatch can monitor for a custom CloudWatch metric?
a) 5 minutes
b) 3 minutes
c) 1 minute
d) 2 minutes
Answer : c
19) You are using ElastiCache to cache your web application. Caching seems to be gradually running slower and you want to diagnose the cause. You are using Memcached as your caching engine. Which parameter should be adjusted if you find that the overhead pool is less than 50MB?
a) Memcached-Memory-Overhead
b) Memcached_Connections_Overhead
c) Redis_Connections_Overhead
d) Redis-Memory-Overhead
Answer : b
Explanation : Memcached_Connections_Overhead determines the amount of memory to be reserved for Memcached connections and other miscellaneous overhead
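As a hedged illustration, the helper below builds the arguments for ElastiCache's modify_cache_parameter_group call to raise this parameter on a custom parameter group. The group name is a placeholder, and the value's units should be checked against the engine's parameter reference before use.

```python
def overhead_update(value):
    """Build kwargs for elasticache.modify_cache_parameter_group."""
    return dict(
        CacheParameterGroupName="my-memcached-params",  # assumed custom group
        ParameterNameValues=[{
            "ParameterName": "memcached_connections_overhead",
            # Units per the Memcached parameter reference; plain string value.
            "ParameterValue": str(value),
        }],
    )

# elasticache = boto3.client("elasticache")
# elasticache.modify_cache_parameter_group(**overhead_update(100))
```

Default parameter groups cannot be modified, so the change must target a custom group attached to the cluster.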
20) Your web application uses Auto Scaling and Elastic Load Balancing. You want to monitor the application to make sure it maintains a good customer experience, which is typically defined by load time, i.e. how long the application takes to load in the end user's browser. You are using AWS CloudWatch to monitor the application. Which CloudWatch metric is best suited for this?
a) Aggregate CPUUtilization for the web tier
b) RequestCount reported by the ELB
c) Aggregate NetworkIn for the web tier
d) Latency reported by the elastic load balancer (ELB)
Answer : d
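As an illustration of how the Latency metric might be pulled, the sketch below builds the keyword arguments for CloudWatch's get_metric_statistics call against the AWS/ELB namespace. The load balancer name, period, and time window are placeholder choices.

```python
from datetime import datetime, timedelta, timezone

def latency_query(lb_name, minutes=60):
    """Build kwargs for cloudwatch.get_metric_statistics on ELB Latency."""
    end = datetime.now(timezone.utc)
    return dict(
        Namespace="AWS/ELB",           # classic ELB namespace
        MetricName="Latency",
        Dimensions=[{"Name": "LoadBalancerName", "Value": lb_name}],
        StartTime=end - timedelta(minutes=minutes),
        EndTime=end,
        Period=300,                    # 5-minute datapoints
        Statistics=["Average", "Maximum"],
    )

# cloudwatch = boto3.client("cloudwatch")
# stats = cloudwatch.get_metric_statistics(**latency_query("my-elb"))
```

Watching Maximum alongside Average helps catch latency spikes that an average alone would smooth over.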

AWS certified solutions architect


The AWS certified solutions architect exam is offered by Amazon Web Services and demonstrates an individual's proficiency as a competent AWS architect who can migrate existing projects onto the cloud and design and deploy new projects in the AWS cloud using AWS services.
This exam is offered at two levels: Associate and Professional.
The prerequisite to appear for the AWS certified solutions architect associate exam is a basic understanding of, and some hands-on experience with, AWS.
To appear for the AWS certified solutions architect professional exam, the candidate must have passed at least one of the associate-level certifications; the preferred baseline is AWS certified solutions architect associate.
We sell AWS certified solutions architect associate practice exam questions that will help you clear the exam. The questions are updated to the latest exam curriculum.


AWS big data certification


AWS big data certification is a specialty certification from AWS. If you are a database administrator working with Oracle, SQL Server, MySQL, MongoDB, etc., it is high time to upgrade your skill set to support database and data warehouse environments in AWS to retain your job.
1) You have to locate all items in a table with a particular sort key value. What operation, feature, or service can you use to accomplish this?
a) PutItem
b) Query
c) Query with a local secondary index
d) Query with a global secondary index
e) Scan against a table with filters
Answer : d,e
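To make the two correct answers concrete, here is a hypothetical sketch of the low-level DynamoDB request parameters for each approach. The table name Orders, index name status-index, and attribute status are invented for illustration.

```python
def gsi_query_params(sort_key_value):
    """Query a GSI whose partition key is the table's sort key attribute."""
    return dict(
        TableName="Orders",
        IndexName="status-index",       # assumed GSI on the "status" attribute
        KeyConditionExpression="#s = :v",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":v": {"S": sort_key_value}},
    )

def scan_filter_params(sort_key_value):
    """Fallback: scan the whole table and filter on the sort key value."""
    # A scan reads every item, so it consumes far more read capacity
    # than the indexed Query above.
    return dict(
        TableName="Orders",
        FilterExpression="#s = :v",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":v": {"S": sort_key_value}},
    )

# dynamodb = boto3.client("dynamodb")
# items = dynamodb.query(**gsi_query_params("SHIPPED"))["Items"]
```

The GSI route is the efficient one; the scan-with-filter route works without an index but at much higher cost on large tables.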
2) You are in the process of creating a DynamoDB table. Which of the following parameters must be defined at table creation?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer : a,b,c,f
3) How many read transactions per second can a shard support?
a) 2
b) 5
c) 7
d) 9
Answer : b
4) How many write records per second can a shard support?
a) 1000
b) 2000
c) 3000
d) 4000
Answer : a
5) In Kinesis you are asked to send data into a stream for ingestion and processing. You need to write multiple data records into an Amazon Kinesis stream in a single call. Which API operation will you use?
a) PutRecords
b) GetRecords
c) InsertRecords
d) UpsertRecords
Answer : a
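A minimal sketch of batching events for one PutRecords call follows. The stream name clickstream and the user_id partition-key scheme are assumptions, not part of the question.

```python
import json

def build_put_records(events, stream_name="clickstream"):
    """Package a list of event dicts as one Kinesis PutRecords request."""
    records = [
        {
            "Data": json.dumps(e).encode("utf-8"),  # Kinesis wants bytes
            "PartitionKey": str(e["user_id"]),      # assumed key: spreads load by user
        }
        for e in events
    ]
    return dict(StreamName=stream_name, Records=records)

# kinesis = boto3.client("kinesis")
# kinesis.put_records(**build_put_records(batch))
```

The partition key determines which shard each record lands on, so a high-cardinality key keeps the shards evenly loaded.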
6) What is Amazon Kinesis Streams?
a) Managed service that scales elastically for real time processing of streaming big data
b) Managed service that scales elastically for online transaction processing of big data
c) Managed service that scales elastically for provisioning of big data
d) None of the above
Answer : a
7) What is the maximum number of tags an Amazon Kinesis stream can have?
a) 100
b) 30
c) 10
d) 40
Answer : c
8) You are naming an Amazon Kinesis stream. What is the maximum length of the stream name string?
a) 110
b) 128
c) 139
d) 140
Answer : b
9) You are using the Amazon Kinesis API to add tags to a stream. Which action accomplishes this?
a) UpdateTagsToStream
b) AddTagsToStream
c) AddedTagsToStream
d) UpdatedTagsToStream
Answer : b
10) You are adding tags to an Amazon Kinesis stream using the AddTagsToStream API. If some tags already exist, what happens?
a) Existing tags are overwritten
b) The action fails
Answer : a
11) You want to delete an Amazon Kinesis stream along with all its shards and data. Which Amazon Kinesis API will you use?
a) DropStream
b) PurgeStream
c) DeleteStream
d) TruncateStream
Answer : c
12) You are using the Kinesis UI to perform a search from the Dev console. In the search query you have specified a size of 0. What does that mean?
a) When no results matching the search query are found, get a result that is an aggregate based on the data
b) Return all results
c) Return no results
d) None of the above
Answer : a
13) You have been asked to build a system to analyse customer behaviour. The data used for analysis comes from many different sources, including sales reports, tweets, and customer order logs from a database. What basic framework will you build for this project?
a) Data lake
b) Information lake
c) Log lake
d) Aggregator lake
Answer : a
14) You are using Amazon Elasticsearch for your analytics project. How can you achieve high availability?
a) Region awareness
b) Instance awareness
c) Zone awareness
d) Storage awareness
Answer : c
15) In an Amazon Elasticsearch project, which is the highest-level structure used for the data catalog?
a) index
b) shard
c) documents
d) blocks
Answer : a
16) For your Amazon Elasticsearch project you have to configure a shard. What is the preferred maximum shard size?
a) 30GB
b) 50GB
c) 100GB
d) 1TB
Answer : b
17) You are building an Amazon Elasticsearch domain in development. Which instance type should you use?
a) R4 instance
b) M4 instance
c) A2 instance
d) D2 instance
Answer : b
18) You are building an Amazon Elasticsearch domain in production. Which instance type should you use?
a) R4 instance
b) M4 instance
c) A2 instance
d) D2 instance
Answer : a
19) How is an index broken down in an Elasticsearch environment?
a) shard
b) document
c) block
d) bytes
Answer : a
20) You are looking for a robust delivery solution to transfer data from an AWS Lambda function to Amazon Elasticsearch. Which service will you use?
a) Kinesis firehose
b) S3
c) EC2 instance
d) IAM role
Answer : a
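To sketch the Lambda-to-Firehose hand-off, the snippet below builds a Firehose record from a Lambda event. The delivery stream name in the commented handler is a placeholder, and newline-delimiting the JSON is a common convention for downstream Elasticsearch ingestion, not a requirement from the question.

```python
import json

def build_firehose_record(event):
    """Serialize a Lambda event as a newline-delimited JSON Firehose record."""
    # Firehose expects bytes; the trailing newline keeps documents
    # separable once they are batched together downstream.
    return {"Data": (json.dumps(event) + "\n").encode("utf-8")}

# firehose = boto3.client("firehose")
# def handler(event, context):
#     firehose.put_record(
#         DeliveryStreamName="to-elasticsearch",   # assumed delivery stream
#         Record=build_firehose_record(event),
#     )
```

Firehose then handles buffering, retry, and delivery into the Elasticsearch domain without any consumer code to manage.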


Amazon Aurora Interview questions


Amazon Aurora, the MySQL-compatible database (with PostgreSQL compatibility most recently in preview), is a product from Amazon. Amazon Web Services has been a pioneer in coming up with services that help deploy databases in the cloud. These Amazon Aurora interview questions will help you prepare for upcoming AWS database migration related job roles.
1) What is amazon aurora?
Amazon Aurora is the enterprise-class database from AWS. The AWS business model is targeted at migrating current enterprise-grade databases like Oracle and SQL Server onto open-source counterparts like MySQL, MariaDB, and PostgreSQL, to name a few. As such, AWS has adapted MySQL code and come up with its own version, Amazon Aurora, which is MySQL compatible. Currently a preview edition of Aurora that is PostgreSQL compatible is also available
2) Why should you consider Amazon Aurora?
It is an open-source-based database that offers all the benefits of an enterprise-grade database, according to AWS. Aurora offloads much of the DBA work, such as OS patching, database patching, performance tuning, backups, restore, and recovery, as all of these are taken care of by the AWS Aurora team. All that is needed is application performance tuning
3) Can I migrate my Oracle or SQL Server database to Amazon Aurora?
Yes. Use the Schema Conversion Tool to determine whether the current schema is compatible with the target Aurora database. Based on the assessment report, if the database schema can be migrated, use the Database Migration Service (DMS) in AWS to perform the migration
4) Can AWS Aurora support databases migrated from an EC2 environment?
Yes, the Database Migration Service (DMS) can assist with migrating databases like Oracle and SQL Server onto AWS Aurora. The source databases can be hosted in an EC2 environment, an in-house datacenter, etc.
5) What is the unique advantage of using the Schema Conversion Tool?
This is a free tool used to create an assessment report on the impact of converting the source database to a target database, which can be an open-source database like MySQL, PostgreSQL, or the AWS-developed Aurora. The tool takes into consideration all the source database objects such as tables, indexes, views, synonyms, triggers, procedures, and functions. In a simple case, the SCT tool will determine whether the source database table datatypes are all supported in the destination database. If not, this first-hand report lets a development head determine whether the migration project is feasible and worthwhile
6) Why should enterprises think about migrating from stable, enterprise-class databases like Oracle and SQL Server onto open-source databases?
Personally speaking, I've known the MySQL database since the beginning of my career. It has evolved and grown into a stable, robust database supporting terabytes of data and major features of Oracle. It is an open-source database with no licensing constraints. The prime thing enterprises think of is cutting licensing costs, which are high with enterprise-grade databases. AWS Aurora is MySQL compatible and works on a pay-as-you-go model. Hence, this cloud database could be the future of open-source cloud-hosted databases
7) What is the unique benefit of Amazon Aurora PostgreSQL-Compatible Edition?
Amazon Aurora PostgreSQL-Compatible Edition offers compatibility with PostgreSQL, the open-source database that has been in development for the past 20 years. It is widely adopted by startups and enterprises owing to many interesting features, including geospatial support through PostGIS, the spatial database extender for PostgreSQL
Coming to the Amazon Aurora PostgreSQL-compatible edition, it offers all of the benefits of a standard PostgreSQL database, including high durability, high availability, and the ability to quickly create and deploy read replicas, with further improvements:
7.1) Performance of Aurora PostgreSQL-Compatible Edition is two times that of traditional PostgreSQL databases
7.2) Compatible with PostgreSQL version 9.6.1
7.3) Fully cloud-hosted database supporting all AWS features
7.4) Can make use of the Schema Conversion Tool for database migration
8) How does Amazon RDS provide administration for Amazon Aurora?
Routine database administration tasks needed for Amazon Aurora are handled by Amazon RDS. These routine tasks include provisioning, patching, backup, recovery, failure detection, and repair
9) Is there a simple solution in Amazon RDS that converts MySQL applications to Amazon Aurora?
Yes. Amazon RDS offers push-button migration tools that make this process simple
10) What is an Amazon Aurora DB cluster?
An Amazon Aurora DB cluster is a relational database from Amazon that runs in the AWS cloud. It is a fully managed, MySQL-compatible RDBMS
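As a rough sketch, the helper below assembles the minimal parameters one might pass to RDS's create_db_cluster call for a MySQL-compatible Aurora cluster. The identifier, engine string, and credentials are placeholders, not values from the text.

```python
def aurora_cluster_params():
    """Build minimal kwargs for rds.create_db_cluster (illustrative values)."""
    return dict(
        DBClusterIdentifier="demo-aurora",   # placeholder cluster name
        Engine="aurora-mysql",               # MySQL-compatible Aurora engine
        MasterUsername="admin",
        MasterUserPassword="change-me",      # use Secrets Manager in practice
    )

# rds = boto3.client("rds")
# rds.create_db_cluster(**aurora_cluster_params())
# Instances are then added to the cluster with create_db_instance.
```

Because Aurora separates the cluster (storage) from its instances (compute), the cluster is created first and DB instances are attached afterwards.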
