Oracle Cloud Interview Questions


These Oracle Cloud interview questions will help you prepare for and clear an upcoming Oracle Cloud interview
1) There are many different cloud services. Why should I choose Oracle Cloud over others?
Oracle is the popular relational database company that has been in existence for a long time, and Oracle Cloud offers database as a service. With Oracle databases already supporting enterprise workloads, this is a cloud service that can support databases used for testing and development all the way up to live production. Oracle Cloud offers high availability and scalability options, providing business continuity, time savings and many more advantages. To keep it simple: a cloud service from a database company
2) You are opting for the standard package. Do you need to pay extra for TDE?
No. All packages (standard, enterprise, high performance and extreme performance) include Oracle Database Transparent Data Encryption
3) Can you implement RAC in Oracle Cloud?
Yes. Enterprise Edition Extreme Performance under the bring your own license (BYOL) model offers this provision. Pricing can be found on the official Oracle Cloud website
4) What is included in the standard package?
The standard package includes Oracle Database Standard Edition 2, both under Universal Credit services and under the bring your own license (BYOL) model. If you have opted for the BYOL standard package, this can be Standard Edition One or Oracle Database Standard Edition 2
5) You have chosen to make a purchase under the Universal Credit services standard package. What cost do you need to pay upfront?
As this is a pay-as-you-go service, with Universal Credit services you don't need to pay any upfront costs. A monthly invoice is generated based on usage. This is the same as in the AWS cloud
6) You have chosen the BYOL standard package. What is the upfront cost?
This is a pay-as-you-go service and no upfront cost is needed. Only service usage is metered and charged accordingly
7) In the Oracle Cloud platform as a service offering, what database management services are included?
The following database management services are included as part of Oracle Cloud platform as a service (PaaS):
Database
Database backup
Big Data
Big Data Cloud
Event Hub
MySQL
Autonomous NoSQL Database Cloud
Data hub
Autonomous Data Warehouse Cloud
8) What are the different database deployment models available with Oracle Cloud?
The following database deployment models come as part of Oracle Cloud:
Oracle database cloud service
Oracle database cloud service bare metal
Oracle Database Exadata Cloud Service
Oracle Database Exadata Cloud at Customer
Oracle Database Exadata Express Cloud Service – Managed
Oracle Database Schema Cloud Service – Managed
9) Can I try Oracle Cloud for free?
Yes. Oracle Cloud offers $300 in free credits, valid for 30 days. You can use this credit to build test, development and production databases, and to try compute options, containers, IoT, big data, APIs, chatbots, integrations and lots more
Create a free account and try these features for free
10) Where can you store your Oracle database backup in Oracle Cloud?
Oracle Cloud provides an object storage solution that is reliable and scalable. According to the Oracle website, this storage scales to 8000 TB and is used for storing and accessing the data of ever-growing databases. This is the storage used for Oracle database backup data
11) What are the unique features of the Oracle Cloud database backup management offering?
This is a reliable and scalable object data storage from Oracle Cloud. Some salient features include:
Security – This solution comes with enterprise-grade data protection and privacy policies. Oracle being the most popular enterprise database vendor, its cloud offerings are designed with this in mind
Reliability – Redundancy policies ensure high availability of data
Scalability – Oracle Cloud is a pay-as-you-go solution wherein you can choose to purchase universal credits, BYOL etc. Storage hardware is allocated based on capacity and growing demand, making this a scalable solution
Simplification by using existing RMAN for backups – Even in the cloud, Oracle still uses RMAN for performing database backups. This keeps backup, restore and recovery operations transparent to RMAN users
12) Is my backup safe and secure while being stored in Oracle Cloud? Explain
Yes. The Oracle database backup service encrypts backups at the source; the backup is then securely transferred to the cloud and stored there
13) What is the difference between a normal database and Oracle database cloud service?
Oracle database cloud service is the same as a single-instance Oracle database, except that the database is deployed in the cloud and the computing resources, including storage and power, are provided by Oracle
14) In Oracle Cloud, is database maintenance and management to be done only using cloud tools?
No. Normal database tools can be used for maintenance and management purposes. Oracle Cloud tools can optionally be used as well
15) What are the two service levels available with Oracle database cloud service?
Oracle Database Cloud Service Virtual Image, wherein the customer is responsible for installing and maintaining the software. The customer has root privilege and full database administrative privileges
Oracle Database Cloud Service, wherein database deployment is easy using the custom options provided online. This service level can perform automated backups; the customer is responsible for setting up maintenance operations and recovery in the event of failure
16) Which component of Oracle Cloud provides the service console and REST API?
Platform Service Manager (PSM) is the Oracle Cloud component responsible for this. The same component is used in Oracle GoldenGate and Oracle Java Cloud Service as well
17) How does PSM interact with compute nodes to perform predefined cloud service actions like backup and patching?
PSM uses secure shell (SSH) on port 22 of the compute nodes. Compute nodes host databases, known as database deployments in Oracle Cloud terms. These actions can be initiated from the web service console, which uses PSM, or via the REST API
18) How is internal communication between PSM and Oracle Cloud compute nodes established?
SSH key pairs are used for this communication on port 22. The key pair is specific to each database deployment and is used for internal communication purposes. This SSH access is internal to Oracle and not otherwise accessible. If there is an issue here, PSM communication with the compute node fails
19) Can PSM communication with compute nodes be audited?
Though PSM is an Oracle component, the actions of PSM during its communication with a compute node can be logged and audited
20) Who has access to the SSH keys used for PSM communication with compute nodes?
Only the owner of the project has this access. For security purposes even Oracle support and operations are not granted this access unless it is explicitly shared by customers for troubleshooting purposes
21) What are the database related public cloud offerings in Oracle Cloud?
The Oracle public cloud offering comes with the following three database related services in the public cloud:
a) Schema as a service
b) Database as a service – popularly called DBaaS, this public cloud service is offered both as infrastructure as a service (IaaS) and platform as a service (PaaS)
c) Oracle Database Cloud Exadata Service
22) What does provisioning an Oracle database mean?
Provisioning an Oracle database means creating an Oracle database and making it available to the end users
23) How will you provision an Oracle database?
As an Oracle DBA or cloud DBA we need to collect requirements from customers or project owners and create the database per those specific requirements. For example, if your project immediately needs a clone of the production database for testing or UAT purposes, this can be done per standard specification requirements, often based on a template
24) Give details on some Oracle Cloud terminologies :-
Service levels
Virtual image
OCPU
Cloud storage
Subscription
Region
Compute
Console
Shape
25) In Oracle Cloud Infrastructure, what is the default encryption used in the file storage service?
The AES-128 encryption algorithm is used as the default encryption in Oracle Cloud Infrastructure
26) In Oracle Cloud Infrastructure, which items are encrypted at rest rather than in transit?
Both data and metadata
27) Is UpdateZoneRecord a valid REST API operation?
Not as written. The valid OCI DNS operation is UpdateZoneRecords (plural)
28) Is AddZone a valid REST API operation?
No. AddZone is not a valid operation; zones are created with CreateZone
29) Give some invalid REST API operations for DNS Zone in OCI :-
a) AddZone
b) UpdateZoneRecord (the valid operation is UpdateZoneRecords)
The valid zone operations are ListZones, GetZone, CreateZone, UpdateZone and DeleteZone
30) Where are IAM resources such as users and groups created?
They are created globally
31) You want to point a hostname to an IPv4 address. Which DNS resource record type will you make use of to accomplish this?
A record in DNS can be used for this purpose
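As an illustrative sketch, this is what such a record looks like in standard zone-file syntax (the hostname and address below are placeholders, not taken from any real zone):

```
www.example.com.   300   IN   A   203.0.113.10
```

The record maps www.example.com to the IPv4 address 203.0.113.10 with a 300 second TTL.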
32) In a TCP-level health check you attempt to make a TCP connection with the backend servers. How will you validate the response?
Based on the connection status
33) What is the extension of Terraform HCL configuration files?
These files come with the extension .tf
34) What configuration formats are supported by Terraform?
Hashicorp Configuration Language (HCL) format and JSON. Oracle Cloud Infrastructure resources can be described using HCL in Terraform configuration files
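As a minimal sketch of the HCL format (the region, compartment variable and display name are illustrative placeholders, not taken from any real tenancy):

```hcl
# main.tf - describes an OCI virtual cloud network in HCL
provider "oci" {
  region = "us-phoenix-1"
}

resource "oci_core_virtual_network" "demo_vcn" {
  cidr_block     = "10.0.0.0/16"
  compartment_id = var.compartment_ocid   # assumed to be defined elsewhere
  display_name   = "demo-vcn"
}
```

Running terraform plan against such a file shows the resources Terraform would create.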
35) How will you achieve high availability in Oracle Cloud Infrastructure?
Distribute application servers across all availability domains within a region. Note that block volumes are availability-domain-specific, so a volume in availability domain 1 cannot be attached to a compute instance in availability domain 2
36) What are the different types of compute instances that come as part of Oracle Cloud infrastructure?
Bare Metal, Virtual Machine
37) What are the components of backend set of a load balancer?
a) Load balancing policy
b) list of backend servers
c) health check policy
d) SSL handling
e) session persistence configuration
38) What are the resource record types supported by Oracle Cloud Infrastructure DNS service?
a) ALIAS
b) CNAME
c) DNAME
d) MX
e) NS
39) Outputs are the way to tell Terraform what data is important. When is this data outputted?
When apply is called
40) Is the output data created in Terraform when supply, variables or build are called?
No. Outputs are only rendered when apply is called
41) What is the default output at the end of a terraform apply operation?
Statistics on what was added, changed and destroyed, along with the values of outputs, become available
42) Which two relevant storage tiers are available in the object storage service?
Standard storage and Archive storage
43) Do boot volumes allow you to create significantly faster custom images of running VMs without having to reboot them?
Yes, they do
44) Are the backend set components in a load balancer physical or logical?
The backend set components of a load balancer in Oracle Cloud Infrastructure are logical in nature
45) In Oracle Cloud infrastructure what is default behavior of security list?
It uses stateful rules by default
46) What are the different formats of Terraform configuration files?
The Terraform domain-specific language format (HCL) and machine-readable JSON format files
47) How will you parameterize Terraform configurations?
Using input variables
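For instance, a sketch of an input variable with a default (the name and values are illustrative):

```hcl
variable "instance_shape" {
  type    = string
  default = "VM.Standard2.1"
}
```

The value can then be referenced as var.instance_shape and overridden at run time, e.g. terraform apply -var="instance_shape=VM.Standard2.2".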
48) You want to launch an instance in Oracle Cloud Infrastructure. What are all the required parameters?
Subnet, Availability domain, Virtual cloud network, instance shape, image OS
49) Which resource is tied to an availability domain in Oracle Cloud Infrastructure?
Subnet
50) What is the extension of JSON format machine-readable Terraform configuration files?
These files use the extension .tf.json
51) What are valid REST API operations for DNS Zone in OCI?
a) ListZones
b) GetZone
c) CreateZone
d) UpdateZone
e) DeleteZone
52) What actions are controlled by the Oracle Cloud Infrastructure layer?
a) creating file systems
b) listing file systems
c) associating file systems
d) mount targets
53) How many subnets are needed to create a public load balancer?
Two subnets are needed, each in a different availability domain within a single region
54) In Autonomous Transaction Processing, what SQL operations are not available?
Alter profile
55) In Autonomous Transaction Processing, what SQL operations are available?
a) alter pluggable database with datafile autoextend on
b) Create Tablespace
c) Drop Tablespace
d) Create Index
56) In Oracle Cloud Infrastructure, what is the default location in which automatic backups of cloud databases are created?
Local Storage
57) What is the main use of a load balancing policy in Oracle Cloud Infrastructure?
It tells the load balancer how to distribute incoming traffic to the backend servers
58) What is the way to tell Terraform what data is important?
Using outputs
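A sketch of an output block (the referenced resource name is hypothetical):

```hcl
output "instance_public_ip" {
  value = oci_core_instance.demo.public_ip
}
```

After terraform apply completes, the value is printed along with the add/change/destroy statistics.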
59) Can you tell Terraform the important data using extracts?
No. For this purpose outputs are used
60) In an HTTP-level health check you send requests to the backend servers at a specific URL. How will you validate the response?
a) based on the status code
b) based on the entity body returned
61) State the difference between VM Standard and VM Dense IO shapes
VM Dense IO shapes use local NVMe drives for storage, while VM Standard shapes use block storage
62) Which load balancer backend set components are optional?
SSL handling
session persistence configuration
63) Which load balancer backend set components are mandatory?
a) load balancing policy
b) list of backend servers
c) health check policy
d) TCP handling
64) In Oracle Cloud Infrastructure, is the IP networking layer of security control responsible for connecting the client instance to the mount target?
Yes
65) You are in the process of designing a load balancer to accept incoming traffic. What configurations must be made for this?
a) a listener must be configured
b) a certificate must be available
c) a security list that is open on the listener port must be available
66) What is the use of an internet gateway (IG) in Oracle Cloud Infrastructure?
It provides a path for network traffic between the VCN and the public internet
67) What are the different types of load balancing policy in an Oracle Cloud Infrastructure environment?
a) Round Robin
b) Least Connections
c) IP Hash
68) Is route hash a load balancing policy in an Oracle Cloud Infrastructure environment?
No
69) In Oracle Cloud Infrastructure, is the UNIX authentication layer of security controlling the actions for connecting the client instance to the mount target?
No. The IP networking layer of security control takes care of connecting clients to mount targets


AWS Cloud Practitioner Practice Exam


1) How will you monitor CPU utilization of an EC2 resource in AWS environment?
a) AWS Trusted Advisor
b) AWS Cloudwatch
c) AWS Cloudmonitor
d) AWS Cloudcheck
Answer : b
2) What is the strategy to control access to your Amazon EC2 instances?
a) IAM Policies
b) EC2 Security Groups
c) Infra Security Groups
d) AWS CloudWatch
Answer : b
3) You have launched a data collection campaign that is to last for two days. Which AWS EC2 purchase option best suits your needs?
a) Spot Instances
b) Reservation Instances
c) On-demand instances
d) dedicated instances
Answer : c
Explanation : On-demand instances allow us to pay for compute capacity on an hourly basis, with the option to increase or decrease capacity and pay the hourly rate only for the instances used
4) Your environment runs on an Oracle database. You are migrating your infrastructure to the cloud. What is the way to migrate to the AWS Oracle database service with minimal impact on the source database?
a) AWS Database Migration Service
b) AWS Server Migration Service
c) RDS Multizone deployment
d) Oracle conversion service
Answer : a
5) Which AWS service allows for object level storage in AWS?
a) Amazon EBS
b) Amazon SQS
c) Amazon S3
d) Amazon Glacier
Answer : c
6) To get a DNS service in AWS cloud which AWS service will you make use of?
a) EC2
b) Route 53
c) VPC
d) VPN
Answer : b
7) You are making use of an on-demand instance for a temporary project. Do you need to pay termination fees once you terminate the instance?
a) Yes
b) No
Answer : b
8) What are the features of AWS on-demand instances?
a) Pay as you go model. Pay only for usage
b) You are charged per second based on hourly rate
c) There is no upfront cost for instances
d) There is an upfront cost as well as termination fees
Answer : a,b,c
9) You want to launch and manage a virtual private server with AWS. What is the easiest way to do that?
a) Using Cloudwatch
b) Using VPC in AWS
c) Using Amazon Lightsail
d) Using Route 53
Answer : c
10) You want to own your own private network in AWS. How can you accomplish this?
a) Using AWS route 53
b) Using VPC
c) Using S3
d) Using SQS
Answer : b
11) You are choosing the AWS Lambda service and have been asked to report the cost structure of this AWS service to your senior management. Do you know how this service usage is charged?
a) Based on storage consumption
b) Based on number of requests for your project functions
c) Based on compute time we consume
d) Based on compute capacity we consume
Answer : b,c
12) How is operational excellence achieved in an AWS environment?
a) Proactively manage failures
b) Review and refine the operational procedures periodically
c) Expect failure
d) Perform operations as a code
e) Make large changes
Answer : c,d
13) As a security measure you have been asked to figure out a way to protect running instances from termination by anyone. What measure will you take?
a) Create a policy document that denies EC2 instance termination and attach it to all existing IAM identities
b) Create a policy document that allows EC2 instance termination and attach it to all existing IAM identities
c) Create a role document that denies EC2 instance termination and attach it to all existing IAM identities
d) None of the above
Answer : a
14) What are some of the benefits of AWS organizations?
a) Centrally manage access policies for multiple AWS accounts
b) Automation of AWS account creation and management
c) Control access to AWS services
d) Enable consolidated billing across multiple aws accounts
e) None of the above
Answer : a,b,c,d
15) Which AWS service will you choose to easily generate and use your own encryption keys on the AWS cloud?
a) AWS WAF
b) AWS Shield
c) AWS lambda
d) AWS Certificate manager
e) AWS CloudHSM
Answer : e
16) In the AWS shared responsibility model, what controls do customers fully inherit from AWS?
a) Environmental controls
b) Patching Controls
c) Communication controls
d) Resource physical controls
Answer : a,d
17) You don't want to manage the OS in your infrastructure. Which cloud computing model helps you accomplish this?
a) SaaS
b) IaaS
c) PaaS
d) Hybrid Cloud
Answer : c
18) Which AWS service will you make use of for cost optimization?
a) AWS Trusted Advisor
b) AWS Cloudmeter
c) AWS WAF
d) AWS Cloudinspector
Answer : a
19) Which AWS service will you make use of to co-ordinate tasks across distributed application components?
a) AWS WAF
b) AWS SWF
c) AWS S3
d) AWS Trusted Advisor
Answer : b
20) You need to have a common data source for multiple EC2 instances. How can you accomplish this in an AWS environment?
a) AWS S3
b) AWS Storage Gateway
c) AWS Elastic File Share
d) AWS Elastic File System
Answer : d
21) You have chosen the simple storage service. How much can you store here?
a) 1TB
b) 5PB
c) unlimited storage
d) 5TB
Answer : c
22) You have to enable virtual multi-factor authentication. Which AWS cloud service will you make use of for this purpose?
a) Identity and access management
b) AWS Inspector
c) AWS Autoscaling
d) AWS S3
Answer : a
23) What are the factors that AWS CloudFront cost depends on? Choose all that apply
a) Storage Volume
b) Requests
c) Traffic distribution
d) Data Transfer Out
Answer : b,c,d
24) You work for an e-commerce website company that has decided to migrate to the AWS cloud. They are a bit concerned about security as this involves online transactions. You have decided to use exclusive hardware in the AWS cloud rather than shared hardware. Which EC2 instance type will you choose for this purpose?
a) Distinct Instances
b) Dedicated Instances
c) Reserved Instances
d) Hardware Instances
Answer : b
25) Which AWS service will you make use of to monitor the CPU utilization of an EC2 instance?
a) AWS CloudWatch
b) AWS Autoscaling
c) AWS Inspector
d) AWS SWF
Answer : a
26) How will you access your EC2 instances in an AWS environment?
a) Using key pairs
b) Using routing tables
c) Using MFA
d) Using instance password
Answer : a
27) What are the different AWS services used to store files? Choose all that apply
a) AWS S3 the simple storage service
b) AWS Chime
c) AWS Elastic Block Store EBS
d) AWS Elastic File System EFS
Answer : a,c,d
28) How will you protect your AWS root account?
a) Sharing AWS password access keys with trusted persons
b) Enable AWS multi-factor authentication
c) Create an access key only if needed
d) Using a strong password
Answer : b,c,d
29) You have been asked to audit the security of a VPC. You need to start by analyzing what traffic is allowed to and from various EC2 instances. Which two parts of the VPC need to be checked to accomplish this?
a) Security groups
b) NACLs
c) Traffic Manager
d) Subnets
Answer : a,b
30) Your AWS bill shows that the costs are high. You have been asked to determine where the high costs are coming from. What will you do?
a) Make use of the AWS price list API
b) Activate cost allocation tags to categorize and track costs
c) Use CloudWatch to create billing alerts that notify you when service usage exceeds thresholds you define
d) Use budget explorer to estimate AWS costs
Answer : b,c


AWS cloud support engineer interview questions


AWS is an Amazon company with lots of openings for fresh talent and is open to fresh ideas and innovation. Amazon Web Services, the cloud service that has moved infrastructure from the physical data center onto the online cloud, has been hiring engineers in various capacities including cloud support associate, cloud support engineer, senior cloud support engineer, cloud architect and support manager. As a fresh graduate out of college, this is a lucrative career option you can eye. Here we have proposed some interview questions that will help you crack an AWS interview, including AWS cloud support engineer interview questions. The questions overlap across the AWS cloud support associate, cloud support engineer and cloud architect roles, as all of these positions demand good knowledge, skill and expertise in Linux/UNIX and networking basics to start with.
Note that these are not actual interview questions and have nothing to do with the real interviews. This is an aid prepared by analysing the AWS technology stack, current job openings and the job role responsibilities advertised on popular websites
1) Why should we consider AWS? How would you convince a customer to start using AWS?
The primary advantage is going to be cost savings. As a customer support engineer your job involves talking to current and prospective customers to help them determine whether they really should move to AWS from their current infrastructure. In addition to a convincing answer in terms of cost savings, it is better to give a simple explanation of the flexibility, the elastic capacity planning that offers pay-as-you-use infrastructure, the easy-to-manage AWS console, etc
2) What is your current job profile?How would you add value to customer?
Though AWS is looking to hire fresh talent for cloud support engineer openings, if you have work experience on the infrastructure side of the business, say as a system administrator, network administrator, database administrator, firewall administrator, security administrator or storage administrator, you are still a candidate to be considered for interview.
All they are looking for is overall infrastructure knowledge: a little knowledge about the different tech stacks, how they inter-operate, and what it is like once the infrastructure is on the web rather than in a physical data center.
If you don't have experience with AWS, don't worry. Try to leverage the ways and means you adopted to solve customer support calls, both internal and external, to let them know how you can bring value to the table.
Have some overview of how the different components of infrastructure interact.
AWS wants to see proactive measures toward the customer relationship. Say you are going to discuss a project or an issue with a customer: it is better to have some preparatory work that comes in handy rather than being reactive. Value addition comes in terms of recommending the best solution and utilization of AWS services, helping customers make decisions easily and fast
3) Do you know networking?
Note that candidates can be from many different backgrounds, say development, infrastructure, QA, customer support, network administration, system administration or firewall administration. You should still know networking. The cloud is network based, and to fix the application issues that get escalated, networking knowledge is very important
4) What networking commands do you make use of on daily basis to fix issues?
When we work with servers, be they physical or virtual, the first command that comes in handy to trace the request/response path is traceroute. On Windows systems the equivalent command is tracert
There are some more important and interesting commands – ping, ipconfig, ifconfig – that report network connectivity, network addresses and interface configuration
DNS commands – nslookup; also look up the /etc/resolv.conf file on Linux systems to get details on DNS
5) What is the advantage of using TCP protocol?
TCP is used to exchange data reliably. It uses mechanisms of sequencing and acknowledgment, error detection and error recovery. This gives reliable applications, but at the cost of a hit in transmission time
6) What is UDP?
User Datagram Protocol (UDP) is a connectionless protocol that can be used for fast, efficient applications that need less transmission time compared to TCP
7) Do you know how an internet works in your environment?
This can be your home or office network. Learn more about the modem and its role in establishing the connection
8) What is a process? How do you manage processes in Linux:-
In a Linux/Unix based OS a process is started or created when a command is issued. In simple terms, while a program is running an instance of the program is created; this is the process. To manage processes in Linux, the process management commands come in handy
ps – this is the commonly used process management command to start with. ps command provides details on currently running active processes
top – This command provides a real-time view of processor activity, listing all running processes along with processor and memory usage, whereas ps shows a snapshot of active processes
kill – To kill a process using its process id, the kill command is used (ps provides the process id). To kill a process issue
kill pid
killall proc – This command is similar to kill; it kills all the processes with the name proc
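A small sketch tying these commands together, using a throwaway sleep process as a stand-in for a real workload:

```shell
# Start a background process we can manage
sleep 300 &
pid=$!

# ps shows the active process by its id
ps -p "$pid" -o pid,comm

# Kill it by process id, then confirm it is gone
kill "$pid"
wait "$pid" 2>/dev/null || true     # reap the terminated job
kill -0 "$pid" 2>/dev/null || echo "process $pid terminated"
```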
9) Give details on foreground and background jobs command in Linux:-
fg – this command brings the most recent job to the foreground. Typing fg will resume the most recently suspended job
fg n – this command brings job n to the foreground; for example, fg 1 brings job 1 to the foreground
bg – this command is used to resume a suspended program in the background without bringing it to the foreground. The jobs command lists stopped jobs as well as current background jobs
10) How to get details on current date and time in Linux?
Make use of the date command, which shows the current date and time. To get the current month's calendar use the cal command
uptime – shows how long the system has been running
11) What is difference between command df and du?
In Linux both df and du are space related commands showing system space information
df – this command provides details on disk and file system usage
du – to get details on directory space usage use this command
free – this command shows details on memory and swap usage
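A quick illustration of the difference between df and du (the demo directory and file are arbitrary):

```shell
# df reports free and used space per mounted file system
df -h /tmp

# du reports how much space a directory tree actually consumes
mkdir -p /tmp/du_demo
dd if=/dev/zero of=/tmp/du_demo/blob bs=1024 count=1024 2>/dev/null  # 1 MiB file
du -sh /tmp/du_demo
```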
12) What are the different commands and options to compress files in Linux?
Let's start with creating a tar named test.tar containing the needed files
tar cf test.tar files
Once the tar is available, uploaded on AWS there is a need to untar the files. Use the command as follows:
tar xf file.tar
We can create a tar with gzip compression, which minimizes the size of the files to be transferred and creates test.tar.gz at the end
tar czf test.tar.gz files
To extract the gzipped tar compressed files use the command:
tar xzf test.tar.gz
Bzip2 compression can be used to create a tar as follows
tar cjf test.tar.bz2 files
To extract bzip2 compressed files use
tar xjf test.tar.bz2
To simply make use of gzip compression use
gzip testfile – This creates testfile.gz
To decompress testfile.gz use gzip -d testfile.gz
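Putting the commands above together as a round trip (file names are arbitrary):

```shell
mkdir -p /tmp/tar_demo && cd /tmp/tar_demo
echo "hello" > a.txt
echo "world" > b.txt

# Create a gzip-compressed tar of the two files
tar czf test.tar.gz a.txt b.txt

# Extract into a separate directory and verify the contents survived
mkdir -p extracted
tar xzf test.tar.gz -C extracted
cat extracted/a.txt extracted/b.txt   # prints hello and world
```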
13) Give examples on some common networking commands you have made use of?
Note that the AWS stack is primarily dependent on Linux, and the over-the-cloud architecture makes it heavily network dependent. As a result, AWS interview questions can be networking related irrespective of your system admin, database admin or bigdata admin background. Learn these simple networking commands:
When a system is unreachable first step is to ping the host and make sure it is up and running
ping host – This pings the host and output results
Domain related commands, as AWS has become the preferred hosting for major internet based companies and SaaS firms
To get DNS information of the domain use – dig domain
To get whois information on domain use – whois domain
Host reverse lookup – dig -x host
Download file – wget file
To continue stopped download – wget -c file


14) What is your understanding of SSH?
SSH, the secure shell, is widely used for safe communication. It is a cryptographic network protocol for operating network services securely over an unsecured network. Some of the commonly used ssh commands include
To connect to a host as a specified user using ssh use this command:
ssh username@hostname
To connect to a host on a specified port make use of this command
ssh -p portnumber username@hostname
To enable a keyed or passwordless login into specified host using ssh use
ssh-copy-id username@hostname
15) How do you perform search in Linux environment?
Searching and pattern matching are some common functions that typically happens in Linux environment. Here are the Linux commands:
grep – Grep command is the first and foremost when it comes to searching for files with pattern. Here is the usage:
grep pattern_match test_file – This will search for pattern_match in test_file
Search for pattern in directory that has set of files using recursive option as follows – grep -r pattern dir – Searches for pattern in directory recursively
Pattern can be searched in concatenation with another command (i.e) output of a command can be used as input for pattern search and match – first command| grep pattern
To find all instances of a file use locate command – locate file
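The grep usages above can be tried end to end; here is a minimal sketch using a throwaway file and directory (the names /tmp/grep_demo and test_file are arbitrary examples, not from the article):

```shell
# Start clean, then create a sample file to search
rm -rf /tmp/grep_demo
mkdir -p /tmp/grep_demo
printf 'alpha\nbeta\nalpha beta\n' > /tmp/grep_demo/test_file

# Basic pattern search in a single file (matches two lines)
grep 'alpha' /tmp/grep_demo/test_file

# Recursive search through a directory
grep -r 'beta' /tmp/grep_demo

# Pattern match on the output of another command
ls /tmp/grep_demo | grep 'test'
```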
16) Give details on some user related commands in Linux:-
Here are some user related Linux commands:
w – displays details on who is online
whoami – to know whom you are logged in as
finger user – displays information about the user
17) How to get details on kernel information in Linux?
uname -a command provides details on kernel information
18) How to get CPU and memory info in Linux machine?
Issue the following commands:
cat /proc/cpuinfo for cpu information
cat /proc/meminfo for memory information
19) What are the file system hierarchy related commands in linux?
File system hierarchy, starting with raw disks, the way disks are formatted into file systems, and files grouped together as directories, is important for cracking an AWS interview. Here are some file system hierarchy related commands that come handy
touch filename – creates a file with name filename. This command can also be used to update a file's access and modification timestamps
ls – lists files and directories
ls -al – All files including hidden files are listed with proper formatting
cd dir – change to specified directory
cd – Changes to home directory
pwd – called present working directory that shows details on current directory
Make a new directory using mkdir command as follows – mkdir directory_name
Remove file using rm command – rm file – removes file
To delete directory use -r option – rm -r directory_name
Remove a file forcefully using -f option – rm -f filename
To force remove a directory and its contents use – rm -rf directory_name
Copy the contents from one file to another – cp file1 file2
Copy the contents across directory use – cp -r dir1 new_dir – If new directory does not exist create this first before issuing copy command
Move or rename a file using mv command – mv file1 new_file
If new_file is a directory that already exists, file1 will be moved into that directory instead of being renamed
more filename – output the contents of the file
head file – output the first 10 lines of the file
tail file – output the last 10 lines of the file
tail -f filename – output the contents of the file as it grows, to start with display last 10 lines
Create symbolic link to a file using ln command – ln -s file link – called soft link
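A short worked session can tie the commands above together; this is a minimal sketch in a throwaway directory (/tmp/fs_demo and the file names are arbitrary examples):

```shell
# Work in a throwaway directory so nothing important is touched
rm -rf /tmp/fs_demo
mkdir -p /tmp/fs_demo && cd /tmp/fs_demo

touch filename                 # create an empty file
mkdir -p dir1                  # make a directory
cp filename dir1/copy_of_file  # copy the file into the directory
mv filename renamed_file       # rename the file
ls -al                         # list everything including hidden files
pwd                            # confirm the current directory
rm -r dir1                     # remove the directory and its contents
```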
20) What command is used for displaying manual of a command?
Make use of the command man command_name
21) Give details on app related commands in linux:-
which app – shows details on which app will be run by default
whereis app – shows possible locations of application
22) What are the default port numbers of http and https?
Knowing the http and https port numbers is often the first step in troubleshooting a web application issue reported by a customer
Default port number of http is 80 (or) 8080
Default port number of https is 443
23) What is use of load balancer?
A load balancer is used to increase the capacity and reliability of applications, where capacity means the number of users connecting to the applications. The load balancer distributes network and application traffic across many different servers, increasing application capacity
24) What is sysprep tool?
The System Preparation tool comes as a free tool with Windows and can be accessed from the %systemroot%\system32\sysprep folder. It is used to duplicate, test and deliver new installations of Windows based on an established installation
25) User is not able to RDP into server. What could be the reason?
The probable reason is that the user is not part of the Remote Desktop Users local group on the terminal server
26) How would you approach a customer issue?
Most of the work of an AWS support engineer involves dealing with customer issues. As with any other support engineer, an AWS engineer should question the customer, listen to them, and confirm what has been collected. This is called the QLC approach, a much needed step to capture the issue description and confirm it
27) What types of questions can you ask customer?
A support engineer can ask two types of questions
1) Open ended questions – your question is a single statement, and the answer you expect from the customer is detailed
2) Closed questions – your question has Yes or No, true or false type answers, or a single word answer in some cases
28) How do you consider customer from AWS technology perspective?
Even though the customer may be a long-standing AWS customer, always treat them as someone with no AWS knowledge: talk more with them and explain more details in order to get a correct issue description statement
29) Give details on operators in linux?
> – the greater-than symbol is the output redirection operator, used to write the output of a command into a file. Typically this is used to redirect the output of a command into a logfile. If the file already exists its contents are overwritten and only the most recent content is retained
>> – this is the same as output redirection except that it appends to the file if the file already exists
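The overwrite versus append behavior is easy to demonstrate; a minimal sketch (the file /tmp/redir_demo.log is an arbitrary example name):

```shell
echo "first"  >  /tmp/redir_demo.log   # creates (or overwrites) the file
echo "second" >  /tmp/redir_demo.log   # overwrites: only "second" remains
echo "third"  >> /tmp/redir_demo.log   # appends: the file now has two lines
cat /tmp/redir_demo.log                # prints: second, then third
```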
30) Explain difference between hardlink and softlink in simple terms?
A hardlink is a link to the inode, which holds the file contents; a softlink is a link to the filename. If the target file is renamed or removed, the softlink breaks while the hardlink keeps working. The ln command is used for both: plain ln creates a hardlink, and ln -s creates a softlink
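The difference shows up clearly when the original file is renamed; here is a minimal sketch (all the file names under /tmp are arbitrary examples):

```shell
cd /tmp
rm -f link_demo_file link_demo_moved demo_hard demo_soft
echo "hello" > link_demo_file
ln    link_demo_file demo_hard     # hardlink: a second name for the same inode
ln -s link_demo_file demo_soft     # softlink: a pointer to the filename

mv link_demo_file link_demo_moved  # rename the original
cat demo_hard                      # still works: the inode is unchanged
cat demo_soft 2>/dev/null || echo "soft link is broken"
```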
31) What are some common linux commands AWS engineer should be aware of?
1) cat – This is plain simple command to access a file in UNIX
2) ls – Provides details on list of files and directories
3) ps – The process command provides details on list of processes in the system
4) vmstat – Virtual memory statistics comes handy during performance tuning
5) iostat – Command to determine I/O issues
6) top – This command provides details on top resource consuming processes
7) sar – This is a UNIX utility mainly used for tuning purpose
8) rm – This command is used to remove files
9) mv – moving the files and directories
cd – Enables us to change directories
date – gives us the time and date
echo – we can display text on our screen
grep – a pattern matching command. It enables us to see if a certain word or set of words occurs in a file or in the output of any other command
history – gives us the commands previously entered in the current shell session
passwd – this command enables us to change our password
pwd – to find out our present working directory or to simply confirm our current location in the file system
uname – gives all details of the system when used with options. We get details including systemname,kernel version etc.
whereis – gives us exact location of the executable file for the utility in the question
which – the command enables us to find out which version(of possibly multiple versions)of the command the shell is using
who – this command provides us with a list of all the users currently logged into the system
whoami – this command indicates the effective user you are logged in as. If a user logs in as userA and does an su to userB, whoami displays userB; use who am i to see the original login
man – this command will display a great detail of information about the command in the question
find – this command gives us the location of the file in a given path
more – this command shows the contents of a file,one screen at a time
ps – this command gives the list of all processes currently running on our system
cat – this command lets us to read a file
vi – this is referred to as text editor that enables us to read a file and write to it
emacs- this is a text editor that enables us to read a file and write to it
gedit – this editor enables us to read a file and write to it
diff – this command compares the two files, returns the lines that are different,and tells us how to make the files the same
export – we can make the variable value available to the child process by exporting the variable.This command is valid in bash,ksh.
setenv – this is same as export command and used in csh,tcsh
env – to display the set of environment variables at the prompt
echo <$variablsname> – displays the current value of the variable
source – whenever an environment variable is changed, we need to export the changes. The source command puts the environment variable changes into immediate effect. It is used in csh, tcsh
.profile – in ksh,bash use . .profile command to get same result as using source command
set noclobber – to avoid accidental overwriting of an existing file when we redirect output to a file.It is a good idea to include this command in a shell-startup file such as .cshrc
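The effect of export in the list above can be verified directly: a variable set without export stays local to the current shell, while an exported one is inherited by child processes. A minimal sketch (MYVAR is an arbitrary example name):

```shell
# Without export, the child shell does not see the variable
MYVAR="parent only"
sh -c 'echo "child sees: $MYVAR"'   # prints an empty value

# After export, child processes inherit it
export MYVAR
sh -c 'echo "child sees: $MYVAR"'   # prints: child sees: parent only
```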
32) What are the considerations while creating username/user logins for Security Administration purpose?
It is a good practice to follow certain rules while creating usernames/user logins
1) User name/user login must be unique
2) User name/user login must contain a combination of 2 to 32 letters, numerals, underscores(_),hyphens(-), periods(.)
3) There should not be any spaces/tab spaces while creating user name/usr logins
4) User name must begin with a letter and must have at least one lowercase letter
5) Username must be between three to eight characters long
6) It is a best practice to have alphanumeric user names/user logins. It can be a combination of lower case letters, upper case letters, numerals, punctuation
33) Give details on /etc/profile the system profile file and its usage in linux environment:-
This is another important UNIX system administration file, and it has much to do with user administration. /etc/profile is the system profile file and is run when we first log into the system. After this, the user profile file is run; the user profile is where we define the user's environment details. Following are the different forms of user profile files:
.profile
.bash_profile
.login
.cshrc
/home/username is the default home directory. The user's profile file resides in the user's home directory.
34) How to perform core file configuration in Linux environment?
Let us consider a UNIX flavor, say Solaris. Core file configuration involves the following steps.
1) As a root user, use the coreadm command to display the current coreadm configuration :
# coreadm
2) As a root user, issue the following command to change the core file setup :
# coreadm -i /cores/core_new.%n.%f
3) Run the coreadm command again to verify that the changes have been made permanent
# coreadm
The output line “init core file pattern :” will reflect the new changes made to the core file configuration.
From Solaris 10 onwards, the coreadm process is configured by the Service Management Facility (SMF) at system boot time. We can use the svcs command to check the status. The service name for the coreadm process is:
svc:/system/coreadm:default
35) How do you configure or help with customer printer configuration?
Administering printers involves the steps below.
Once the printer server and printer client is set up, we may need to perform the following administrative tasks frequently :
1) Check the status of printers
2) Restart the print scheduler
3) Delete remote printer access
4) Delete a printer
36) How is zombie process recognized in linux and its flavors? How do you handle zombie process in linux environment?
A zombie process in UNIX/Linux/Sun Solaris/IBM AIX is recognized by the state Z. It doesn't use CPU resources but still uses space in the process table.
It is a dead process whose parent did not clean up after it, and it is still occupying space in the process table.
Zombies are defunct processes that are automatically removed when the system reboots.
Keeping the OS and applications up to date with the latest patches helps prevent zombie processes.
Properly using the wait() call in the parent process prevents zombie processes.
SIGCHLD is the signal delivered to the parent upon child termination; on receiving it the parent should reap the child with wait() (proper termination).
kill -s SIGCHLD parent_PID – prompts the parent to reap its zombie children (the numeric value of SIGCHLD varies by platform, e.g. 17 on Linux, 18 on Solaris)
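Spotting zombies in practice is a one-liner: their STAT field in ps begins with Z. A minimal sketch (on a healthy system the listing will usually show only the header row):

```shell
# List zombie (defunct) processes: keep the header plus rows whose
# STAT column (field 3) starts with Z
ps -eo pid,ppid,stat,comm | awk 'NR==1 || $3 ~ /^Z/'
```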
37) What is the use of /etc/ftpd/ftpusers in Linux?
/etc/ftpd/ftpusers is used to restrict the users who can use FTP (File Transfer Protocol). FTP is a security threat as the password is not encrypted in transit. FTP must not be used by sensitive user accounts such as root, snmp, uucp, bin, admin (default system user accounts).
As a security measure the file /etc/ftpd/ftpusers is created by default. The users listed in this file are not allowed to use ftp. The ftp server in.ftpd reads this file before allowing users to perform ftp. If we want to restrict a user from using ftp we include their name in this file.
38) Have you ever helped a customer restore a root file system in their environment?
Restoring the root file system (/) involves the steps below, which apply to both SPARC and x86 (Intel) machines.
1) Log in as root user. It is a security practice to login as normal user and perform an su to take root user (super user) role.
2) Appearance of # prompt is an indication that the user is root
3) Use who -a command to get information about current user
4) When the root filesystem (/) is lost because of disk failure, boot from CD or from the network
5) Add a new system disk to the system on which we want to restore the root (/) file system
6) Create a file system using the command :
newfs /dev/rdsk/partitionname
7) Check the new file system with the fsck command :
fsck /dev/rdsk/partitionname
8) Mount the filesystem on a temporary mount point :
mount /dev/dsk/devicename /mnt
9) Change to the mount directory :
cd /mnt
10) Write protect the tape so that we can’t accidentally overwrite it. This is an optional but important step
11) Restore the root file system (/) by loading the first volume of the appropriate dump level tape into the tape drive. The appropriate dump level is the lowest dump level of all the tapes that need to be restored. Use the following command :
ufsrestore -rf /dev/rmt/n
12) Remove the tape and repeat the step 11 if there is more than one tape for the same level
13) Repeat steps 11 and 12 with the next dump levels. Always begin with the lowest dump level and end with the highest dump level tape
14) Verify that file system has been restored :
ls
15) Delete the restoresymtable file which is created and used by the ufsrestore utility :
rm restoresymtable
16) Change to the root directory (/) and unmount the newly restored file system
cd /
umount /mnt
17) Check the newly restored file system for consistency :
fsck /dev/rdsk/devicename
18) Create the boot blocks to restore the root file system :
installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/devicename — SPARC system
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/devicename — x86 system
19) Remove the last backup tape, and insert a new tape onto which we can write. Make a dump level 0 backup of the newly restored system by issuing the following command :
ufsdump 0ucf /dev/rmt/n /dev/rdsk/deviceName
This step is needed because ufsrestore repositions the files and changes the inode allocations – the old backup will not truly represent the newly restored file system
20) Reboot the system :
#reboot (or)
# init 6
System gets rebooted and newly restored file systems are ready to be used.
21) What is the monitoring and reporting tool that comes as part of AWS console?
Cloudwatch, the tool listed under the management section of the AWS console, helps with monitoring and reporting metrics in an AWS environment. The following metrics can be monitored as part of Cloudwatch:
1) CPU
2) Disk utilization
3) Network
4) Status Check
In addition to the above metrics, RAM can be monitored as a custom metric using Cloudwatch
22) Give details on status check in cloudwatch?
In an AWS environment status of both instance and system needs to be monitored. As such there are system status check as well as instance status check sections associated with each and every EC2 instance. As the name implies system status check makes sure that physical machines on which the instances have been hosted is in good shape. Instance status check is at the EC2 instance which literally translates to virtual machine in AWS environment
23) What happens if a failure is reported in status check section of AWS?
Depending on what type of failure has been reported following actions can be taken:
In case of system failure – Restart the virtual machine. In AWS terms, restart the EC2 instance. This will automatically bring up the virtual machine in a physical hardware that is issue free
Instance failure – Depending on the type of failure reported on the EC2 instance, stopping and starting the virtual machine may fix the issue. In case of disk failure appropriate action can be taken at the operating system level to fix the issues
24) What is an EC2 instance in AWS?
This is the basic component of AWS infrastructure. EC2 translates to Elastic compute cloud. In real-time this is a pre-built virtual machine template hosted in AWS that can be chosen, customized to fit the application needs
This is the prime AWS service that eliminates a business necessity to own a data center to maintain their servers, hosts etc
25) What is an ephemeral storage?
An ephemeral storage is a storage that is temporary (or) non-persistent
26) What is the difference between instance and system status check in cloudwatch?
An instance status check checks the EC2 instance in an AWS environment whereas system status check checks the host
27) What is the meaning of EBS volume status check warning?
An EBS volume is degraded or severely degraded. Hence, a warning in an EBS environment is something that can't be ignored, as it might be with other systems
28) What is the use of replicalag metric in AWS?
Replicalag is a metric used to monitor the lag between the primary RDS instance (Relational Database Service, the database equivalent in an AWS environment) and the read replica, the secondary database system that is in read-only mode
29) What is the minimum granularity level that a cloudwatch can monitor?
Minimum granularity that cloudwatch can monitor is 1 minute. In most real-time cases 5 minute metric monitoring is configured
30) What is the meaning of ebs volume impaired ?
EBS volume impaired means that the volume is stalled or not available
31) Where is ELB latency reported?
The latency reported by the Elastic Load Balancer (ELB) is available in Cloudwatch
32) What is included in EC2 instance launch log?
Once the EC2 instance is created, configured and launched following details are recorded in instance launch log:
Creating security groups – The result needs to be Successful. In case of issues the status will be different
Authorizing inbound rules – For proper authorization this should show Successful
Initiating launches – Again this has to be Successful
At the end we see a message that says Launch initiation complete
33) What will happen once an EC2 instance is launched?
After the EC2 instance has been launched it will be in the running state. Once an instance is in the running state it is ready for use. At this point usage hours, which are typically the billable resource usage, start accruing. This continues until we stop or terminate the instance. The next immediate step is to view the instance
34) What is maximum segment size (mss)? How is this relevant to AWS?
The maximum segment size is the important factor that determines the size of an unfragmented data segment. AWS is cloud based and the hosted products are accessed via an internet connection. For data segments to successfully pass through all the routers during transit, their size should be acceptable across routers. If they grow too big the data segments get fragmented, which eventually leads to network slowness
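As a concrete illustration of the arithmetic: on a typical Ethernet link the MTU is 1500 bytes; subtracting a 20-byte IPv4 header and a 20-byte TCP header (both without options) leaves an MSS of 1460 bytes. A minimal sketch of the calculation:

```shell
MTU=1500        # typical Ethernet MTU in bytes
IP_HDR=20       # IPv4 header without options
TCP_HDR=20      # TCP header without options

MSS=$(( MTU - IP_HDR - TCP_HDR ))
echo "MSS = $MSS bytes"   # MSS = 1460 bytes
```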
35) How does a load balancer check the EC2 instance availability in an AWS environment?
Periodically the load balancer sends pings, attempts connections, and sends requests to EC2 instances to check their availability. These tests are referred to as health checks in an AWS environment
36) Give details on health check and status of instances in an AWS environment :-
In an AWS environment to check the status of EC2 instances the load balancer periodically sends pings, attempts connection, sends requests to EC2 instances. This process is referred to as health check in an AWS environment
If an EC2 instance is healthy and functioning normal at the time of health check the status will be InService
If an instance does not respond back this is unhealthy and its status will be OutOfService
37) What are the instances that are candidates to be part of health check?
If an instance is registered with a load balancer this is a candidate under health check process in AWS. This covers instances that are in both healthy and unhealthy statuses which are typically InService and OutOfService respectively
38) What happens when an instance in an AWS environment has been found to be in an unhealthy state?
The requests will not be routed to unhealthy instances by load balancer. Once the instance health is restored back to healthy status requests are routed here
39) What is IPSec?
IPSec refers to Internet Protocol Security, which is used to securely exchange data over a public network so that no one can view and read it except the intended parties. IPSec makes use of two mechanisms that work together to exchange data securely over public networks. Neither mechanism is mandatory; we can use just one or both together. The two mechanisms of IPSec are
Authentication header – Used to digitally sign the entire contents of each packet that protects against tampering, spoofing, replay attacks. The major disadvantage of authentication header is that though this protects data packets against tampering the data is still visible to hackers. To overcome this ESP can be used
Encapsulating Security Payload – ESP provides authentication, replay-proofing and integrity checking by making use of 3 components namely ESP header, ESP trailer, ESP authentication block
40) What are the many different types of IPSec modes?
Tunnel mode and transport mode are the two modes in which we can configure IPSec to operate. Tunnel mode is the default and is used for communication between gateways like routers and ASA firewalls, or from an end-station to a gateway. Transport mode is used for end-to-end communication between a client and a server, or between a workstation and a gateway, such as a telnet connection or a remote desktop connection between a workstation and a server over VPN
41) In a class B network give relationship between network /count and number of hosts possible :-
Network /count – No of hosts possible
16 65536
17 32768
18 16384
19 8192
20 4096
21 2048
22 1024
23 512
24 256
25 128
26 64
27 32
28 16
29 8
30 4
42) In a class C network give relationship between network /count and number of hosts possible :-
Network /count – No of hosts possible
24 256
25 128
26 64
27 32
28 16
29 8
30 4
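Both tables above follow the same rule: a /count prefix leaves 32 − count host bits, giving 2^(32 − count) addresses per subnet (two of which, the network and broadcast addresses, are not usable by hosts). A minimal sketch of the calculation (the function name is an arbitrary example):

```shell
# Total addresses in a prefix: 2^(32 - prefix_length)
addresses_for_prefix() {
  echo $(( 1 << (32 - $1) ))
}

addresses_for_prefix 16   # 65536 (class B default mask)
addresses_for_prefix 24   # 256   (class C default mask)
addresses_for_prefix 30   # 4
```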
43) You are a DBA and have been assigned task of migrating oracle database to AWS with minimal to no impact to source database. How will you achieve this?
Make use of Database Migration Service. This will help you migrate databases securely and easily. This tool enables live migration of data making sure source database is up and running during migration
44) Which AWS service will you make use of to monitor CPU utilization of an EC2 resource in AWS environment?
AWS Cloudwatch is a monitoring service that can be used for management as well in an AWS environment. We can get data insights to monitor system performance and optimize resource utilization in an AWS environment
44) Give details on some AWS terminologies you need to be aware of as support engineer :-
Here are some common terminologies that you will come across in your daily job
EC2 instance – This is how the virtual machine is referred to in an AWS environment
Region – The physical geographical locations that host AWS datacenters is referred to as region. This keeps expanding with growth of AWS
RDS – The database related service commonly called as relational database services
S3- The storage service from AWS
EBS – The elastic block storage, another storage option from AWS
Availability zone – Commonly referred to as AZ
Virtual private cloud – Commonly called VPC, this is a datacenter in virtual form in AWS
45) What is the use of Wireshark?
This is an open-source packet analyzer tool commonly used to monitor network traffic coming in and out of the servers hosting applications. At times it is used to make sure there are no security threats in the system


Posted on

AWS Certified Solutions Architect Associate 2018 Exam Blue Print


The AWS Certified Solutions Architect Associate exam has undergone evolution and the new exam is available from 2018 onwards. This exam has emerged from its beta phase into a full version. Let us see some details on this latest exam blueprint
1) This exam comprises 60 questions. The questions are scenario based and will test real-time understanding of AWS. They are multiple-choice and multiple-answer questions; the number of answers to choose can be one or many. If a certain number of answers must be chosen, "choose all that apply" is specified as part of the question
2) This is a 2 hour 10 minute long exam (about 130 minutes in length)
3) Exam score range is from 100 to 1000. The pass score is 720 (Same as old exam roughly 70% is the pass score)
4) The validity of this exam is 2 years. Once you pass this exam this AWS Certified Solutions Architect Associate credential is valid for 2 years
5) This exam can be scheduled online from amazon link
6) Exam is available in many different languages including English, Simplified Chinese, Japanese, Korean
7) Exam registration fee is $150

Posted on

Creating database deployment on Oracle database cloud service different methods


There are many different methods to deploy databases on Oracle Database Cloud Service. The method to be used differs based on project requirements
1) Quickstart Template – This is the fastest and easiest method to deploy database
2) Create Instance Wizard – In this method of database deployment there is a way to customize all options
3) Cloud Backup of another database – This is common database deployment method when there is a need to deploy database that has data in existence. From DBA perspective this is most common method that you expect to use in your upcoming projects
4) Using Snapshots – This is creating clone of an existing database deployment
5) Hybrid DR – This is an interesting disaster recovery setup wherein the primary database of a Data Guard configuration resides on-premise and the standby databases are deployed in the cloud

Posted on

How Do I start Learning Cloud Computing and Developing applications in cloud


Cloud computing took off when Amazon launched its EC2 cloud platform around 2006 and branded its product offering as Amazon Web Services, with certification exams at associate and professional levels
Cloud computing was expected to be the next hottest niche from 2010 onwards, so many developers are interested in learning cloud computing. They are not sure what software to purchase or how to start developing applications using the cloud.
Given here is our recommendation on how to start with developing applications in cloud. Users may choose their preferred method. Force.com is the cloud computing platform from Salesforce
1) Visit www.force.com
2) Force.com is a part of the www.salesforce.com ecosystem. Salesforce grew as a billion-dollar company with their CRM (Customer Relationship Management) solution being deployed in cloud. Basic concept being, CRM solution from Salesforce will be deployed in a cloud. Customers can access the reports and track status using their PC and internet
3) Salesforce has come out with a platform for developers to develop and deploy applications in the cloud. This platform is called www.force.com
4) Click on Free Force.com on the right-hand side
5) Fill in the form and receive a password from them
6) Log on to your email account. Click on the link sent by salesforce.com. Reset your password
7) Log on to developer.force.com using your user account (email ID) and password
8) Click on Setup link in the upper right corner of the webpage
9) Click on Next
10) Now click on the Create navigation tab in the left-hand side corner of the webpage
11) Click on Apps
12) We have started creating our first cloud application in the www.force.com platform
Make use of Google App Engine to develop applications in google cloud

Posted on

What does unix system administrator do on daily basis?


Have you ever wondered what is expected of a UNIX (or Linux) system administrator on a day-to-day basis? Here are the job duties and responsibilities of a Linux system administrator:
1) Apply systems analysis techniques to determine functional specifications to meet system and networking business needs – Typically linux admin will be part of many infrastructure meetings starting with capacity planning wherein project needs are discussed in detail. This is the functional specification gathering point that they need to translate to server needs. Some firms make use of capacity planning software in which the linux admins are given appropriate access. They need to input the server specifications needs starting from project initiation phase including requirement on size and type of servers, operating systems to be installed in servers, disk configurations to be made etc
As per the latest trend, all major firms are evaluating the possibility of deploying their servers in cloud infrastructures like Amazon Web Services, Google Cloud Platform (GCP), Microsoft Azure, Rackspace etc. as a cost cutting measure, so that infrastructure design and deployment is done in the form of Infrastructure as a Service (IaaS) rather than the traditional on-site datacenter model. Hence, Linux admins can start equipping themselves by preparing for the AWS Certified Solutions Architect Associate certification exam with a SysOps specialization focusing on AWS system administration without any delay. We at learnersreference.com can support your AWS certification exam preparation needs
2) Design, develop, document, create, test, and modify system programs to meet enterprise needs
3) Be highly skilled and proficient in both the theoretical and practical aspects of server administration
4) Establish and manage server and monitoring infrastructure – monitoring tools like HP glance, linux level command line programs like sar, top, cpustat, vmstat, iostat are used extensively on day to day basis
5) Configure UNIX servers, which can be Linux flavors like RHEL, Oracle Linux, Ubuntu Linux, or UNIX variants like HP-UX and Solaris
6) Configure, build and deploy applications and patches to the servers – This is a major job that takes much of the admin's time. Now, with cloud infrastructure like AWS in place, this can be off-loaded to the AWS team
7) Monitor server resources such as CPU, IO, and disk to understand current resource requirements and anticipate growth needs using server tools like top, sar, vmstat, cpustat, memstat
8) Must have command of basic and advanced TCP/IP networking concepts – Though the system admin does not need to know a lot of networking, basic commands like ifconfig, traceroute, ping etc. come handy when a server is inaccessible. The same applies in cloud infrastructure as well
9) Troubleshoot networking issues on Linux servers, Solaris servers, and UNIX flavored machines. Additional knowledge is needed if the server happens to be AIX, IBM's UNIX
10) Setup and manage SAN/NAS systems as well as backups. Most mid-level enterprises don't have a storage team or storage admins, so the Linux administrator needs to take care of NAS/SAN storage as well as the backup infrastructure. The storage overhead is eliminated once a project is fully deployed in the cloud. However, with their first project firms prefer to retain data in the local datacenter as well as in AWS
11) Setup and manage Linux Server administration tasks including fixing broken disks by running fsck the disk check commands, create users, groups, grant roles appropriately, manage reports from monitoring infrastructure, perform backup restore and recovery of servers etc
12) Plan and test business continuity in the form of disaster recovery testing on a quarterly basis
13) Revoke access from user accounts once a user leaves the organization. Unix admins receive regular emails from the HR department on this
14) Maintain and manage test, QA, UAT, and DEV servers in addition to production servers
15) Work closely with DBAs and grant them access to storage LUNs on an as-needed basis. Also, some database upgrades and patching demand a server reboot; Linux admins are involved in these tasks
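The day-to-day monitoring in points 4 and 7 can be sketched as a tiny health-check script. This is a minimal sketch: vmstat and iostat ship with the procps/sysstat packages and may be absent, so those calls are guarded.

```shell
#!/bin/sh
# Quick server health check using the stock tools mentioned above.
uptime                                                      # load averages
df -h /                                                     # root filesystem usage
free -m 2>/dev/null || true                                 # memory in MB (procps)
command -v vmstat >/dev/null 2>&1 && vmstat 1 2 || true     # two 1-second CPU/memory samples
command -v iostat >/dev/null 2>&1 && iostat -x 1 2 || true  # per-device IO stats
```

In practice admins wire scripts like this into cron or the monitoring infrastructure rather than running them by hand.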

Posted on

Google Cloud Certified Professional Cloud Architect Practice Exam


Google Cloud Certified Professional Cloud Architect Practice Exam helps you prepare for the Professional Cloud Architect certification exam and familiarizes you with the type of questions you are likely to encounter in the real certification exam. This practice exam will be your best preparation aid, as it covers a wide range of topics that will help you pass the Google Professional Cloud Architect certification exam
1) What are the methods by which you can create VM instances in the Google Cloud environment?
a) Using gcloud
b) Using Google Cloud Console Platform
c) Using App Engine
d) None of the above
Answer : a,b
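A hedged sketch of option a: creating an instance with gcloud. The instance name demo-vm, the zone, and the machine type are made-up examples, and the command assumes an authenticated gcloud SDK with a configured project.

```shell
# Illustration only - requires the gcloud SDK; names below are invented.
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute instances create demo-vm \
      --zone=us-central1-c \
      --machine-type=e2-micro
  created=yes
else
  echo "gcloud CLI not installed - command shown for illustration only"
  created=no
fi
```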
2) You have created an instance in the us-central1-c zone. You want to attach a persistent disk from the us-east1-b zone for faster access. How will you do that?
a) Make use of google cloud console
b) make use of gcloud shell
c) it is not permissible to attach persistent disks to a VM instance from a different zone
d) make use of kubernetes engine
Answer : c
Explanation : VM instances and their corresponding persistent disks must reside in the same zone of the same region
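The zonal constraint shows up directly on the command line. This is illustration only, assuming the gcloud SDK and the made-up names demo-disk/demo-vm; note that both commands must name the same zone.

```shell
# Illustration only - requires the gcloud SDK; names below are invented.
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute disks create demo-disk --zone=us-central1-c --size=10GB
  # attach-disk only works because disk and VM share the us-central1-c zone
  gcloud compute instances attach-disk demo-vm --disk=demo-disk --zone=us-central1-c
  attached=yes
else
  echo "gcloud CLI not installed - commands shown for illustration only"
  attached=no
fi
```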
3) What significance does choosing the correct location for your bucket offer your project? Which among the following are properly balanced by creating the bucket in the correct location?
a) latency
b) correct bandwidth cost for application
c) availability
d) correct bandwidth cost for users
e) hardware failover
Answer : a,b,c
4) What are the two types of locations in which you can create buckets?
a) Zonal
b) Regional
c) Multi-regional
d) Local
Answer : b,c
5) You are creating your bucket. What are all the properties that you specify when you create your bucket?
a) bucket location in which the object data of this bucket resides
b) default storage class
c) globally unique name
d) locally unique name
Answer : a,b,c
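The three required properties map directly onto gsutil's bucket-creation flags. A sketch, assuming the gsutil CLI is installed and authenticated; the bucket name is a made-up example and would need to be globally unique.

```shell
# Illustration only - requires gsutil; the bucket name is invented.
if command -v gsutil >/dev/null 2>&1; then
  # -l = location, -c = default storage class, URL = globally unique name
  gsutil mb -l US -c STANDARD gs://example-unique-bucket-name-48151623/
  made=yes
else
  echo "gsutil not installed - command shown for illustration only"
  made=no
fi
```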
6) You have designed a web application that will be deployed on Google Cloud Platform. Your application makes use of websockets and HTTP sessions that are not distributed across the web servers. For proper running of the application, what should you do?
a) Make use of HTTP load balancer in GCP. HTTP load balancer in GCP handles websocket traffic natively. This helps with scaling and availability
b) Make use of HTTP streaming
c) Make use of monolithic user session service
d) Don’t make use of websocket and communicate the same to security team
Answer : a
7) You are migrating your enterprise applications to GCP. Your organization is closely monitored by a security team that wants visibility into all the projects in the organization. You are the lead of the GCP project and have assigned the org admin role to yourself in Google Cloud Resource Manager. What IAM roles in Google Cloud will you assign to the security team to make sure no changes are made to any of these applications?
a) Org admin, project viewer
b) Org viewer, project admin
c) Org viewer, project viewer
d) org viewer, project owner
Answer : c
Explanation : These roles offer read-only access to every application project. The security team can't make any changes to the applications
8) What steps can you take to reduce the impact of rollbacks that impact the application owing to erroneous production deployments?
a) Adopt a green-blue deployment model
b) Improve QA processes
c) Make use of failover systems
d) Simplify process by fragmenting monolithic platform into microservices
Answer : a,d
Explanation : The green-blue deployment approach keeps two systems, one green and one blue, that are 100% identical in terms of hardware configuration and software version. Green is the live production system; blue is in passive status. Once changes are made to the application, they are deployed and thoroughly tested on the blue system, which is then made the new production
As far as simplification goes, implement changes little by little to avoid errors
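The flip between the two environments can be mimicked locally with a symlink swap, a toy stand-in for the load-balancer cutover; the directory and file names here are invented purely for illustration.

```shell
# Toy green-blue switch: "live" is a symlink flipped atomically between
# two release directories standing in for the two identical systems.
cd "$(mktemp -d)"                    # work in a scratch directory
mkdir -p green blue
echo "app v1" > green/app            # green: current production build
echo "app v2" > blue/app             # blue: new build under test
ln -sfn green live                   # green is serving traffic
cat live/app                         # -> app v1
ln -sfn blue live                    # after testing, blue becomes production
cat live/app                         # -> app v2
```

If the new build misbehaves, rollback is just flipping the link back, which is why the impact of rollbacks shrinks.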
9) You work for a healthcare client who stores patient data that is personally identifiable information (PII) in nature and very sensitive. They have made a decision to move their storage to Google's storage solution, Google Cloud Storage (GCS). How will you make this secure storage in GCS?
a) Once the objects are moved, grant appropriate IAM roles to users who access data in buckets
b) Access to objects in buckets should be via granular ACLs rather than Google Cloud IAM roles granted to users
c) The objects need to be accessed with signed URLs
d) Grant users a private owner IAM role on objects in buckets
Answer : b
Explanation : Granular ACLs offer the least privilege required to access the data, so unauthorized data changes can't be made. This avoids the threat
10) Your company has made a decision to migrate its data onto a Google Cloud service that comes with a proper SQL interface. Where will you load the data for optimal storage and ease of analysis?
a) Load into google cloud datastore
b) Load into google bigquery
c) Load into flatfiles
d) Create buckets and put the data in zipped format onto google cloud storage GCS
Answer : b
11) Your company operates in three regions across the globe: one in Asia, one in Europe, one in Australia. You run a web cluster on Google Container Engine in each of the 3 regions. These three Google Container Engine clusters are balanced using a global load balancer. You want to automatically and simultaneously deploy new code onto all of these Google Container Engine web clusters. What should you do to accomplish this?
a) This can be deployed in cluster federated mode
b) Make use of automated deployment tools like Jenkins
c) Make use of shell scripting with rsh
d) Using ssh and scripts
Answer : b
Explanation : For deploying the changes automatically and simultaneously, make use of automation tools like Jenkins
12) You have a Linux production machine in one region that you want to copy and deploy in a different region. The copy is deployed as a new instance in the new region. How can you accomplish this?
a) Make use of the linux copy command cp to create an image copy and deploy it in the new region
b) Make use of the linux xcopy command xcp to create an image copy and deploy it in the new region
c) Make use of the gclone command to create an image copy, create a new virtual machine instance, and deploy it in the new region
d) Using the linux dd command, create an image file from the root disk of the virtual machine, create a new disk from this image file, and use it to create a new virtual machine instance in the new region
Answer : d
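Option d can be tried at miniature scale, with a small file standing in for the root disk. Only dd and cmp are used here; the cloud-side steps are noted as comments since they need a real project, and the file names are invented.

```shell
# dd imaging in miniature: a 4 KB file stands in for the root disk.
cd "$(mktemp -d)"                                          # scratch directory
dd if=/dev/zero of=root.disk bs=1024 count=4 2>/dev/null   # stand-in "root disk"
dd if=root.disk of=root.img bs=1024 2>/dev/null            # raw image copy
cmp root.disk root.img && echo "image matches source"      # verify byte-identical
# Real flow: dd the VM's root disk to an image file, create a GCP image
# from it (gcloud compute images create), then launch a new VM instance
# from that image in the target region.
```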

Posted on

Google Cloud Shell First Login Experience


With the advent of cloud, and the technology war heating up between AWS from Amazon and Google Cloud Platform, popularly called GCP, from the giant Google, I wanted to explore both platforms in parallel. As a first experience I created my free GCP account
1) Logged into Gmail
2) Launched Google Cloud and created a free account
3) I provided my credit card details. The page said that once the free trial ends there will not be any automated billing charge. I believe that once I surpass the free billing limits I will get a warning email from Google before they make a decision to delete my resources. I hope to continue with Google and blog more details on my experience with you all
4) As a starting free credit I could see $300 in my billing account
5) In the top panel I found a little icon that said Google Shell. Once I clicked on that icon, it opened a pop-up that read this is a Debian-based Linux OS. I activated it and tried some Linux commands. I remember the good old days when I was struggling with a Fedora Core installation, configuring dual boot on my laptop. That was in November 2007. After that came Ubuntu, which could be launched as a desktop app from within Windows somewhere around 2010, if my memory holds good. Now, it is cloud nine to know that Linux has been made a thin client and can be accessed via web views for free. All it needs is a Gmail account, which is 100% free. Here are some commands I tried in Google Shell today. My default project has been given the name elaborate-howl-178922. Here is how it goes
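Along the lines of what I tried, here are a few first-login commands. The gcloud call is guarded since it needs the SDK, and outside Cloud Shell your project name will of course differ from elaborate-howl-178922.

```shell
# First commands in a fresh Cloud Shell (Debian-based Linux).
uname -a          # kernel and distribution info
whoami            # cloud shell login user
df -h ~           # home directory usage (Cloud Shell gives persistent storage)
command -v gcloud >/dev/null 2>&1 && gcloud config list || true   # active project
```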

Posted on

AWS big data specialty certification


1) You have planned to set up a Redshift cluster to support an upcoming project. You are looking for a way to reduce the total cost of ownership. How can you achieve this?
a) Ephemeral S3 buckets
b) Encryption algorithms
c) Compression algorithms
d) All of the above
Answer: b,c
2) You are making use of the COPY command to load files onto Redshift. Will a Redshift manifest allow you to load files that do not share the same prefix?
a) Yes
b) No
Answer: a
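A manifest is just a JSON file listing explicit S3 object URLs, which is why the files need not share a key prefix. The bucket, file, and role names below are made up; the COPY statement is shown as a comment for illustration only.

```shell
# Build a Redshift manifest listing files from two different prefixes.
cd "$(mktemp -d)"                    # scratch directory
cat > files.manifest <<'EOF'
{
  "entries": [
    {"url": "s3://bucket-a/2017/sales.csv", "mandatory": true},
    {"url": "s3://bucket-b/archive/old_sales.csv", "mandatory": true}
  ]
}
EOF
# COPY would then reference the manifest (illustration only):
#   COPY sales FROM 's3://bucket-a/files.manifest'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopy' MANIFEST;
```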
3) Why are single-row inserts slower with Redshift?
a) Owing to row nature of redshift
b) Columnar nature of Redshift
c) Tabular nature of redshift
d) All of the above
Answer: b
4) You are in the process of creating a table in AWS DynamoDB. Which among the following must be defined during table creation? What are the required definition parameters?
a) The Table Name
b) RCU (Read Capacity Units)
c) WCU (Write Capacity Units)
d) DCU (Delete/Update Capacity Units)
e) The table capacity number of GB
f) Partition and Sort Keys
Answer: a,b,c,f
5) Does Amazon Redshift offer enhanced support for viewing external Redshift Spectrum tables?
a) Yes
b) No
Answer: a
6) Will Amazon Machine Learning integrate directly with Redshift using the COPY command?
a) Yes
b) No
Answer: b
7) You have been asked to build custom applications that process or analyze streaming data for specialized needs. Which AWS Service will you make use of to accomplish this?
a) Amazon Kinesis Streams
b) Amazon Kinesis Analytics
c) AWS Lambda
d) Amazon Spark
Answer: a
8) Can Federated Authentication with Single Sign-On be used with amazon redshift?
a) Yes
b) No
Answer: a
Explanation : This is a new feature possible with Redshift, based on a press release from AWS on August 11th, 2017
9) How will you isolate amazon redshift clusters and secure them?
a) Amazon VPC
b) Amazon KMS
c) Server side encryption
d) all of the above
Answer: a
10) How long does each Kinesis Firehose delivery stream store data records in case the delivery destination is unavailable?
a) 12 hours
b) 24 hours
c) 48 hours
d) 72 hours
Answer: b
11) What is a shuffle phase in hadoop ecosystem?
a) Process of transferring data from reducers back to mappers
b) Process of transferring data from mappers to reducers
c) None of the above
Answer: b
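The shuffle can be mimicked with the classic Unix word-count pipeline: tr acts as the mapper (emitting one key per line), sort performs the shuffle by grouping identical keys together, and uniq -c reduces each group to a count.

```shell
# map (tr) -> shuffle (sort) -> reduce (uniq -c), word-count style
echo "the cat and the hat" | tr ' ' '\n' | sort | uniq -c | sort -rn
```

The grouping step in the middle is exactly what the shuffle phase does between mappers and reducers in Hadoop.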
12) Your security team has made it a mandate to encrypt all data before sending it to S3, and you will have to maintain the keys. Which encryption option will you choose?
a) SSE-KMS
b) SSE-S3
c) CSE-Custom
d) CSE-KMS
Answer: c
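A local sketch of the client-side idea using openssl (assuming OpenSSL 1.1.1+ for the -pbkdf2 flag, hence the guard): data is encrypted with a key only you manage before any upload, so S3 would only ever see ciphertext. The passphrase here is a stand-in for your own key management.

```shell
# Client-side encryption sketch: encrypt locally before any S3 upload.
cd "$(mktemp -d)"                    # scratch directory
if command -v openssl >/dev/null 2>&1; then
  echo "patient record 42" > plain.txt
  openssl enc -aes-256-cbc -pbkdf2 -salt \
      -in plain.txt -out cipher.bin -pass pass:my-local-key
  # cipher.bin is what would be uploaded; decryption stays on the client
  openssl enc -d -aes-256-cbc -pbkdf2 \
      -in cipher.bin -pass pass:my-local-key > roundtrip.txt
else
  echo "openssl not installed - commands shown for illustration only"
fi
```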
13) Client-Side Encryption with KMS-Managed Keys aka CSE-KMS is used by EMR cluster. How is key managed in this case?
a) S3 uses a customer master key that is managed in the Key Management Service to encrypt and decrypt the data before saving it to an S3 bucket
b) S3 uses a server generated key that is managed in the Key Management Service to encrypt and decrypt the data before saving it to an S3 bucket
c) EMR cluster uses a customer master key to encrypt data before sending it to Amazon S3 for storage and to decrypt the data after it is downloaded
d) All of the above
Answer: c
14) You have to create a visual that depicts one or two measures for a dimension. Which one will you choose?
a) Heat Map
b) Tree Map
c) Pivot Table
d) Scatter Plot
Answer: b
15) Your developers are fluent in Python and are comfortable with tools that integrate with Python. Which open-source tool will you, as a business analyst, recommend to be used for this project?
a) Jupyter Notebook
b) Hue
c) Ambari
d) Apache Zeppelin
Answer: a
16) What is SPICE in QuickSight?
a) SPICE is QuickSight’s Super-fast, Parallel, In-memory Calculation Engine
b) SPICE is QuickSight’s Super-fast, Parallel, In-memory analytical Engine
c) Not related to QuickSight
Answer: a
17) You have set up Hadoop encrypted shuffle. Which protocol makes Mapreduce Shuffle possible?
a) TCP/IP
b) HTTP
c) HTTPS
d) VPN
Answer: c
18) You own an apparel business that supplies and sells across many global regions. At the start of the fiscal year, revenue goals are set at the region level. You work in the finance and marketing department. Your manager asks you to produce a visual that uses rectangle sizes and colors to show which regions have the highest revenue goals. Which visual type will you go for to satisfy this requirement?
a) Scatter Plot
b) Pivot Table
c) Tree Map
d) Heat Map
Answer : c
19) What is the most effective way to merge data into an existing table?
a) Use a staging table to replace existing rows or update specific rows
b) Execute an UPSERT
c) Execute an UPSERT without index
d) Execute an UPSERT with index
Answer: a
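The staging-table pattern behind answer a can be sketched as SQL, written to a file here purely for illustration; Redshift classically lacks a single-statement UPSERT, so the replace happens inside one transaction. The table, bucket, and role names are made up.

```shell
# Write out the staging-table merge pattern (illustration only).
cd "$(mktemp -d)"                    # scratch directory
cat > merge_via_staging.sql <<'EOF'
BEGIN;
CREATE TEMP TABLE staging (LIKE sales);
COPY staging FROM 's3://bucket/new_sales.csv'
  IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopy' CSV;
DELETE FROM sales USING staging WHERE sales.id = staging.id;  -- drop rows being replaced
INSERT INTO sales SELECT * FROM staging;                      -- insert fresh versions
DROP TABLE staging;
COMMIT;
EOF
```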
20) What does the F1 score signify?
a) better concurrency
b) better predictive accuracy
c) better analytical accuracy
d) None of the above
Answer: b
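F1 is the harmonic mean of precision and recall, which is why a higher F1 signifies better predictive accuracy. A quick computation with made-up precision/recall values:

```shell
# F1 = 2 * precision * recall / (precision + recall)
awk 'BEGIN { p = 0.8; r = 0.6; printf "F1 = %.4f\n", 2 * p * r / (p + r) }'
# -> F1 = 0.6857
```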
