Q1.Install and configure Ansible on the control-node control.realmX.example.com as follows:
–> Install the required packages
–> Create a static inventory file called /home/admin/ansible/inventory as follows:
node1.realmX.example.com is a member of the dev host group
node2.realmX.example.com is a member of the test host group
node3.realmX.example.com & node4.realmX.example.com are members of the prod host group
node5.realmX.example.com is a member of the balancers host group.
prod group is a member of the webservers host group
–> Create a configuration file called ansible.cfg as follows:
–> The host inventory file /home/admin/ansible/inventory is defined
–> The location of roles used in playbooks is defined as /home/admin/ansible/roles
$ vim /home/admin/ansible/ansible.cfg
[defaults]
inventory = /home/admin/ansible/inventory
roles_path = /home/admin/ansible/roles
remote_user = admin
[privilege_escalation]
become = true
$ vim /home/admin/ansible/inventory
[dev]
node1.realmX.example.com
[test]
node2.realmX.example.com
[prod]
node3.realmX.example.com
node4.realmX.example.com
[balancers]
node5.realmX.example.com
[webservers:children]
prod
$ vim ~/.vimrc
filetype plugin indent on
autocmd FileType yaml setlocal ai et ts=2 sts=2 sw=2 cursorcolumn
$ ansible --version
$ ansible all -m ping
$ ansible webservers -m ping
$ ansible-inventory -i inventory --list
Q2.Create and run an Ansible ad-hoc command.
As a system administrator, you will need to install software on the managed nodes.
Create a shell script called yum-pack.sh that runs an Ansible ad-hoc command to create yum-
repository on each of the managed nodes as follows:
–> repository1
1. The name of the repository is EX407
2. The description is “Ex407 Description”
3. The base URL is http://content.example.com/rhel8.0/x86_64/dvd/BaseOS/
4. GPG signature checking is enabled
5. The GPG key URL is http://content.example.com/rhel8.0/x86_64/dvd/RPM-GPG-KEY-redhat-release
6. The repository is enabled
–> repository2
1. The name of the repository is EXX407
2. The description is “Exx407 Description”
3. The base URL is http://content.example.com/rhel8.0/x86_64/dvd/AppStream/
4. GPG signature checking is enabled
5. The GPG key URL is http://content.example.com/rhel8.0/x86_64/dvd/RPM-GPG-KEY-redhat-release
6. The repository is enabled
$ vim yum-pack.sh
#!/bin/bash
ansible all -m yum_repository -a 'name=EX407 description="Ex407 Description" baseurl=http://content.example.com/rhel8.0/x86_64/dvd/BaseOS/ gpgcheck=yes gpgkey=http://content.example.com/rhel8.0/x86_64/dvd/RPM-GPG-KEY-redhat-release enabled=yes'
ansible all -m yum_repository -a 'name=EXX407 description="Exx407 Description" baseurl=http://content.example.com/rhel8.0/x86_64/dvd/AppStream/ gpgcheck=yes gpgkey=http://content.example.com/rhel8.0/x86_64/dvd/RPM-GPG-KEY-redhat-release enabled=yes'
$ chmod +x yum-pack.sh
$ bash yum-pack.sh
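A quick ad-hoc check that both repositories are now visible on the managed nodes (a verification sketch, assuming yum-based managed nodes):

```shell
# List enabled repositories on every managed node
ansible all -m command -a 'yum repolist enabled'
```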
Q3.Create a playbook called packages.yml that:
–> Installs the php and mariadb packages on hosts in the dev, test, and prod host groups.
–> Installs the Development Tools package group on hosts in the dev host group.
–> Updates all packages to the latest version on hosts in the dev host group.
$ cat packages.yml
---
- name: Install php and mariadb on dev, test, and prod hosts
  hosts: dev,test,prod
  tasks:
    - name: Install php and mariadb packages
      yum:
        name:
          - php
          - mariadb
        state: present
- name: Install Development Tools and update dev hosts
  hosts: dev
  tasks:
    - name: Install the Development Tools package group
      yum:
        name: "@Development Tools"
        state: present
    - name: Update all packages to the latest version
      yum:
        name: "*"
        state: latest
$ ansible-playbook packages.yml --syntax-check
$ ansible-playbook packages.yml -C
$ ansible-playbook packages.yml
$ ansible dev,test,prod -m ansible.builtin.yum -a "name=php,mariadb state=present"
$ ansible dev,test,prod -m shell -a "rpm -qa | grep -e php -e mariadb"
Q4.Install the RHEL system roles package and create a playbook called timesync.yml that:
–> Runs over all managed hosts.
–> Uses the timesync role.
–> Configures the role to use the time server 192.168.10.254
–> Configures the role to set the iburst parameter as enabled.
$ mkdir -p /home/admin/ansible/roles
$ sudo yum install rhel-system-roles -y
$ cp -r /usr/share/ansible/roles/rhel-system-roles.timesync /home/admin/ansible/roles
$ ansible-galaxy role list
$ cat timesync.yml
---
- name: Configure time synchronization
  hosts: all
  vars:
    timesync_ntp_servers:
      - hostname: 192.168.10.254
        iburst: yes
    timesync_ntp_provider: chrony
  roles:
    - rhel-system-roles.timesync
$ ansible-playbook --syntax-check timesync.yml
$ ansible-playbook timesync.yml -C
$ ansible-playbook timesync.yml
$ ansible all -m command -a 'grep 192.168.10.254 /etc/chrony.conf'
Q4. [A] Install the RHEL system roles package and create a playbook called security.yml that:
–> Runs over all managed hosts.
–> Uses the selinux role.
–> Configures the role to selinux Enforcing and targeted
$ mkdir -p /home/admin/ansible/roles
$ sudo yum install rhel-system-roles -y
$ cp -r /usr/share/ansible/roles/rhel-system-roles.selinux /home/admin/ansible/roles
$ cat security.yml
---
- name: Configure SELinux enforcing with the targeted policy
  hosts: all
  vars:
    selinux_policy: targeted
    selinux_state: enforcing
  roles:
    - rhel-system-roles.selinux
$ ansible-playbook security.yml --syntax-check
$ ansible-playbook security.yml -C
$ ansible-playbook security.yml
$ ansible all -m command -a 'getenforce'
$ ansible all -m command -a 'cat /etc/selinux/config'
Q5.Create a role called apache in /home/admin/ansible/roles with the following requirements:
–> The httpd package is installed, enabled on boot, and started.
–> The firewall is enabled and running with a rule to allow access to the web server.
–> template file index.html.j2 is used to create the file /var/www/html/index.html with the output:
Welcome to HOSTNAME on IPADDRESS
where HOSTNAME is the fqdn of the managed node and IPADDRESS is the IP-Address of the managed node.
note: you have to create index.html.j2 file.
–> Create a playbook called httpd.yml that uses this role and the playbook runs on hosts in the webservers host group.
$ cd /home/admin/ansible
$ ansible-galaxy init roles/apache
$ vim roles/apache/tasks/main.yml
---
- name: Install httpd and firewalld packages
  yum:
    name:
      - httpd
      - firewalld
    state: latest
- name: Start and enable httpd
  service:
    name: httpd
    state: started
    enabled: yes
- name: Start and enable firewalld
  service:
    name: firewalld
    state: started
    enabled: yes
- name: Allow the http service through the firewall
  firewalld:
    service: http
    permanent: yes
    immediate: yes
    state: enabled
- name: Deploy /var/www/html/index.html from the template
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
  notify: restart_httpd
$ vim roles/apache/templates/index.html.j2
Welcome to {{ ansible_facts['fqdn'] }} on {{ ansible_facts['default_ipv4']['address'] }}
$ vim roles/apache/handlers/main.yml
---
- name: restart_httpd
  service:
    name: httpd
    state: restarted
$ vim httpd.yml
---
- name: Apply the apache role to the webservers group
  hosts: webservers
  roles:
    - apache
$ ansible-playbook httpd.yml --syntax-check
$ ansible-playbook httpd.yml -C
$ ansible-playbook httpd.yml
$ curl http://node3.realmX.example.com
Q6.Use Ansible Galaxy with a requirements file called /home/admin/ansible/roles/install.yml to download and install roles to /home/admin/ansible/roles from the following URLs:
http://classroom.example.com/role1.tar.gz The name of this role should be balancer
http://classroom.example.com/role2.tar.gz The name of this role should be phphello
$ vim roles/install.yml
---
- src: http://classroom.example.com/role1.tar.gz
  name: balancer
- src: http://classroom.example.com/role2.tar.gz
  name: phphello
$ ansible-galaxy install -r roles/install.yml -p roles
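A quick way to confirm both roles landed in the requested path (the absolute path assumes the /home/admin/ansible layout from Q1):

```shell
# List roles found under the given roles path
ansible-galaxy list -p /home/admin/ansible/roles
```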
Q7.Create a playbook called balance.yml as follows:
* The playbook contains a play that runs on hosts in balancers host group and uses the balancer role.
–> This role configures a service to loadbalance webserver requests between hosts in the webservers host group.
–> When implemented, browsing to hosts in the balancers host group (for example http://node5.example.com) should produce the following output:
Welcome to node3.example.com on 192.168.10.z
–> Reloading the browser should return output from the alternate web server:
Welcome to node4.example.com on 192.168.10.a
* The playbook contains a play that runs on hosts in webservers host group and uses the phphello role.
–> When implemented, browsing to hosts in the webservers host group with the URL /hello.php should produce the following output:
Hello PHP World from FQDN
–> where FQDN is the fully qualified domain name of the host. For example, browsing to http://node3.example.com/hello.php, should produce the following output:
Hello PHP World from node3.example.com
* Similarly, browsing to http://node4.example.com/hello.php, should produce the following output:
Hello PHP World from node4.example.com
$ vim balance.yml
---
- name: Gather facts from all hosts
  hosts: all
  tasks: []
- name: Apply the balancer role
  hosts: balancers
  roles:
    - balancer
- name: Apply the phphello role
  hosts: webservers
  roles:
    - phphello
$ ansible-playbook balance.yml --syntax-check
$ ansible-playbook balance.yml -C
$ ansible-playbook balance.yml
$ curl http://node4.realmX.example.com/hello.php
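Repeated requests to the balancer node should alternate between the two web servers (node5 is the balancers host in the Q1 inventory; the hostname is an assumption based on that):

```shell
# Each request should be answered by node3 and node4 in turn
for i in 1 2 3 4; do curl -s http://node5.realmX.example.com; done
```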
Q8.Create a playbook called web.yml as follows:
* The playbook runs on managed nodes in the dev host group
* Create the directory /webdev with the following requirements:
–> membership in the webdev group
–> regular permissions: owner=r+w+execute, group=r+w+execute, other=r+execute s.p=set group-id
* Symbolically link /var/www/html/webdev to /webdev
* Create the file /webdev/index.html with a single line of text that reads: “Development”
–> it should be available on http://servera.lab.example.com/webdev/index.html
$ vim web.yml
---
- name: Configure the web content directory on dev hosts
  hosts: dev
  tasks:
    - name: Ensure the webdev group exists
      group:
        name: webdev
        state: present
    - name: Create the /webdev directory
      file:
        path: /webdev
        state: directory
        group: webdev
        mode: '2775'
        setype: httpd_sys_content_t
    - name: Link /var/www/html/webdev to /webdev
      file:
        src: /webdev
        dest: /var/www/html/webdev
        state: link
        force: yes
    - name: Create /webdev/index.html with its content
      copy:
        content: "Development"
        dest: /webdev/index.html
        setype: httpd_sys_content_t
    - name: Allow the http service through the firewall
      firewalld:
        service: http
        permanent: yes
        immediate: yes
        state: enabled
    - name: Restart httpd
      service:
        name: httpd
        state: restarted
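The playbook can then be checked, run, and verified from the control node (node1 is a dev-group host in the Q1 inventory; the URL is an assumption based on that):

```shell
ansible-playbook web.yml --syntax-check
ansible-playbook web.yml
curl http://node1.realmX.example.com/webdev/index.html
```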
Q9.Create an Ansible vault to store user passwords as follows:
* The name of the vault is vault.yml
* The vault contains two variables as follows:
– dev_pass with value wakennym
– mgr_pass with value rocky
* The password to encrypt and decrypt the vault is atenorth
* The password is stored in the file /home/admin/ansible/password.txt
$ echo "atenorth" > password.txt
$ chmod 600 password.txt
$ ansible-vault create vault.yml --vault-password-file=password.txt
---
dev_pass: wakennym
mgr_pass: rocky
$ cat vault.yml
$ ansible-vault view vault.yml --vault-password-file=password.txt
Q10.Generate a hosts file:
* Download an initial template file hosts.j2 from http://classroom.example.com/hosts.j2 to
/home/admin/ansible/ Complete the template so that it can be used to generate a file with a
line for each inventory host in the same format as /etc/hosts:
172.25.250.9 workstation.lab.example.com workstation
* Create a playbook called gen_hosts.yml that uses this template to generate the file
/etc/myhosts on hosts in the dev host group.
* When completed, the file /etc/hosts on hosts in the dev host group should have a line for
each managed host:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.250.10 servera.lab.example.com servera
172.25.250.11 serverb.lab.example.com serverb
172.25.250.12 serverc.lab.example.com serverc
172.25.250.13 serverd.lab.example.com serverd
$ wget http://classroom.example.com/hosts.j2
$ vim hosts.j2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_facts']['default_ipv4']['address'] }} {{ hostvars[host]['ansible_facts']['fqdn'] }} {{ hostvars[host]['ansible_facts']['hostname'] }}
{% endfor %}
$ vim gen_hosts.yml
---
- name: Generate a hosts file for dev hosts
  hosts: all
  tasks:
    - name: Deploy the hosts template
      template:
        src: hosts.j2
        dest: /etc/myhosts
      when: inventory_hostname in groups['dev']
$ ansible-playbook gen_hosts.yml --syntax-check
$ ansible-playbook gen_hosts.yml -C
$ ansible-playbook gen_hosts.yml
$ ansible dev -m shell -a 'cat /etc/myhosts'
$ ansible node4.realmX.example.com -m shell -a ‘cat /etc/hosts’
$ ansible [hostname] -m setup >> abc.yml
Q11.Create a playbook called hwreport.yml that produces an output file called /root/hwreport.txt on all managed nodes with the following information:
–> Inventory host name
–> Total memory in MB
–> BIOS version
–> Size of disk device vda
–> Size of disk device vdb
Each line of the output file contains a single key-value pair.
* Your playbook should:
–> Download the file hwreport.empty from the URL http://classroom.example.com/hwreport.empty and
save it as /root/hwreport.txt
–> Modify with the correct values.
note: If a hardware item does not exist, the associated value should be set to NONE
$ ansible node1.realmX.example.com -m setup >> facts.yml
$ vim hwreport.yml
---
- name: Create a hardware report
  hosts: all
  tasks:
    - name: Download the empty report file
      get_url:
        url: http://classroom.example.com/hwreport.empty
        dest: /root/hwreport.txt
    - name: Add the inventory host name
      lineinfile:
        path: /root/hwreport.txt
        regexp: '^inventory_hostname'
        line: "inventory_hostname: {{ inventory_hostname }}"
    - name: Add the total memory in MB
      lineinfile:
        path: /root/hwreport.txt
        regexp: '^Total memory in Mb'
        line: "Total memory in Mb: {{ ansible_facts['memtotal_mb'] }}"
    - name: Add the BIOS version
      lineinfile:
        path: /root/hwreport.txt
        regexp: '^BIOS version'
        line: "BIOS version: {{ ansible_facts['bios_version'] }}"
    - name: Add the size of disk vda
      lineinfile:
        path: /root/hwreport.txt
        regexp: '^Size of disk vda'
        line: "Size of disk vda: {{ ansible_facts['devices']['vda']['size'] | default('NONE') }}"
    - name: Add the size of disk vdb
      lineinfile:
        path: /root/hwreport.txt
        regexp: '^Size of disk vdb'
        line: "Size of disk vdb: {{ ansible_facts['devices']['vdb']['size'] | default('NONE') }}"
$ ansible-playbook hwreport.yml --syntax-check
$ ansible-playbook hwreport.yml
$ ansible node1.realmX.example.com -m command -a 'cat /root/hwreport.txt'
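The default('NONE') filter supplies NONE when a disk fact is missing (for example when vdb does not exist on a node). The equivalent lookup logic in plain Python, purely as an illustration (the facts dictionary here is a made-up example, not real setup-module output):

```python
facts = {"devices": {"vda": {"size": "10.00 GB"}}}

def disk_size(facts, dev):
    # Mirrors {{ ansible_facts['devices'][dev]['size'] | default('NONE') }}:
    # each missing level falls back, and the final default is 'NONE'.
    return facts.get("devices", {}).get(dev, {}).get("size", "NONE")

print(disk_size(facts, "vda"))  # 10.00 GB
print(disk_size(facts, "vdb"))  # NONE
```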
Q12.Modify file content.
Create a playbook called /home/admin/ansible/modify.yml as follows:
* The playbook runs on all inventory hosts
* The playbook replaces the contents of /etc/issue with a single line of text as follows:
–> On hosts in the dev host group, the line reads: “Development”
–> On hosts in the test host group, the line reads: “Test”
–> On hosts in the prod host group, the line reads: “Production”
$ vim modify.yml
---
- name: Replace the contents of /etc/issue
  hosts: all
  tasks:
    - name: Set Development on dev hosts
      copy:
        content: "Development"
        dest: /etc/issue
      when: inventory_hostname in groups['dev']
    - name: Set Test on test hosts
      copy:
        content: "Test"
        dest: /etc/issue
      when: inventory_hostname in groups['test']
    - name: Set Production on prod hosts
      copy:
        content: "Production"
        dest: /etc/issue
      when: inventory_hostname in groups['prod']
$ ansible-playbook modify.yml --syntax-check
$ ansible-playbook modify.yml -C
$ ansible-playbook modify.yml
$ ansible prod -m shell -a 'cat /etc/issue'
Q13.Rekey an existing Ansible vault as follows:
* Download Ansible vault from http://classroom.example.com/secret.yml to /home/admin/ansible/
* The current vault password is curabete
* The new vault password is newvare
* The vault remains in an encrypted state with the new password
Answer Q13.
$ pwd
/home/admin/ansible/
$ wget http://classroom.example.com/secret.yml
$ ansible-vault view secret.yml
vault password: *****
$ ansible-vault rekey secret.yml
vault password: *****
new vault password: *****
confirm new vault password: *****
$ ansible-vault view secret.yml
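The rekey can also run non-interactively with password files, which avoids typing at the prompts above (the file names here are assumptions, not part of the question):

```shell
# Write old and new vault passwords to files, then rekey in one step
echo "curabete" > oldpass.txt
echo "newvare" > newpass.txt
ansible-vault rekey --vault-password-file=oldpass.txt --new-vault-password-file=newpass.txt secret.yml
```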
Q14.Create user accounts
A list of users to be created can be found in the file called user_list.yml which you should download from http://classroom.example.com/user_list.yml and save to /home/admin/ansible/
* Using the password vault created elsewhere in this exam, create a playbook called create_user.yml that
creates user accounts as follows:
* Users with a job description of developer should be:
–> created on managed nodes in the dev and test host groups assigned the password from the dev_pass
variable a member of supplementary group devops.
* Users with a job description of manager should be:
–> created on managed nodes in the prod host group assigned the password from the mgr_pass variable
a member of supplementary group opsmgr
* Passwords should use the SHA512 hash format. Your playbook should work using the vault password file
created elsewhere in this exam.
$ cat create_user.yml
---
- name: Create developer users on dev and test hosts
  hosts: dev,test
  vars_files:
    - vault.yml
    - user_list.yml
  tasks:
    - name: Create the devops group
      group:
        name: devops
        state: present
    - name: Create developer users
      user:
        name: "{{ item.name }}"
        groups: devops
        password: "{{ dev_pass | password_hash('sha512') }}"
        state: present
      when: item.job == "developer"
      loop: "{{ users }}"   # list variable name as defined in user_list.yml
- name: Create manager users on prod hosts
  hosts: prod
  vars_files:
    - vault.yml
    - user_list.yml
  tasks:
    - name: Create the opsmgr group
      group:
        name: opsmgr
        state: present
    - name: Create manager users
      user:
        name: "{{ item.name }}"
        groups: opsmgr
        password: "{{ mgr_pass | password_hash('sha512') }}"
        state: present
      when: item.job == "manager"
      loop: "{{ users }}"   # list variable name as defined in user_list.yml
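The question requires the playbook to work with the vault password file created elsewhere in the exam; assuming the Q9 file /home/admin/ansible/password.txt, it can be checked and run with:

```shell
ansible-playbook create_user.yml --syntax-check --vault-password-file=password.txt
ansible-playbook create_user.yml --vault-password-file=password.txt
```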
Q15.Create Logical volumes with lvm.yml in all nodes according to following requirements.
* Create a new Logical volume named as ‘data’
* LV should be the member of ‘research’ Volume Group
* LV size should be 1500M
* It should be formatted with ext4 file-system.
–> If Volume Group does not exist then it should print the message “VG Not found”
–> If the VG can not accommodate 1500M size then it should print “LV Can not be created with
following size”, then the LV should be created with 800M of size.
–> Do not perform any mounting for this LV.
$ cat lvm.yml
---
- name: Create the data logical volume in the research volume group
  hosts: all
  tasks:
    - name: Report a missing volume group
      debug:
        msg: "VG Not found"
      when: ansible_lvm.vgs.research is not defined
    - name: Create the LV with 1500M when there is enough free space
      lvol:
        vg: research
        lv: data
        size: 1500m
      when:
        - ansible_lvm.vgs.research is defined
        - ansible_lvm.vgs.research.free_g | float >= 1.5
    - name: Report insufficient free space
      debug:
        msg: "LV Can not be created with following size"
      when:
        - ansible_lvm.vgs.research is defined
        - ansible_lvm.vgs.research.free_g | float < 1.5
    - name: Create the LV with 800M instead
      lvol:
        vg: research
        lv: data
        size: 800m
      when:
        - ansible_lvm.vgs.research is defined
        - ansible_lvm.vgs.research.free_g | float < 1.5
    - name: Format the LV with ext4
      filesystem:
        fstype: ext4
        dev: /dev/research/data
      when: ansible_lvm.vgs.research is defined
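A caveat on the free_g check: ansible_lvm reports free_g as a string (e.g. "0.80"), and comparing strings is lexicographic, so numeric conditions should cast with | float. A quick illustration in Python of why the raw string comparison misbehaves:

```python
# Lexicographic comparison goes character by character,
# so "10.00" sorts before "9.00" even though 10 > 9.
print("10.00" < "9.00")                  # True  (wrong for numbers)
print(float("10.00") < float("9.00"))    # False (correct)
```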
Q16.Create a playbook called cron.yml that configures a cron job for user natasha on all hosts. The job runs logger "EX294 is running" every 2 minutes.
$ vim cron.yml
---
- name: Configure a cron job for natasha
  hosts: all
  tasks:
    - name: Run logger every 2 minutes
      cron:
        name: logger
        minute: "*/2"
        user: natasha
        job: /usr/bin/logger "EX294 is running"
$ ansible-playbook cron.yml --syntax-check
$ ansible-playbook cron.yml
$ ansible all -m shell -a 'journalctl | grep running'
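Note the cron minute field: "*/2" fires every 2 minutes, while a literal "2" fires only at minute 2 of each hour. A small sketch of which minutes each field matches:

```python
# Minutes matched by the cron field "*/2" (step of 2 starting at 0)
every_two = [m for m in range(60) if m % 2 == 0]
# Minutes matched by the literal field "2"
literal_two = [2]
print(len(every_two), every_two[:5])  # 30 [0, 2, 4, 6, 8]
print(literal_two)                    # [2]
```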