This article is the Day 16 entry of the Ansible Advent Calendar 2019.
It explains what you need to do to make a Linux machine that was originally set up without Ansible in mind "Ansible ready".
There is a wealth of information about building servers and networks with Ansible, and plenty of ready-made environments where "you can run Ansible immediately", but I felt (personally) that there was little information on "how to start using Ansible against an existing environment (especially on-premises)", so I have summarized it here.
In other words, this article is not about "how to start studying Ansible" but about "what to prepare when you start building servers with Ansible".
- If you want to study Ansible itself, see: [Beginners welcome] If you want to get started with Ansible, let's join the Ansible Mokumokukai!
- If you want an Ansible environment you can try immediately, see: [VM construction using Vagrant specialized for an Ansible verification environment: SSH public key authentication settings / shared folders between multiple VMs](https://qiita.com/zaki-lknr/items/cdf4eac2d2f2020ac7be)
The examples below assume CentOS.
hostname | IP address | role |
---|---|---|
control-node | 192.168.0.140 | Control node |
target-node01 | 192.168.0.141 | Target node |
target-node02 | 192.168.0.142 | Target node |
The host that executes Ansible commands (such as `ansible` and `ansible-playbook`) is called the control node, and the host that the control node operates on with Ansible is called the target node.
- What you need on the control node
- What you need on the target node
This article also assumes that the working (non-root) user name and its password are the same on the control node and all target nodes.
On CentOS 7, the standard yum repository provides Ansible 2.4, which is rather old (the latest is 2.9 as of December 2019).
$ yum info ansible
:
:
Available Packages
Name        : ansible
Arch        : noarch
Version     : 2.4.2.0
Release     : 2.el7
Size        : 7.6 M
Repo        : extras/7/x86_64
Summary     : SSH-based configuration management, deployment, and task
:
:
Therefore, on CentOS 7 it is easiest to install the package from the EPEL repository.
$ sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ sudo yum install ansible
You should now have the latest stable version installed. To check the version, run `ansible --version`.
[zaki@control-node ~]$ ansible --version
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/zaki/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
[zaki@control-node ~]$
For other platforms such as RHEL and Debian/Ubuntu, see the [Ansible Installation Guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html).
You can also install it with `pip install ansible`.
As a quick check after installation, run the `ping` module against localhost to confirm that Ansible can execute against a target.
$ ansible localhost -m ping
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}
It is OK if `SUCCESS` is displayed.
Note that unlike the `ping` command, which uses ICMP, this module checks whether Ansible can run on the target rather than network reachability. Also, no SSH connection is needed for `localhost`.
Python
You can install Python with a package manager such as yum, but any popular distribution these days should include Python from the start.
Also, since Python is a requirement for installing Ansible, if you have Ansible installed, Python is already there.
You can check the Python version with:
$ python --version
On any major distribution, the SSH server should be running right after OS installation. For Ansible to work, the control node needs SSH access to the target nodes (i.e., an SSH server must be running on each target node).
Both password authentication and public key authentication can be used, but with password authentication or a passphrase-protected key you must type the password/passphrase on every run, so a private key without a passphrase is the most convenient and hassle-free choice for build work.
$ ssh-keygen -t rsa -f $HOME/.ssh/id_rsa -N ""
This creates a key pair with no passphrase: the private key at `~/.ssh/id_rsa` and the public key at `~/.ssh/id_rsa.pub`.
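If you want to see what `ssh-keygen` produces before touching your real `~/.ssh`, you can try the same options against a temporary directory (a throwaway sketch; the temp directory is only for illustration):

```shell
# Create a passphrase-less RSA key pair in a temporary directory
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -f "$tmpdir/id_rsa" -N ""

# Two files are produced: the private key and the public key
ls "$tmpdir"

# Show the fingerprint of the generated public key
ssh-keygen -lf "$tmpdir/id_rsa.pub"

rm -rf "$tmpdir"
```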
After creating the key pair, distribute the public key to each target node.
$ ssh-copy-id 192.168.0.141    # target node 1
$ ssh-copy-id 192.168.0.142    # target node 2
You should now be able to SSH into each target node without entering a password.
[zaki@control-node ~]$ ssh 192.168.0.141
[zaki@target-node01 ~]$
[zaki@control-node ~]$ ssh 192.168.0.142
[zaki@target-node02 ~]$
With this setup, you should be able to reach the two target nodes with the `ping` module.
Create an inventory file with the following contents that defines the target node.
inventory.ini
[nodes]
192.168.0.141
192.168.0.142
Specify this inventory file with `-i` and run `ansible`. This time, instead of the `localhost` used earlier, specify the group name `nodes` defined in the first line of the inventory.
$ ansible nodes -i inventory.ini -m ping
192.168.0.142 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.0.141 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Now you can confirm that Ansible can be executed from the control node to the target node.
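Incidentally, an inventory can define more than one group, and the group name you pass to `ansible` selects which hosts are targeted. A hypothetical sketch (the group names `web` and `db` are made up for illustration; the IPs are this article's example hosts):

```ini
[nodes]
192.168.0.141
192.168.0.142

[web]
192.168.0.141

[db]
192.168.0.142
```

With this inventory, `ansible web -i inventory.ini -m ping` would target only 192.168.0.141, while `ansible nodes ...` still targets both.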
If you did not set up a passphrase-less public key, you can still run Ansible as follows.
Authentication method | Ansible operation |
---|---|
Password authentication | Add `-k` to get a password prompt |
Public key authentication with passphrase | The passphrase prompt appears automatically |
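If you prefer, the connection user (and, with care, the password) can also be recorded as inventory variables instead of being typed on every run. The variable names below are Ansible's standard connection variables; the user name matches this article's examples, and `examplepass` is a placeholder. Storing a plaintext password in an inventory is only acceptable for throwaway experiments; otherwise use `-k` or Ansible Vault.

```ini
[nodes]
192.168.0.141
192.168.0.142

[nodes:vars]
ansible_user=zaki
ansible_password=examplepass
```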
One more point.
When building a server with Ansible, most tasks (such as installing packages with `yum`) require root privileges.
If `sudo` is not available to the working user on the target node in the first place, configure sudoers for that user (for example, by adding the user to the `wheel` group).
Also, if you configure sudo not to require a password, you won't need to enter one when running Ansible, which is convenient and hassle-free.
/etc/sudoers
%wheel ALL=(ALL) NOPASSWD: ALL
Edit this file with `visudo`, not `vi` directly.
Do this on all target nodes. (If tasks will also run on the control node, set it there as well.)
Once that is set, add the `-b` option (become; escalates to root by default) when running Ansible.
$ ansible nodes -i inventory.ini -b -m ping
192.168.0.142 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.0.141 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
If no error occurs, tasks are being executed with root privileges on the target nodes.
However, the `ping` module doesn't make the privilege escalation visible, so if it is acceptable to run `yum update` on all nodes, try the following command, which updates every package.
$ ansible nodes -i inventory.ini -b -m yum -a "name=* state=latest"
If you did not configure passwordless sudo, add `-K` at runtime to be prompted for the sudo password and still run with root privileges.
It can be combined with `-k` for SSH password authentication.
So far I have explained installing Ansible and configuring SSH/sudo by hand, but since we are installing Ansible anyway, let's automate what can be automated. In particular, distributing the SSH public key to every target node and configuring sudo is not something you should keep doing manually.
- Ansible is installed on the control node (manually)
- The working user on the target nodes belongs to the `wheel` group and can escalate to root with sudo (password still required)
- The working user name and password are common to all nodes
- SSH connection with password authentication is possible, and Python is already installed
The remaining requirements, a passphrase-less SSH public key setup and passwordless sudo, are handled by Ansible.
ansible.cfg
I haven't covered `ansible.cfg` so far, so for now think of this as an incantation that skips the host key confirmation prompt on the first SSH connection.
ansible.cfg
[defaults]
host_key_checking = False
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
inventory.ini
[nodes]
192.168.0.141
192.168.0.142
playbook.yml
---
- hosts: localhost
  tasks:
    - name: create directory for ssh keypair
      file:
        path: "{{ lookup('env','HOME') }}/.ssh/"
        state: directory
        mode: 0700
    - name: create ssh privatekey
      openssh_keypair:
        path: "{{ lookup('env','HOME') }}/.ssh/id_rsa"

- hosts: all
  tasks:
    - name: publickey copy to target-nodes
      authorized_key:
        user: "{{ ansible_env.USER }}"
        key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"

- hosts: all
  become: True
  tasks:
    - name: configure non-password sudo
      copy:
        dest: /etc/sudoers.d/nopass
        mode: 0600
        content: |
          %wheel ALL=(ALL) NOPASSWD: ALL
Place these three files in the same directory and execute the following command.
$ ansible-playbook -i inventory.ini playbook.yml -kK
Don't forget `-k` (SSH password authentication) and `-K` (sudo escalation password): the passwordless setup does not exist yet, because it is exactly what this playbook configures.
The following values are resolved externally using the `lookup` plugin.
Variable | Contents |
---|---|
`{{ lookup('env','HOME') }}` | Home directory path of the executing user |
`{{ ansible_env.USER }}` | User name |
`{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}` | Contents of the file `~/.ssh/id_rsa.pub` |
With that in mind, each task in this playbook does the following.
(If you have already configured something like passwordless sudo, remove it first.)
- create directory for ssh keypair (runs locally)
  - Creates the `~/.ssh` directory
- create ssh privatekey (runs locally)
  - Creates an SSH key pair at `~/.ssh/id_rsa`
- publickey copy to target-nodes (remote access)
  - Copies `~/.ssh/id_rsa.pub` into `authorized_keys` on all target nodes
- configure non-password sudo (requires root privileges)
  - Sets sudo `NOPASSWD` on all target nodes
Easy, isn't it?
As noted in the [documentation](https://docs.ansible.com/ansible/latest/modules/openssh_keypair_module.html), the `openssh_keypair` module is new in version 2.8 and cannot be used with earlier Ansible versions.
If you are on an earlier version, that module cannot create the key, so replace the task with one that uses the `shell` module.
playbook.yml
- name: check exists ssh privatekey
  shell: test -f $HOME/.ssh/id_rsa
  register: exist_key
  ignore_errors: True
  check_mode: False
  changed_when: False
- name: create ssh privatekey
  shell: ssh-keygen -t rsa -f $HOME/.ssh/id_rsa -N ""
  when: exist_key.rc != 0
It is a bit longer, but there are two steps: "check whether `~/.ssh/id_rsa` exists" and "if not, run `ssh-keygen`".
On the first run, the check produces an error because `~/.ssh/id_rsa` does not exist yet, but `ignore_errors: True` ignores it and processing continues.
…by the way, doesn't `ssh-keygen` have a force-overwrite mode?
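As an alternative sketch, the `shell` module's `creates` argument (a standard option of the `command`/`shell` modules) expresses the same "skip if the file already exists" logic in a single task. Untested here, but it avoids the separate check task and `ignore_errors` entirely:

```yaml
- name: create ssh privatekey
  shell: ssh-keygen -t rsa -f $HOME/.ssh/id_rsa -N ""
  args:
    creates: "{{ lookup('env','HOME') }}/.ssh/id_rsa"
```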
That covers the concrete preparation steps (both manual and automated with Ansible) for building servers with Ansible.
With only a few machines you can probably manage by hand without mistakes, but once you reach 10 or 20 machines, automation is definitely the better choice.
Network automation and cloud environments differ somewhat, but for on-premises server construction with Ansible, this should be enough to get started.
As an aside: can the installation of Ansible itself be automated with Ansible?
Reference
- [openssh_keypair – Generate OpenSSH private and public keys — Ansible Documentation](https://docs.ansible.com/ansible/latest/modules/openssh_keypair_module.html) (available from version 2.8)