
Monday, 19 August 2024

GCP+ansible+gitlab-ci : OS-login for ansible with dynamic google compute inventory and gitlab-ci

"How to configure OS Login in GCP for Ansible" + ansible dynamic inventory for GCP compute + update on accepted cryptographic algo for SSH key + gitlab-ci example

Context: 

2024 update on https://alex.dzyoba.com/blog/gcp-ansible-service-account/ => thanks! This still works, but ssh-keygen by default doesn't produce keys that are accepted by GCP OS Login.

+ clarifications that took me too long to infer, understood after reading https://cloud.google.com/compute/docs/connect/create-ssh-keys => the SSH keys need to be ssh-rsa!

+ ansible dynamic inventory configuration (as of 2024-aug)

+ automation example with gitlab-ci (as of 2024-aug)


Note: For the sake of completeness, ease of reproduction by the reader, and leveraging the CC-BY-SA license, I include the relevant sections of the great blog post cited as [1] and add my contributions where I needed to.

Service account

(cf. [1] - no change so far) OS Login allows SSH access for IAM users - there is no need to provision Linux users on an instance.

So Ansible should access the instances as an IAM user. This is accomplished via an IAM service account.

You can create a service account via the Console (web UI), via a Terraform template or (as in this case) via gcloud:

$ gcloud iam service-accounts create ansible-sa \
     --display-name "Service account for Ansible"
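
(not in [1], just a sanity check) If you want to confirm the account exists before going further, gcloud can list it; the displayName filter below matches the display name we just set:

$ gcloud iam service-accounts list \
    --filter='displayName:Ansible'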

Configure OS Login

(cf. [1,5,6] - no change so far)
Now for the trickiest part: configuring OS Login for the service account.

 

0. Enable OS Login for all VMs in the project

Before you do anything else, make sure to enable it for your project:

$ gcloud compute project-info add-metadata \
    --metadata enable-oslogin=TRUE
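
(not in [1]) Note that this enables OS Login project-wide; if you prefer to scope it to a single VM, the same metadata key can be set per instance (the VM name and zone below are placeholders):

$ gcloud compute instances add-metadata YOUR_VM \
    --zone=YOUR_ZONE \
    --metadata enable-oslogin=TRUE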

1. Add roles

A fresh service account doesn't have any IAM roles, so ansible-sa doesn't have permission to do anything. To allow OS Login we have to add these 4 roles to the Ansible service account:
  • Compute Instance Admin (beta)
  • Compute Instance Admin (v1) => login as root (for "become: true" with ansible)
  • Compute OS Admin Login =>  login as a regular user
  • Service Account User

Here is how to do it via gcloud:

for role in \
    'roles/compute.instanceAdmin' \
    'roles/compute.instanceAdmin.v1' \
    'roles/compute.osAdminLogin' \
    'roles/iam.serviceAccountUser'
do
    gcloud projects add-iam-policy-binding \
        my-gcp-project-241123 \
        --member='serviceAccount:ansible-sa@my-gcp-project-241123.iam.gserviceaccount.com' \
        --role="${role}"
done
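
(not in [1], optional check) To verify the bindings landed, you can flatten the project IAM policy and filter on the service account:

$ gcloud projects get-iam-policy my-gcp-project-241123 \
    --flatten='bindings[].members' \
    --filter='bindings.members:ansible-sa@my-gcp-project-241123.iam.gserviceaccount.com' \
    --format='table(bindings.role)'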


2. Create a key for the service account and save it

A service account is useless without a key. Create one with gcloud and save it; we will use it for the dynamic inventory connection further down.
Note that this creates a GCP key, not an SSH key.
$ gcloud iam service-accounts keys create \
    .gcp/gcp-key-ansible-sa.json \
    --iam-account=ansible-sa@my-gcp-project.iam.gserviceaccount.com 
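
(optional check) A quick way to confirm the downloaded JSON is the right one is to read its client_email field (assuming jq is installed):

$ jq -r .client_email .gcp/gcp-key-ansible-sa.json
ansible-sa@my-gcp-project.iam.gserviceaccount.com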


3. Create SSH key for service account 

(cf. [1], modified with the command from [2])

This was supposed to be the easiest part: $ ssh-keygen -f ssh-key-ansible-sa

By default ssh-keygen produces an ssh-ed25519 key, leading to a Permission denied (publickey) when trying to connect with Ansible and OS Login (but no issue when connecting directly from the command line). Specify `-t rsa` when creating your SSH keys.

$ ssh-keygen -t rsa -f ssh-key-ansible-sa -b 2048
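
(optional check) You can confirm the generated key really is RSA before uploading it; the last field of the fingerprint line should read (RSA):

$ ssh-keygen -l -f ssh-key-ansible-sa.pub
2048 SHA256:<fingerprint> <comment> (RSA)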


4. Add SSH key for OS login to service account

(cf. [1] - no change) Now, to allow the service account to access instances via SSH, it needs an SSH public key added to it. To do this, first activate the service account in gcloud:
$ gcloud auth activate-service-account \
    --key-file=.gcp/gcp-key-ansible-sa.json

This command uses the GCP key we created in step 2.

Now we add the SSH key to the service account:

$ gcloud compute os-login ssh-keys add \
    --key-file=ssh-key-ansible-sa.pub
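
(optional check) You can list the keys attached to the currently active account to confirm the upload worked:

$ gcloud compute os-login ssh-keys list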

5. Switch back from the service account

(cf. [1] - no change)  

$ gcloud config set account your@gmail.com

 

Connecting to the instance with OS login 

(cf. [1] - with change to the command to bypass the ssh-agent)

Now that we have everything configured on the GCP side, we can check that it's working.

Note that you don't need to add the SSH key to the compute metadata; authentication works via OS Login. But this means that you need to know the special user name for the service account.

Find out the service account's unique ID:

$ gcloud iam service-accounts describe \
    ansible-sa@my-gcp-project.iam.gserviceaccount.com \
    --format='value(uniqueId)'
106627723496398399336


This ID is used to form the OS Login user name: it's sa_<unique_id>.
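
(not in [1], just a shell convenience) Since the user name is always sa_ plus the unique ID, you can build it in one go with the same describe command:

$ SA_ID=$(gcloud iam service-accounts describe \
    ansible-sa@my-gcp-project.iam.gserviceaccount.com \
    --format='value(uniqueId)')
$ echo "sa_${SA_ID}"
sa_106627723496398399336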

Here is how to use it to check that SSH access is working. Specify the RSA private key and, to bypass the ssh-agent, restrict authentication to the identity given on the command line (IdentitiesOnly=yes) rather than all the keys you may have loaded in your running ssh-agent:

$ ssh -o "IdentitiesOnly=yes" -i ssh-key-ansible-sa sa_106627723496398399336@10.0.0.44
...

Configuring Ansible

Ansible GCP static inventory

 (cf. [1] - no change for static inventory, more on dynamic inventory just after)

And for the final part: making Ansible work with it.

There is a special variable, ansible_user, that sets the user name for SSH when Ansible connects to the host.

In my case, I have a group gcp where all GCP instances are added, and so I can set ansible_user in group_vars like this:

# File inventory/dev/group_vars/gcp
ansible_user: sa_106627723496398399336
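
(optional check) With the static inventory in place, a quick ping confirms the whole chain (assuming the inventory/dev layout above and the RSA private key in the current directory):

$ ansible gcp -i inventory/dev -m ping --private-key ssh-key-ansible-sa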


(Thanks a lot to [1]'s author Alex Dzyoba; those previous steps helped me a lot in getting this far! Now, let's add the dynamic inventory and how I run it from gitlab-ci...)


Ansible GCP dynamic inventory

The compose section of the dynamic GCP inventory Ansible plugin (google.cloud.gcp_compute, cf. [3]) expects a Jinja2 expression; to pass a literal string directly, you need to quote it twice (outer quotes for YAML, inner quotes for Jinja2):

 ansible_user: "'sa_106627723496398399336'"

#example of inventory.gcp.yml
---
plugin: google.cloud.gcp_compute

projects:
 - your-gcp-project

auth_kind: serviceaccount
# must match `ansible_user` below, json file must be available
service_account_file: ./.gcp/gcp-key-ansible-sa.json

filters:
 - status = RUNNING

keyed_groups:
  - key: labels
    prefix: label
  - key: zone
    prefix: zone
  - key: project
    prefix: project
  - key: labels['component']
    prefix: component

compose:
 ansible_host:           networkInterfaces[0].accessConfigs[0].natIP
 ansible_user:           "'sa_106627723496398399336'"
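
(optional check) Before wiring this into CI, you can render the dynamic inventory locally; the plugin should resolve your running instances into the keyed groups defined above:

$ ansible-inventory -i inventory.gcp.yml --graph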

Configuring Ansible (ansible.cfg ssh_args)

Make sure to instruct Ansible to use the correct private key when connecting to the servers:
ssh_args = -i ssh-key-ansible-sa 

#file ansible.cfg :
[defaults]
;debug = true
roles_path = ./vendor-roles:./roles
collections_path = ./vendor-collections:./collections
stdout_callback = yaml
deprecation_warnings = True
host_key_checking = False
forks = 10
callbacks_enabled = timer, profile_tasks, profile_roles
;remote_user = sa_106627723496398399336
remote_tmp = /tmp

[ssh_connection]
pipelining = True
scp_if_ssh = True
ssh_args = -i ssh-key-ansible-sa
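
(alternative, not a requirement) If you would rather not hard-code the key in ssh_args, Ansible can also take the private key per run via its environment variable:

$ ANSIBLE_PRIVATE_KEY_FILE=ssh-key-ansible-sa ansible all -i inventory.gcp.yml -m ping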

Gitlab-ci example

Identify an image that has Ansible installed (and maintained); I used registry.gitlab.com/gitlab-org/gitlab-environment-toolkit:latest, but if you find another/lighter one, let me know!


GitLab CI secure files [4] are set from the GitLab project's configuration, in the CI settings; the documentation can be found at https://docs.gitlab.com/ee/ci/secure_files/index.html

  • Using the gitlab-ci secure files feature [4], store the RSA SSH private key (for ssh)
  • Using the gitlab-ci secure files feature [4], store the GCP JSON file for GCP authentication (for the dynamic inventory)
  • We gather the files in the prepare_ssh_ansible job


If you have multiple jobs using Ansible, factor the common setup out into a hidden job (here .prepare_ssh_ansible) and use extends in the others (here ssh-access, ansible-ping and ansible-inventory).

The example playbook, in this case, is located in ./ansible/playbooks/ssh.yml; replace it with your own playbook.


# file: .gitlab-ci.yml
variables:
  ANSIBLE_DYN_INVENTORY: "inventory.gcp.yml"
  ANSIBLE_CHECK: "--check -vvv"
  SECURE_FILES_DOWNLOAD_PATH: './secured_files/'

# hidden job (note the leading dot), extended by the jobs below
.prepare_ssh_ansible:
  # image: librespace/ansible:9.6.0
  image: registry.gitlab.com/gitlab-org/gitlab-environment-toolkit:latest
  before_script:
    # install missing packages
    - apt update --allow-releaseinfo-change -y -qq && apt install -y ansible curl bash jq
    # gitlab-ci secure files (cf. [4]) for GCP service account credentials
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
    - mkdir ./ansible/.gcp/
    - cp ${SECURE_FILES_DOWNLOAD_PATH}/gcp-key-ansible-sa.json ./ansible/.gcp/gcp-key-ansible-sa.json
    - cp ${SECURE_FILES_DOWNLOAD_PATH}/ssh-key-ansible-sa ./ansible/ssh-key-ansible-sa && chmod 400 ./ansible/ssh-key-ansible-sa
    # cf. https://docs.ansible.com/ansible/9/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
    - export GCP_AUTH_KIND="serviceaccount"
    - export GCP_SERVICE_ACCOUNT_FILE="./ansible/.gcp/gcp-key-ansible-sa.json"
    - export GCP_SCOPES="https://www.googleapis.com/auth/compute"
    - export ANSIBLE_HOME=./ansible
    - cd ./ansible/
    - export ANSIBLE_CONFIG=./ansible.cfg
    - ansible-galaxy install -r requirements.yaml --roles-path ./vendor-roles
    - ansible-galaxy collection install -r requirements.yaml -p ./vendor-collections/
    - ansible --version

ansible-inventory:
  extends: .prepare_ssh_ansible
  script:
    - ansible-inventory -i ${ANSIBLE_DYN_INVENTORY} --graph

ansible-ping:
  extends: .prepare_ssh_ansible
  script:
    - ansible all -i ${ANSIBLE_DYN_INVENTORY} -m ping -vvv

ssh-access:
  extends: .prepare_ssh_ansible
  script:
    - ansible-playbook -i ${ANSIBLE_DYN_INVENTORY} playbooks/ssh.yml


References:

[1] https://alex.dzyoba.com/blog/gcp-ansible-service-account/

[2] https://cloud.google.com/compute/docs/connect/create-ssh-keys 

[3] https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_inventory.html 

[4] https://docs.gitlab.com/ee/ci/secure_files/index.html

[5] https://cloud.google.com/compute/docs/oslogin

[6] https://cloud.google.com/compute/docs/oslogin/set-up-oslogin


Monday, 25 February 2013

Reusing SSH Connection

SOURCE: http://sshmenu.sourceforge.net/articles/transparent-mulithop.html

"The transparent multi-hop connections can be very useful but you may find that it takes a second or two to establish each connection. This delay can become annoying if it happens a lot (e.g.: every time you save a file from the text editor).
The good news is that you can configure SSH to reuse an existing connection. This means that, for example, if you have an SSH shell session running then a new connection for SCP can skip the connection setup phase. Two steps are required:
First, you must create a directory (or 'folder') which SSH will use to keep track of established connections:
mkdir ~/.ssh/tmp
Next, add these two lines at the start of your ~/.ssh/config (make sure to use your username in place of 'YOUR-NAME'):


ControlMaster auto
ControlPath   /home/YOUR-NAME/.ssh/tmp/%h_%p_%r
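
(My addition to the quote) To see the sharing in action, open a session to a host, then ask the control master about its state from another terminal (ssh's -O flag talks to the existing master; the pid below is illustrative):

$ ssh -O check your-host
Master running (pid=12345)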

As you can see, a small investment in time setting up your SSH configuration can pay back dividends in convenience."



Friday, 10 July 2009

svn + ssh on NetBSD

Problem: on a machine that is only reachable via ssh, I want to set up an svn server for version control. I want several users. These users must be able to do checkouts, commits and other svn operations, but must not be able to log in to the server for anything other than using the svn service.

Step 1:
Create an svn user, which will be the one used to log in to the server.

1.1 Authentication with SSH public/private keys

Each user creates an SSH key (ssh-keygen). Append the generated public key to ~svn/.ssh/authorized_keys:

cat clef.pub >> ~svn/.ssh/authorized_keys

Now users can log in directly as the svn user.

Let's take a closer look at the contents of ~svn/.ssh/authorized_keys...
Each line is independent, and of the form:
  command="COMMAND" TYPE KEY COMMENT

Using this syntax, to restrict access to the svn service only, we can also add "no-port-forwarding", as well as "no-agent-forwarding,no-X11-forwarding,no-pty":
  command="svnserve -t",no-port-forwarding TYPE KEY COMMENT

Then, once the SSH key is loaded (ssh-agent bash; ssh-add clef), we can run:

svn co svn+ssh://svn@host/path/to/base/repos




1.2 Multiple users
For now everyone logs in as the "svn" user. In particular, this makes it impossible to tell apart the authors of the various commits.

Add the "--tunnel-user=user" option to identify each user.


Step 2: Avoid having to expose the full path on the server

If the repository is in /path/to/base/repos, then:
  command="svnserve -r /path/to/base/",no-port-forwarding TYPE KEY COMMENT

Warning: it seems the -r and -t options are incompatible unless the --tunnel-user option is also used.


3. And finally...

We end up with, in ~svn/.ssh/authorized_keys:
   command="/path/to/svnserve -t -r /repository/root --tunnel-user=alice",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty TYPE1 KEY1 COMMENT1
   command="/path/to/svnserve -t -r /repository/root --tunnel-user=bob",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty TYPE2 KEY2 COMMENT2






Monday, 5 January 2009

Molly-guard (for ssh)

While installing sshd on my eee, Ubuntu advised me to also add "molly-guard", which I had never heard of.

Found on http://packages.ubuntu.com/fr/intrepid/molly-guard

protects machines from accidental shutdowns/reboots

The package installs a shell script that overrides the existing shutdown/reboot/halt/poweroff commands and first runs a set of scripts, which all have to exit successfully, before molly-guard invokes the real command.

One of the scripts checks for existing SSH sessions. If any of the four commands are called interactively over an SSH session, the shell script prompts you to enter the name of the host you wish to shut down. This should adequately prevent you from accidental shutdowns and reboots.

This shell script passes through the commands to the respective binaries in /sbin and should thus not get in the way if called non-interactively, or locally.

Monday, 17 November 2008

sshfs mount

Still so much more convenient than ssh + scp when all you need is to transfer a few files!


# sshfs user@serveur.ext:/home/user/ mountpoint/
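
And when you are done, unmount with fusermount (on Linux; the sshfs counterpart of umount):

# fusermount -u mountpoint/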



Tuesday, 12 August 2008

cron + ssh + keychain

Keychain:



In .bash_profile:


#!/bin/bash
#example ~/.bash_profile file
/usr/bin/keychain private keys
#redirect ~/.ssh-agent output to /dev/null to zap the annoying
#"Agent PID" message
source ~/.ssh-agent > /dev/null


cron:

crontab -e:

42 4 * * * time /home/cm/usr/bin/script_sauvegarde.sh



script_sauvegarde.sh:

#!/bin/bash

# load the ssh-agent environment prepared by keychain
keychain $HOME/SSH_PRIV_KEY
source /home/cm/.keychain/MACHINE-NAME-sh

# backup: push CLIENT/ to the server over ssh
rsync -e ssh --exclude-from=RSYNC_EXCLUDE_FILE -Cavz CLIENT/ SERVER:PATH_ON_SERVER/
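
(Optional check, my addition) To verify that a non-interactive run like cron will be able to authenticate without prompting, you can simulate it with BatchMode, which makes ssh fail instead of asking for a passphrase (SERVER is the same placeholder as in the script):

$ source /home/cm/.keychain/MACHINE-NAME-sh
$ ssh -o BatchMode=yes SERVER true && echo "agent OK for cron"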