Showing posts with label bash. Show all posts

Friday, April 17, 2026

Managing kubeconfig files + download from Rancher

Managing kubeconfig files across multiple Rancher clusters

When you manage several Kubernetes clusters through Rancher, you quickly end up with a pile of kubeconfig files. Downloading them by hand from the Rancher UI is tedious, keeping track of which ones are loaded is error-prone, and testing connectivity after a VPN change or token rotation is a chore.

Here is the setup I landed on: a download script, a directory convention, auto-discovery in the shell, and a parallel connectivity tester.

  1. Directory layout
  2. Downloading kubeconfigs from Rancher
  3. Auto-discovery in .bashrc
  4. Testing connectivity
  5. Putting it all together

Directory layout

~/.kube/
  config                        # default kubectl config (GKE, kind, etc.)
  clusters/
    rancher_kubeconfig_dl.sh    # download script
    test-kubeconfigs.sh         # connectivity tester
    local.yaml                  # kubeconfig files live here
    cluster-us-east-1.yaml
    cluster-us-west-2.yaml
    cluster-eu-west-1.yaml
    cluster-ap-south-1.yaml

The scripts and kubeconfig files all live in ~/.kube/clusters/. The test script filters by kind: Config so it ignores non-kubeconfig files like itself.

Downloading kubeconfigs from Rancher

Rancher exposes a generateKubeconfig action on its v3 API. The following script lists all clusters your token has access to and downloads each kubeconfig into a directory.

First, create an API key in Rancher: top-right avatar, Account & API Keys, Create API Key. You get a token like token-xxxxx:yyyyyyy.

#!/usr/bin/env bash
# rancher_kubeconfig_dl.sh
# Downloads all kubeconfig YAML files from a Rancher instance.
#
# Usage:
#   RANCHER_TOKEN=token-xxxxx:yyyyyyy ./rancher_kubeconfig_dl.sh
#
# To get a token manually:
#   Rancher UI -> top-right avatar -> Account & API Keys -> Create API Key

set -euo pipefail

RANCHER_URL="${RANCHER_URL:-https://rancher.example.com}"
RANCHER_TOKEN="${RANCHER_TOKEN:-}"
OUTPUT_DIR="${OUTPUT_DIR:-.}"

# -- Auth --
if [[ -z "$RANCHER_TOKEN" ]]; then
  echo "ERROR: Set RANCHER_TOKEN (e.g. token-xxxxx:yyyyyyy)"
  echo "       Rancher UI -> Avatar -> Account & API Keys -> Create API Key"
  exit 1
fi

CURL_OPTS=(-sSf -H "Authorization: Bearer ${RANCHER_TOKEN}")

# -- Fetch cluster list --
echo "-> Fetching cluster list from ${RANCHER_URL} ..."
clusters_json=$(curl "${CURL_OPTS[@]}" "${RANCHER_URL}/v3/clusters")

tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
echo "$clusters_json" | jq -r '.data[] | .id + "\t" + .name' > "$tmp"

count=$(wc -l < "$tmp" | tr -d ' ')
[[ "$count" -eq 0 ]] && { echo "No clusters found (check your token permissions)."; exit 1; }
echo "-> Found ${count} cluster(s)"
mkdir -p "$OUTPUT_DIR"

# -- Download kubeconfig per cluster --
while IFS=$'\t' read -r id name; do
  # Put '-' last in the tr set so it is literal, not a range delimiter
  safe=$(printf '%s' "$name" | tr -cs '[:alnum:]_.-' '-' | sed 's/-*$//')
  out="${OUTPUT_DIR}/${safe}.yaml"
  echo "  ${name} (${id}) -> ${out}"
  curl "${CURL_OPTS[@]}" \
    -X POST \
    "${RANCHER_URL}/v3/clusters/${id}?action=generateKubeconfig" \
    | jq -r '.config' \
    > "$out"
done < "$tmp"

echo ""
echo "Done. Kubeconfigs saved to: ${OUTPUT_DIR}/"
ls -lh "$OUTPUT_DIR"

Run it once, or whenever you add a new cluster to Rancher:

RANCHER_TOKEN=token-xxxxx:yyyyyyy ./rancher_kubeconfig_dl.sh

Each cluster gets its own file, named after the cluster. No manual copy-pasting from the Rancher UI.

Auto-discovery in .bashrc

kubectl merges all files listed in the KUBECONFIG environment variable (colon-separated). Instead of maintaining a hardcoded list that goes stale every time you add or remove a cluster, use find to discover them at shell startup:

# In ~/.bashrc or ~/.zshrc
export KUBECONFIG=~/.kube/config:$(find ~/.kube/clusters -name '*.yaml' 2>/dev/null | tr '\n' ':' | sed 's/:$//')

This starts with the default ~/.kube/config (for GKE, kind, minikube, or anything else) and appends every YAML file found in the clusters directory. find, tr, and sed are all POSIX — this works on macOS, Linux, and BSDs. Avoid find -printf which is a GNU extension and won't work on macOS.

Drop a new file in, open a new terminal, and kubectx sees it immediately. Remove a file and it disappears. No editing required.
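If you would rather not open a new terminal, the same one-liner can be wrapped in a small function and re-run in the current shell. This is a sketch; the function name kubeconfig-refresh is my own, and the paths match the layout above:

```shell
# Rebuild KUBECONFIG in the current shell from whatever YAML files are
# in ~/.kube/clusters right now (hypothetical helper, not from Rancher).
kubeconfig-refresh() {
  KUBECONFIG=~/.kube/config:$(find ~/.kube/clusters -name '*.yaml' 2>/dev/null | tr '\n' ':' | sed 's/:$//')
  export KUBECONFIG
}
```

After dropping in or deleting a file, run kubeconfig-refresh and the current shell picks up the change immediately.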

Testing connectivity

After a VPN reconnect, a token rotation, or just to check that everything is reachable, run the test script. It finds all kubeconfig files in a directory, extracts every context from each file, and tests them all in parallel:

#!/usr/bin/env bash
# test-kubeconfigs.sh
# Test connectivity for all kubeconfig YAML files in a directory.
# Usage: ./test-kubeconfigs.sh [directory]
#   directory: path containing kubeconfig YAML files (default: script directory)

set -euo pipefail
shopt -s nullglob

dir="${1:-$(dirname "$0")}"

if [[ ! -d "$dir" ]]; then
  echo "Error: '$dir' is not a directory" >&2
  exit 1
fi

test_context() {
  local f="$1" ctx="$2" name
  name="$(basename "$f")"
  if kubectl --kubeconfig="$f" --context="$ctx" cluster-info --request-timeout=5s &>/dev/null; then
    echo "OK    $name  context=$ctx"
  else
    echo "FAIL  $name  context=$ctx"
  fi
}

pids=()
found=0

for f in "$dir"/*.yaml "$dir"/*.yml; do
  [[ -f "$f" ]] || continue
  grep -q 'kind: Config' "$f" 2>/dev/null || continue
  found=$((found + 1))

  # "|| true" keeps set -e from aborting the script when kubectl fails
  contexts=$(kubectl --kubeconfig="$f" config get-contexts -o name 2>/dev/null || true)
  if [[ -z "$contexts" ]]; then
    echo "SKIP  $(basename "$f")  (no contexts found)"
    continue
  fi

  for ctx in $contexts; do
    test_context "$f" "$ctx" &
    pids+=($!)
  done
done

if [[ $found -eq 0 ]]; then
  echo "No kubeconfig files (kind: Config) found in $dir"
  exit 1
fi

fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=$((fail + 1))
done

pass=$(( ${#pids[@]} - fail ))
echo ""
echo "Results: $pass ok, $fail failed (from $found files)"
[[ $fail -eq 0 ]]

Each test runs as a background job, so checking six clusters takes as long as the slowest one (typically the 5-second timeout for an unreachable cluster), not six times that.

$ ./test-kubeconfigs.sh
OK    cluster-us-east-1.yaml   context=cluster-us-east-1
OK    cluster-us-west-2.yaml   context=cluster-us-west-2
OK    cluster-eu-west-1.yaml   context=cluster-eu-west-1
FAIL  cluster-ap-south-1.yaml  context=cluster-ap-south-1
OK    local.yaml               context=local

Results: 4 ok, 1 failed (from 5 files)

Putting it all together

The workflow is:

  1. Run rancher_kubeconfig_dl.sh once (or after adding clusters in Rancher) to download all kubeconfigs into ~/.kube/clusters/.
  2. Open a new terminal. The find-based KUBECONFIG export picks up all files automatically. kubectx lists every context.
  3. Run test-kubeconfigs.sh to verify connectivity to all clusters in parallel.

No manual list to maintain, no UI clicking, and a quick way to verify everything is reachable.

Tuesday, February 20, 2024

terraform variable <-> variables {script, gitlab-ci}



terraform -> gitlab/script/etc.


src: https://stackoverflow.com/questions/75531444/how-to-use-terraform-variable-into-gitlab-ci-yml

  • Terraform : use an "output" 

locals {
  toto = format("%s-%s", var.ressource_name_pattern, "cloudfront-edge")
}

output "toto" {
 value = local.toto
}

  • Script: get output from terraform command

foobar=$(terraform output -raw toto)



gitlab -> Terraform


Terraform reads environment variables prefixed with TF_VAR_ as input variables:

=> an exported "TF_VAR_toto" populates the variable "toto" declared in variables.tf
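A minimal sketch of the mechanism (the variable name toto is illustrative; in .gitlab-ci.yml you would set this under variables: instead of exporting it):

```shell
# Any exported variable prefixed with TF_VAR_ is inherited by the
# terraform process and matched against: variable "toto" {} in variables.tf
export TF_VAR_toto="from-gitlab"
env | grep '^TF_VAR_toto'   # -> TF_VAR_toto=from-gitlab
```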





terraform -> gitlab/script/etc. VARIABLE (in project settings)

src https://www.reddit.com/r/Terraform/comments/mwmq4e/comment/gvjo7g3/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

     For example : create an EKS cluster & then create a variable with the KUBECONFIG data in another project which has the code for the apps & trigger the deployment of those apps into newly created cluster using that variable.

    - terragrunt run-all apply --terragrunt-non-interactive -auto-approve tfplan-$CI_COMMIT_SHA
    - terraform output kubectl_config > kubectl_config
    - |
      curl -s -XPUT -H "PRIVATE-TOKEN: $GITLAB_API_RW_PRIVATE_TOKEN" $CI_API_V4_URL/groups/$GROUP_ID/variables/KUBECONFIG \
      --form "value=$(cat kubectl_config)" \
      --form "variable_type=file" \
      --form "protected=false" \
      --form "masked=false" \
      --form "environment_scope=*"






Tuesday, October 5, 2021

Shell tools : shellcheck


  • ShellCheck, a static analysis tool for shell scripts

https://github.com/koalaman/shellcheck



The README suggests adding this to your Makefile:

check-scripts:
    # Fail if any of these files have warnings
    shellcheck myscripts/*.sh

or to your CI configuration (here: .travis.yml):

script:
  # Fail if any of these files have warnings
  - shellcheck myscripts/*.sh
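To see the kind of thing ShellCheck flags, here is a classic it reports as SC2086 (word splitting on an unquoted variable); the file name is made up:

```shell
# The classic SC2086: $f is unquoted, so its value gets word-split.
f="my file.txt"
set -- $f        # shellcheck warns here (SC2086)
echo "$#"        # -> 2  (split into "my" and "file.txt")
set -- "$f"      # quoted, as intended
echo "$#"        # -> 1
```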



Monday, August 31, 2020

Rsync and exclusions

 

An old post about rsync, found back on Disqus (original date: circa 2016).

You might want to use rsync's exclude options directly in your script, either by pointing at a file containing all the exclusions to perform, or by specifying them on the command line:

--exclude-from=<file with one pattern per line>
--exclude=<pattern (file or directory)>

For example :

rsync -r -a -v -e "ssh -l Username" \
        --exclude 'dbconfig.xml' --exclude 'WEB-INF/classes/' --delete \
        /local/Directory remote.server:/remoteDirectory

One important thing to keep in mind when excluding a directory is that rsync will always consider the path to be relative to the source directory.



This is useful, for example, when pushing data from a production JIRA instance to a staging JIRA instance: dbconfig.xml differs between them (different DB auth parameters), so you want those files left untouched.

Monday, October 26, 2015

Scripting : set variables depending on how the script was launched (terminal mode, cron/auto mode, ...)

You all know the deal: when you launch a command interactively, it (obviously) comes with all your environment variables. But when you run it from cron, none of them are present. We therefore want to set some variables only in some cases. The "tty" command will help us do so.


You can use the tty tool to check whether a script is attached to a terminal:


if ! tty -s
then
    exec >/dev/null 2>&1
else
    MAIL_DEST="email@server.ext"
fi
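The pattern can be made reproducible for testing by forcing stdin to /dev/null, which is what the script effectively sees under cron (MODE is an illustrative variable name):

```shell
# `tty -s` exits 0 only when stdin is a terminal. With stdin redirected
# from /dev/null (as under cron), the non-interactive branch is taken.
if tty -s </dev/null; then
    MODE="interactive"
else
    MODE="automated"
fi
echo "$MODE"   # -> automated (stdin is /dev/null here)
```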



  • The command tty returns :

User Commands                                              tty(1)

NAME
     tty - return user's terminal name

SYNOPSIS
     tty [-l] [-s]

DESCRIPTION
     The tty utility writes to the standard output  the  name  of
     the  terminal  that is open as standard input. The name that
     is used is equivalent to the string that would  be  returned
     by the ttyname(3C) function.

OPTIONS
     The following options are supported:

     -l       Prints the synchronous line  number  to  which  the
              user's terminal is connected, if it is on an active
              synchronous line.

     -s       Inhibits printing of the terminal path name, allow-
              ing one to test just the exit status.

EXIT STATUS
     The following exit values are returned:

     0        Standard input is a terminal.

     1        Standard input is not a terminal.

     >1       An error occurred.

 

Wednesday, July 23, 2014

Test dir var (eval shell)



Test whether the variable whose name is given contains a valid directory. If not, exit. (This uses bash indirect expansion, ${!var}, to dereference the variable name.)
testdirvar () {
    local tmp=$1
    if [ -d "${!tmp}" ] ; then
        echo "${tmp}=${!tmp} (variable ${tmp}: directory exists)"
    else
        echo "!!! ${tmp}=${!tmp} is not a directory (variable ${tmp}) !!!"
        exit 1
    fi
}
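The mechanism behind testdirvar is bash's indirect expansion, which can be tried on its own (variable names here are illustrative):

```shell
# ${!name} expands to the value of the variable whose *name* is stored
# in $name -- this is how testdirvar dereferences its argument.
BACKUP_DIR="/tmp"
varname="BACKUP_DIR"
echo "${!varname}"   # -> /tmp
```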

Thursday, July 12, 2012

Command(s) of the day

Some colleagues started putting commands of the day on the white-board every few days:
  • comm compare two sorted files line by line
  • paste merge lines of files
  • colordiff diff, with colors
  • colorgcc
  • pgrep look for a process name
  • pkill kill processes by name, not by pid
  • ...
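As a quick taste of the first two, comm and paste on two small sorted files (the /tmp paths are illustrative; comm requires sorted input):

```shell
printf 'a\nb\nc\n' > /tmp/f1
printf 'b\nc\nd\n' > /tmp/f2

comm -12 /tmp/f1 /tmp/f2   # only lines common to both -> b, c
paste /tmp/f1 /tmp/f2      # merge line by line: "a<TAB>b", "b<TAB>c", ...
```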

Thursday, February 16, 2012

Signals and terminals, stty

It all started with a simple question from a colleague: how do you send a SIGQUIT in a terminal on macOS?
A quick search online gives "^\" as the answer, unlike Linux where "^d" supposedly serves that purpose.

How do you configure this, and how do you find it out?


STTY(1)                          User Commands                         STTY(1)
NAME
       stty - change and print terminal line settings

SYNOPSIS
       stty [-F DEVICE | --file=DEVICE] [SETTING]...
       stty [-F DEVICE | --file=DEVICE] [-a|--all]
       stty [-F DEVICE | --file=DEVICE] [-g|--save]

DESCRIPTION
       Print or change terminal characteristics.

Which on my machine (Ubuntu), for example, gives:
 
$ stty -a
speed 38400 baud; rows 40; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?;
swtch = M-^?; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W;
lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 hupcl -cstopb cread -clocal -crtscts
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
-iuclc ixany imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt 
So in fact "^d" sends an "eof", not a SIGQUIT... As the stty output shows, the quit character (bound to SIGQUIT) is "^\" on Linux too.
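The signal side of the mapping can also be checked from the shell with bash's kill builtin, which translates signal names to numbers:

```shell
# SIGQUIT is signal 3; the `quit = ^\` entry in `stty -a` is the key
# bound to it, while ^D is the `eof` character, not a signal at all.
kill -l QUIT   # -> 3
```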

Thursday, October 7, 2010

Option Dash Dash

A handy property of GNU, BSD & co. tools is the ability to mark the end of option parsing with "--" (dash dash).

This is convenient, for example, when grepping for something that looks like an option, without the search string being interpreted as a grep option. But that is just one example.


In practice, to find every occurrence of '-l' in the file 'toto.txt':

grep -- -l toto.txt
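A reproducible sketch, with the file created on the fly:

```shell
# Recreate an example file so the command can be tried as-is:
printf 'option -l is here\nplain line\n' > /tmp/toto.txt

# After --, "-l" is taken literally as the pattern, not as grep's
# "list file names" option:
grep -- -l /tmp/toto.txt   # -> option -l is here
```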

Tuesday, December 15, 2009

Bash tricks: ":", the command that does nothing

In bash, ":" (the "colon" command) is a command that takes as many arguments as you like but does nothing.

It is particularly handy for dynamically commenting out code.
For example:


#!/bin/bash
#DEBUG=echo
DEBUG=:

if [ -f /tmp/toto ]; then
    $DEBUG "/tmp/toto exists"
    echo "$(date)" >> /tmp/toto
else
    $DEBUG "/tmp/toto does not exist"
    echo "file created at $(date)" > /tmp/toto
fi


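Another everyday use of ":" is as a no-op host for parameter-expansion side effects, such as assigning a default value (OUTPUT_DIR is an illustrative name):

```shell
# ":" evaluates its arguments and discards them, so the expansion's
# side effect (assign a default if unset) is all that happens.
unset OUTPUT_DIR
: "${OUTPUT_DIR:=/tmp/output}"
echo "$OUTPUT_DIR"   # -> /tmp/output
```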

Wednesday, October 21, 2009

Bash "bind" command for defining keyshortcuts

http://www.geocities.com/h2428/petar/bash_bind.htm

Among other things, it contains:


# Now map xterm's alternative keybindings to existing functionality
# Some are simple translations to the corresponding M- combinations
# ctrl+left/right arrows:
bind '"\e\x5b\x31\x3b\x35\x44"':backward-word
bind '"\e\x5b\x31\x3b\x35\x43"':forward-word
# alt+b/f:
bind '"\xe2"':'"\M-b"'
bind '"\xe6"':'"\M-f"'
# alt+backspace:
bind '"\xff"':backward-kill-word
# alt+'.':
bind '"\xae"':yank-last-arg
# alt+k:
bind '"\xeb"':"\"\M-k\""
# alt+w:
bind '"\xf7"':'"\M-w"'

Friday, July 10, 2009

svn + ssh on NetBSD

Problem: on a machine reachable only over ssh, I want to set up an svn server for version control, with several users. These users must be able to run "checkout", "commit" and other svn operations, but must not be able to log in to the server for anything other than using the svn service.

Step 1:
Create an svn user that will be the one used to log in to the server.

1.1 ssh public/private key authentication

Each user generates an ssh key (ssh-keygen). Add the generated public key to ~/.ssh/authorized_keys:

cat clef.pub >> ~svn/.ssh/authorized_keys

Users can now log in directly as the svn user.

Let's take a closer look at the contents of ~svn/.ssh/authorized_keys...
Each line is independent, and of the form
  command="COMMAND" TYPE KEY COMMENT

Using this syntax to restrict access to the svn service only, we can also add "no-port-forwarding", as well as "no-agent-forwarding,no-X11-forwarding,no-pty":
  command="svnserve -t",no-port-forwarding TYPE KEY COMMENT

Then, once the ssh key is loaded (ssh-agent bash; ssh-add clef), run:

svn co svn+ssh://svn@host/path/to/base/repos




1.2 Multiple users
For now everyone logs in as the "svn" user. In particular, this makes it impossible to tell apart the authors of the various commits.



Add the "--tunnel-user=user" option to identify each of them.


Step 2: Avoid having to spell out the full path on the server

If the repo is in /path/to/base/repos, then:
  command="svnserve -r /path/to/base/",no-port-forwarding TYPE KEY COMMENT

Warning: it seems the -r and -t options are incompatible unless --tunnel-user is also used.


3 And finally...

In ~svn/.ssh/authorized_keys we end up with:
   command="/path/to/svnserve -t -r /repository/root --tunnel-user=alice",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty TYPE1 KEY1 COMMENT1
command="/path/to/svnserve -t -r /repository/root --tunnel-user=bob",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty TYPE2 KEY2 COMMENT2
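Maintaining those lines by hand gets tedious as users come and go; here is a sketch of a generator (the file names, key contents, and paths are all illustrative) that emits one authorized_keys line per <user>.pub file:

```shell
# Illustrative setup: one public-key file per user (contents are fake).
mkdir -p /tmp/svnkeys && cd /tmp/svnkeys
echo 'ssh-ed25519 AAAAexampleAlice alice@laptop' > alice.pub
echo 'ssh-ed25519 AAAAexampleBob bob@laptop'     > bob.pub

# Emit one authorized_keys line per <user>.pub, using the file name as
# the --tunnel-user value; append the output to ~svn/.ssh/authorized_keys.
for pub in *.pub; do
  user="${pub%.pub}"
  printf 'command="/path/to/svnserve -t -r /repository/root --tunnel-user=%s",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty %s\n' \
    "$user" "$(cat "$pub")"
done
```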



1.4 Links/acknowledgements.



Tuesday, December 16, 2008

bash 'type' function

$ type echo
echo is a shell builtin

$ type toto
bash: type: toto: not found

$ type which
which is /usr/bin/which
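type also recognizes shell functions and aliases, which which(1) cannot; bash's -t flag prints just the kind (the function name greet is made up):

```shell
# Define a shell function and ask type about it.
greet() { echo "hi"; }
type -t greet   # -> function
type -t cd      # -> builtin
```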



NAME
type - write a description of command type

SYNOPSIS
type name...

DESCRIPTION
The type utility shall indicate how each argument would be interpreted
if used as a command name.