Wednesday, April 1, 2026

Reverse-engineering the Ropvacnic S1 vacuum for Home Assistant integration

Integrating the Ropvacnic S1 into Home Assistant via LocalTuya

The Ropvacnic S1 is a Tuya-based robot vacuum. Getting it to work locally in Home Assistant requires some digging — the DP numbers aren't documented anywhere, and LocalTuya has a bug in its locate function. This guide documents everything I found.

Why local control? Faster response, no cloud dependency, works if Tuya servers go down. The device still sends data to Tuya's cloud, but Home Assistant commands go directly over your LAN.

Setup: Home Assistant OS 2026.3.4 · LocalTuya 5.2.3 · Protocol 3.4 · Region: Western America


Step 1 — Tuya IoT Platform Setup

Create an account at iot.tuya.com (not tuya.com — click "Developer Platform" at the bottom). Create a Cloud Project with these settings:

  • Industry: Smart Home
  • Development Method: Smart Home
  • Data Center: Western America (for Canada/USA accounts)

Important: The Data Center must match your Tuya Smart app region. A mismatch causes a "Data center inconsistency" error when scanning the QR code.

Under Devices → Link Tuya App Account, scan the QR code with your Tuya Smart app. Then note:

  • Access ID — from the Overview tab
  • Access Secret — from the Overview tab
  • UID — from the linked account (starts with az...)

Step 2 — Discover DPs with tinytuya

A DP (Data Point) is a numbered channel that controls one specific function of the device. Install tinytuya on your computer to identify them all:

pip3 install tinytuya
python3 -m tinytuya wizard
# Enter your Access ID, Secret, UID and region (us)
 
python3 -m tinytuya snapshot
# Poll: Y → shows all live DP values

Complete DP Map — Ropvacnic S1

DP  Code                  Type     Values
1   switch                Boolean  true / false
2   switch_go             Boolean  true / false
3   mode                  Enum     standby, random, wall_follow, spiral, chargego
4   direction_control     Enum     forward, backward, turn_left, turn_right, stop
5   status                Enum     standby, smart_clean, goto_charge, charging, charge_done, paused
6   residual_electricity  Integer  0–100 (%)
7   clean_time            Integer  minutes
9   clean_area            Integer
13  seek (locate)         Boolean  true / false
14  suction               Enum     gentle, normal, strong
17  edge_brush            Integer  0–100 (%)
18  filter                Integer  0–100 (%)
20  water_control         Enum     closed, low, middle, high

Note: DP 13 (seek/locate) is not visible in the standard tinytuya output. It was found by brute-force testing DPs 10–16 using d.set_value(dp, True) and listening for the vacuum's beep.
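The brute-force scan can be scripted with tinytuya. A minimal sketch (the device ID, IP and local key are placeholders taken from the wizard's devices.json output; a human still has to confirm the beep):

```python
def probe_dps(device_id, ip, local_key, dps=range(10, 17)):
    """Set each candidate DP to True and wait for a human to confirm the beep."""
    import tinytuya  # deferred import so the sketch can be read without the package

    d = tinytuya.OutletDevice(device_id, ip, local_key)
    d.set_version(3.4)  # must match the device's Tuya protocol version
    for dp in dps:
        print(f"Setting DP {dp} to True ...")
        d.set_value(dp, True)
        input("Did the vacuum beep? (Enter = next DP, Ctrl-C = stop) ")
```

Call it as `probe_dps("bf...device_id", "192.168.1.50", "local_key_here")` and note which DP produces the beep.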


Step 3 — LocalTuya Entity Configuration

Install LocalTuya via HACS, restart HA, then go to Settings → Devices & Services → Add Integration → LocalTuya.

Set Manual DPS to: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25

Configure the entity with these values:

Field                  Value
ID (status DP)         5
Power DP (powergo_dp)  1
Mode DP                3
Battery DP             6
Fan Speed DP           14
Clean Time DP          7
Clean Area DP          9
Locate DP              13
Fault DP               (leave empty)
Idle Status            standby,sleep
Docked Status          charging,chargecompleted,charge_done
Returning Status       goto_charge
Modes list             standby,random,wall_follow,spiral,chargego
Return home mode       chargego
Fan speeds list        gentle,normal,strong
Pause state            paused
Stop status            standby
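Once the entity is created, the standard Home Assistant vacuum services drive it. For example, in a script (the entity_id below is hypothetical; use whatever name HA assigned to your device):

# script sequence: locate beep (DP 13), then set suction (DP 14)
sequence:
  - service: vacuum.locate
    target:
      entity_id: vacuum.ropvacnic_s1
  - service: vacuum.set_fan_speed
    target:
      entity_id: vacuum.ropvacnic_s1
    data:
      fan_speed: strong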

Step 4 — Fix vacuum.py (LocalTuya bug)

LocalTuya 5.2.3 has a bug in async_locate — it sends an empty string "" instead of True, so the locate button does nothing. Fix it via the Terminal add-on:

sed -i 's/set_dp("", self._config\[CONF_LOCATE_DP\])/set_dp(True, self._config[CONF_LOCATE_DP])/g' \
  /config/custom_components/localtuya/vacuum.py
 
# Verify
grep -A3 "async_locate" /config/custom_components/localtuya/vacuum.py

Step 5 — Fix config via Terminal (if GUI doesn't save)

The LocalTuya GUI sometimes doesn't persist certain changes (notably mode_dp and id). Edit the config file directly:

# Backup first!
cp /config/.storage/core.config_entries \
   /config/.storage/core.config_entries.backup
 
# Status DP = 5 (reads DP 5 for state)
sed -i 's/"id":1,"idle_status_value":"standby,sleep"/"id":5,"idle_status_value":"standby,sleep"/g' \
  /config/.storage/core.config_entries
 
# powergo_dp must be 1 (start/pause)
sed -i 's/"powergo_dp":5/"powergo_dp":1/g' \
  /config/.storage/core.config_entries
 
# mode_dp = 3 (stop + return to base)
sed -i 's/"mode_dp":0/"mode_dp":3/g' \
  /config/.storage/core.config_entries
 
# locate_dp = 13
sed -i 's/"locate_dp":0/"locate_dp":13/g' \
  /config/.storage/core.config_entries
 
# Remove fault_dp entirely (causes false errors)
sed -i 's/,"fault_dp":[0-9]*//g' \
  /config/.storage/core.config_entries

After edits, reload LocalTuya: Settings → Devices & Services → LocalTuya → 3 dots → Reload — no full restart needed.


Final Result

After completing all steps, the Ropvacnic S1 is fully controllable from Home Assistant with local-only communication:

  • State correctly shows Docked, Cleaning, Returning, and Paused
  • Start, Pause, Stop, and Return to Base all work
  • Locate (beep) works after the vacuum.py fix
  • Battery percentage displayed in real time
  • Fan speed (gentle / normal / strong) selectable from HA
  • Clean time and area tracked per session

Friday, November 21, 2025

UN / UNODC e-learning



 https://elearningunodc.org/local/pages/?id=3

Thursday, September 11, 2025

ImageMagick: reduce the size of JPG files by 50%

 

mkdir -p resized  # the destination directory must exist, or magick errors out
for file in *.jpg; do magick "$file" -resize 50% "resized/$file"; done

Monday, September 8, 2025

Ansible: GCP (Google Cloud Compute) dynamic inventory with cache

Inventory definition for GCP Compute

---
plugin: google.cloud.gcp_compute
# https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_inventory.html
# https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/docs/environment_configure.md#google-cloud-platform-gcp

projects:
  - project_id

auth_kind: serviceaccount
# must match `ansible_user` below, cf. other article on how to set this up
service_account_file: ./gcp-sa.json

filters:
  # only return running instances; we won't be able to connect to stopped instances
  - status = RUNNING
  # for example, only return compute instances with label foo = foobar
  - labels.foo = foobar

keyed_groups:
  - key: labels
    prefix: label

hostnames:
  - name
  - public_ip
  - private_ip

compose:
  # <ansible variable to be set>: <data from GCP discovery>
  # Set an inventory parameter to use the public IP address to connect to the host
  # ansible_host: public_ip
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP
  ansible_user: "'sa_115528571027174573787'"

  # GCP compute label "activate_this" value => ansible variable "run_this" value
  run_this: labels['activate_this']
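The inventory above does not yet enable the caching promised in the title. The plugin supports Ansible's standard inventory cache options; a sketch (the timeout and cache path are arbitrary choices):

# Append to the inventory file: cache discovery results so repeated
# runs don't hit the GCP API every time
cache: true
cache_plugin: ansible.builtin.jsonfile
cache_timeout: 600            # seconds before the cache is considered stale
cache_connection: /tmp/gcp_inventory_cache

Note the plugin only picks up inventory files whose name ends in gcp.yml or gcp.yaml; verify with `ansible-inventory -i inventory.gcp.yml --graph` (the second run should return noticeably faster).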


Thursday, August 28, 2025

Systemd health check with a side service and monotonic timer, auto-healing

What: work around systemd's lack of a built-in health check mechanism

  • systemd service "what-service.service"
  • systemd timer "what-service-healthcheck.timer"
    • triggers a systemd service "what-service-healthcheck.service",
      which launches a script "service_health_check.sh"
    • the script:
      • curls the health check URL "HEALTH_CHECK_URL"
      • if KO, restarts the targeted service



what-service-healthcheck.timer

[Unit]
Description=Run health check every 15 seconds
[Timer]
# Wait 1 minute after boot before the first check
OnBootSec=1min
# Run the check 15 seconds after the last time it finished
OnUnitActiveSec=15s
[Install]
WantedBy=timers.target


By default, a systemd timer triggers the service unit with the same name, so there is no need to specify it.

what-service-healthcheck.service.j2
[Unit]
Description=Health Check for {{ what_service }}
Requires={{ what_service }}.service
# OnFailure= belongs in [Unit], not [Service]: when the check fails,
# systemd starts the target service again
OnFailure={{ what_service }}.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/service_health_check.sh
# No Restart= here: systemd rejects Restart= values other than "no"
# for Type=oneshot units, so the restart is driven by OnFailure= above


service_health_check.sh
#!/bin/bash
# The health check endpoint
HEALTH_CHECK_URL="http://localhost:{{ running_port }}/health_check"
# Use curl to check the endpoint.
# --fail: makes curl exit with a non-zero status code on server errors (4xx or 5xx).
# --silent: hides the progress meter.
# --output /dev/null: discards the response body.
if ! curl --silent --fail --max-time 2 --output /dev/null "$HEALTH_CHECK_URL"; then
  echo "Health check failed for {{ service_name }}. Restarting..."
  # The actual restart is driven by the healthcheck unit's OnFailure=
  exit 1
fi
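The Ansible tasks below template a what-service.service.j2 unit that isn't shown in these notes; a minimal hypothetical sketch (the ExecStart path is a placeholder):

what-service.service.j2
[Unit]
Description={{ what_service }}
After=network.target

[Service]
# Placeholder: point ExecStart at the real service binary
ExecStart=/usr/local/bin/{{ what_service }}
Restart=on-failure

[Install]
WantedBy=multi-user.target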



Deploying through Ansible:

<role>/tasks/main.yml
---
- name: Generate what-service systemd file
  ansible.builtin.template:
    src: what-service.service.j2
    dest: /etc/systemd/system/what-service.service
    mode: "0644"
  notify: Restart what-service

- name: Template the health check script (it contains Jinja variables)
  ansible.builtin.template:
    src: service_health_check.sh
    dest: /usr/local/bin/service_health_check.sh
    owner: root
    group: root
    mode: '0755'
  vars:
    service_name: what-service

- name: Template the health check systemd service file
  ansible.builtin.template:
    src: what-service-healthcheck.service.j2
    dest: /etc/systemd/system/what-service-healthcheck.service
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Copy the health check systemd timer file
  ansible.builtin.copy:
    src: what-service-healthcheck.timer
    dest: /etc/systemd/system/what-service-healthcheck.timer
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Enable and start the health check timer
  ansible.builtin.systemd:
    name: what-service-healthcheck.timer
    state: started
    enabled: true
    daemon_reload: true  # ensures systemd is reloaded before starting


<role>/handlers/main.yml
---
# ansible.builtin.systemd is used throughout: the plain service module
# has no daemon_reload option
- name: Restart what-service
  ansible.builtin.systemd:
    name: what-service
    state: restarted
    daemon_reload: true

- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true

- name: Restart what-service-healthcheck.timer
  ansible.builtin.systemd:
    name: what-service-healthcheck.timer
    state: restarted
    daemon_reload: true

Thursday, July 24, 2025

apt_info - Ansible tasks + roles to install apt_info.py automatically alongside node-exporter + Grafana dashboard

Create a file with OpenMetrics values so that they are exported alongside the node-exporter metrics.

=> A script runs every 12 h, reports the status of apt packages pending upgrade, and writes the result to /var/lib/node_exporter/apt_info.prom,

which is ingested by Prometheus when it scrapes node-exporter.


The metrics are used by a Grafana dashboard available here: https://grafana.com/grafana/dashboards/23777-apt-ugrades/
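For reference, the textfile written by apt_info.py looks roughly like this (metric names quoted from memory of the upstream script, so double-check against its actual output):

# HELP apt_upgrades_pending Apt packages pending updates by origin.
# TYPE apt_upgrades_pending gauge
apt_upgrades_pending{arch="amd64",origin="Debian:bookworm-security"} 3
# HELP node_reboot_required Node reboot is required for software updates.
# TYPE node_reboot_required gauge
node_reboot_required 0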


```
---
- name: Monitoring probes - setup exporters running on each server
  hosts: all
  become: true

  roles:
    # https://github.com/prometheus-community/ansible/tree/main/roles/node_exporter
    # (roles always run before tasks within a play)
    - role: prometheus.prometheus.node_exporter
      # node_exporter_textfile_dir: "/var/lib/node_exporter" # default

  tasks:
    # https://github.com/ncabatoff/process-exporter
    - name: Install .deb package of process-exporter
      ansible.builtin.apt:
        deb: https://github.com/ncabatoff/process-exporter/releases/download/v0.8.3/process-exporter_0.8.3_linux_amd64.deb

    - name: Download and install apt_info.py
      ansible.builtin.get_url:
        url: https://raw.githubusercontent.com/prometheus-community/node-exporter-textfile-collector-scripts/refs/heads/master/apt_info.py
        dest: /usr/local/bin/apt_info.py
        mode: '0755'

    - name: Install apt_info.py dependencies via apt
      ansible.builtin.apt:
        name:
          - python3-prometheus-client
          - python3-apt
          - cron
        state: present
        update_cache: true

    - name: Add a cron job to run apt_info.py every 12 hours
      ansible.builtin.cron:
        name: "Run apt_info.py every 12 hours"
        minute: "0"
        hour: "*/12"
        job: "/usr/local/bin/apt_info.py > /var/lib/node_exporter/apt_info.prom"
      ignore_errors: "{{ ansible_check_mode }}"

    - name: Ensure APT auto update is enabled
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/99_auto_apt_update.conf
        content: 'APT::Periodic::Update-Package-Lists "1";'
        owner: root
        group: root
        mode: '0644'
```