https://cybernews.com/personal-data-leak-check/
Thursday, February 18, 2021
Wednesday, February 17, 2021
Run an Ansible playbook on an AWS target with SSM (AWS Systems Manager)
AWS configuration
SSM State Manager : Association Parameters
documentParameters with an archive (zip) containing multiple yml files :
{ "InstallDependencies":"False", "SourceType":"S3", "SourceInfo":"{\"path\":\"https://name_of_bucket_hosting_sources.s3-eu-west-42.amazonaws.com/prefix_key/archive.zip\"}", "PlaybookFile":"main.yml" }
documentParameters with only one yml file :
{ "InstallDependencies":"False", "SourceType":"S3", "SourceInfo":"{\"path\":\"https://name_of_bucket_hosting_sources.s3-eu-west-42.amazonaws.com/prefix_key/playbook.yml\"}", "PlaybookFile":"playbook.yml" }
sourceInfo :
{ "name": "AWS-ApplyAnsiblePlaybooks" }
* Association Target
Depending on where you want to run the playbook, select what's appropriate
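Because the SourceInfo value is itself JSON escaped inside a JSON string, it is easy to produce invalid documentParameters. A minimal sanity check before creating the association (a sketch only; bucket, region and keys are the placeholders used above):

```shell
# Validate both JSON levels of the documentParameters: the outer object,
# and the nested (escaped) SourceInfo string.
params='{ "InstallDependencies":"False", "SourceType":"S3", "SourceInfo":"{\"path\":\"https://name_of_bucket_hosting_sources.s3-eu-west-42.amazonaws.com/prefix_key/archive.zip\"}", "PlaybookFile":"main.yml" }'

result=$(echo "$params" | python3 -c 'import json,sys; p=json.load(sys.stdin); json.loads(p["SourceInfo"]); print("valid")')
echo "$result"
```

If either level is malformed, the python call fails instead of printing "valid".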
Ansible playbook
Example : daily export from an EC2 instance directory to an S3 bucket
- hosts: all        # play header added so the snippet is a complete playbook
  become: yes
  become_method: sudo
  tasks:
    - name: Find zips in /path/to/data/*.zip older than 7d
      find:
        paths: /path/to/data/
        patterns: '*.zip'
        age: 7d
      register: files_to_delete

    - name: Remove zips in /path/to/data/ older than 7d
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ files_to_delete.files }}"

    - name: Upload content of /path/to/data/ directory, omitting structure-*.zip files
      community.aws.s3_sync:
        bucket: target-s3-share-name
        key_prefix: s3-prefix-dir-name/subdirectory/
        file_root: /path/to/data/
        include: "*.zip"
        exclude: "structure-*.zip"
        delete: no  # if set to yes, removes remote files that exist in the bucket but are not present in the file root

    - name: Upload content of /path/to/data/ directory
      community.aws.s3_sync:
        bucket: target-s3-share-name
        key_prefix: s3-prefix-dir-name/subdirectory/
        file_root: /path/to/data/
        include: "*"
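The find/delete step can be previewed from the shell: GNU find's `-mtime +7` matches what the playbook's `age: 7d` selects. Since /path/to/data is a placeholder, this sketch demonstrates on a throwaway directory (assumes GNU coreutils, e.g. Linux):

```shell
# Preview which zips the playbook's find task (age: 7d) would remove.
DATA_DIR="$(mktemp -d)"
touch -d '10 days ago' "$DATA_DIR/old.zip"   # older than 7 days: would be removed
touch "$DATA_DIR/new.zip"                    # recent: kept
MATCHED="$(find "$DATA_DIR" -name '*.zip' -mtime +7)"
echo "$MATCHED"
```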
References :
Tuesday, February 16, 2021
Atlassian thread dumps
Install the Thready app to make sure that your threads have a meaningful name.
Then use the following scripts :
https://bitbucket.org/atlassianlabs/atlassian-support/src/master/
Friday, February 12, 2021
Jira and Confluence DB setup (PostgreSQL)
Atlassian Confluence
https://confluence.atlassian.com/doc/configuring-a-datasource-connection-937166084.html
postgres=# CREATE USER confluence;
CREATE ROLE
postgres=# ALTER USER confluence with PASSWORD 'confluencepwd';
ALTER ROLE
postgres=# CREATE DATABASE confluencedb WITH OWNER = confluence ENCODING 'UNICODE' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
CREATE DATABASE
postgres=# \q
Atlassian Jira
# start PostgreSQL (macOS / Homebrew)
pg_ctl -D /usr/local/var/postgres start && brew services start postgresql

psql postgres
CREATE ROLE otrs WITH LOGIN PASSWORD 'otrs';
ALTER ROLE otrs CREATEDB;
\q

# create user and db
psql postgres -U otrs
CREATE DATABASE otrs;
GRANT ALL PRIVILEGES ON DATABASE otrs TO otrs;

# show databases
\list

# import dump
psql -h localhost -d otrs -U otrs -f /Users/tokamak/Desktop/IN/MISSIONS/TOTAL/OTRS/Files/psql_otrs_dump.sql

# show all db sizes
SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;

# create jiradb and user
CREATE USER jiradbuser WITH PASSWORD 'jiraSQL';
CREATE DATABASE jiradb WITH ENCODING 'UNICODE' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
GRANT ALL PRIVILEGES ON DATABASE jiradb TO jiradbuser;
Thursday, January 21, 2021
Opsgenie webinar / resources
Opsgenie is a tool that filters and routes monitoring-triggered alerts (Nagios, AWS SNS, Datadog, ...) to specific channels (SMS, phone call, Slack, Jira, ...).
Main features on top of this :
- on-call schedules (time-table of who's on call)
- centralization of alert / incident resolution
- third-party integrations with 100+ tools
Opsgenie Learning Center : https://docs.opsgenie.com/
[video] Opsgenie : "What do we do?" https://www.youtube.com/watch?v=yphtZ9z2TtA&feature=youtu.be
[video] Opsgenie: "First Look" https://www.youtube.com/watch?v=pyM2dROKn6g
Opsgenie Pricing : https://www.atlassian.com/software/opsgenie/pricing
Implementing Nagios-to-Opsgenie heartbeats :
- basic demo : https://www.youtube.com/watch?v=wsN2E_ZHlkE&feature=youtu.be
- https://docs.opsgenie.com/docs/monitoring-nagios
- https://docs.opsgenie.com/docs/heartbeat-monitoring
- https://docs.opsgenie.com/docs/heartbeat-api
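As documented in the heartbeat-api link above, a heartbeat is kept alive by periodically calling its ping endpoint. A sketch of such a ping (API key and heartbeat name are placeholders; the command is only printed here, drop the echo to actually send it, e.g. from cron):

```shell
# Build the Opsgenie heartbeat ping command (v2 API, GenieKey auth).
GENIE_KEY="00000000-placeholder-key"
HEARTBEAT_NAME="nagios-prod"
PING_CMD="curl -s -X GET https://api.opsgenie.com/v2/heartbeats/${HEARTBEAT_NAME}/ping -H 'Authorization: GenieKey ${GENIE_KEY}'"
echo "$PING_CMD"
```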
Friday, December 18, 2020
Webinar notes : General Assembly / Instagram
Add new lines in description texts
https://apps4lifehost.com/Instagram/CaptionMaker.html
Schedule (free plans)
- https://later.com/pricing/ 30 / month
- https://www.planoly.com/pricing 30 / month
- Keyword Tool : https://keywordtool.io/
- Instagram Tags (app ?)
- http://www.instagramtags.com/
- Hashtags (app ?)
- Hashtagify (app ?)
Tuesday, December 15, 2020
opinion-size-age-shape-colour-origin-material-purpose Noun.
From the book The Elements of Eloquence: How to Turn the Perfect English Phrase.
Adjectives, writes the author, professional stickler Mark Forsyth, “absolutely have to be in this order:
opinion-size-age-shape-colour-origin-material-purpose Noun.
So you can have a lovely little old rectangular green French silver whittling knife. But if you mess with that order in the slightest you’ll sound like a maniac.”
https://getpocket.com/explore/item/how-non-english-speakers-are-taught-this-crazy-english-grammar-rule-you-know-but-have-never-heard-of
Tuesday, December 1, 2020
Ansible FQCN for module names (>= 2.10)
Starting with Ansible 2.10, using the FQCN (Fully Qualified Collection Name) of each module is the recommended practice.
This might become mandatory in a future release.
To see the redirection from the default / previous / still-working short name to the FQCN, run the playbook in verbose mode :
# ansible-playbook deploy.yaml -vv
redirecting (type:module) ansible.builtin.timezone to community.general.timezone
The redirection above is for the timezone module; the same applies to other modules such as helm (example from the Montreal Ansible meetup on 30-Sept-2020).
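A rough heuristic sketch for spotting short (non-FQCN) module names in a playbook: grep for task-level keys that contain no dot. The sample playbook below is written to a temp file for illustration; real playbooks and indentation will vary, so treat this as a starting point only.

```shell
# Flag 6-space-indented task keys with no dot (i.e. not an FQCN).
PLAYBOOK="$(mktemp)"
cat > "$PLAYBOOK" <<'EOF'
- hosts: all
  tasks:
    - name: set timezone (short name, pre-2.10 style)
      timezone:
        name: America/Montreal
    - name: set timezone (FQCN)
      community.general.timezone:
        name: America/Montreal
EOF
SHORT_MODULES="$(grep -nE '^ {6}[a-z0-9_]+:' "$PLAYBOOK")"
echo "$SHORT_MODULES"
```

Only the short `timezone:` key is reported; the FQCN form is skipped because of the dots.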
Wednesday, November 25, 2020
OpsGenie : AWS SNS message to Jira Description Wiki markup (+ links to S3 logs and SSM output)
If you're using the Opsgenie AWS SNS integration and want to publish to Jira, you can for example use the following markup to present the data in a slightly better way :
In this use-case, I'm using the SNS channel to publish the outputs of an AWS Systems Manager (SSM) run command that also writes its output to an S3 bucket, so the extraction below provides direct links to the S3 logs and to the SSM run-command history.
At the end, the message received from the SNS channel is copied "raw".
In Opsgenie, in the specific Amazon SNS integration (Incoming Amazon SNS), under Alert Fields, you can for example modify the "Description" field so that it transforms the received Message like this :
h3. Details
|| AWS region | {{ TopicArn.extract(/arn:aws:sns:([^:]*):.*/) }} |
|| Status | {{ Message.extract(/.*"status":"([^"]*)".*/) }} |
|| Instance ID | {{ Message.extract(/.*"instanceId":"([^"]*)".*/) }} [(aws link)|https://{{ TopicArn.extract(/arn:aws:sns:([^:]*):.*/) }}.console.aws.amazon.com/ec2/v2/home?region={{ TopicArn.extract(/arn:aws:sns:([^:]*):.*/) }}#InstanceDetails:instanceId={{ Message.extract(/.*"instanceId":"([^"]*)".*/) }}]|
|| Command ID | {{ Message.extract(/.*"commandId":"([^"]*)".*/) }} [(aws cmd)|https://console.aws.amazon.com/systems-manager/run-command/{{ Message.extract(/.*"commandId":"([^"]*)".*/) }}] [(s3 logs)|https://console.aws.amazon.com/s3/buckets/ssm-output/ssm-log/{{ Message.extract(/.*"commandId":"([^"]*)".*/) }}/{{ Message.extract(/.*"instanceId":"([^"]*)".*/) }}/?region={{ TopicArn.extract(/arn:aws:sns:([^:]*):.*/) }}&showversions=false ]|
|| documentName | {{ Message.extract(/.*"documentName":"([^"]*)".*/) }} |
|| requestedDateTime | {{ Message.extract(/.*"requestedDateTime":"([^"]*)".*/) }} |
|| eventTime | {{ Message.extract(/.*"eventTime":"([^"]*)".*/) }} |
h3. Opsgenie info
|| EventType | {{eventType}} |
|| Timestamp (opsgenie) | {{Timestamp}}|
|| Tags | {{tags}} |
|| TopicArn | {{TopicArn}} |
|| Actions | {{actions}} |
h3. Original Message (raw):
{code}
{{Message}}
{code}
Nb: this might only be available in certain OpsGenie subscriptions unfortunately :-(
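As a side note, the {{ Message.extract(/.../) }} calls above are plain regex captures, so the patterns can be debugged locally with sed against a sample message (the message body below is illustrative, not a real SNS payload):

```shell
# Reproduce the Opsgenie extract() regexes with sed on a sample SSM message.
MSG='{"commandId":"abc-123","instanceId":"i-0abcd1234","status":"Success","documentName":"AWS-ApplyAnsiblePlaybooks"}'
STATUS=$(echo "$MSG" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
INSTANCE=$(echo "$MSG" | sed -n 's/.*"instanceId":"\([^"]*\)".*/\1/p')
COMMAND_ID=$(echo "$MSG" | sed -n 's/.*"commandId":"\([^"]*\)".*/\1/p')
echo "$STATUS $INSTANCE $COMMAND_ID"
```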
Tuesday, November 3, 2020
GitOps : tracking Nagios configuration modifications with git
Context :
I needed all my Nagios files on my computer to be able to run some Python scripts, to figure out what refactoring needed to be done and to identify gaps in the configuration I inherited.
I took that opportunity to version all our Nagios configuration files, with a git repository configured at the /usr/local/nagios/etc level.
That proved useful to gain some confidence that we're not going to lose anything.
Initial idea was taken from :
Script to automatically commit changes done in Nagios and push them to the central repo.
NB : still a few things to investigate, but ...
auto-git-commit-push.sh
#!/bin/sh
cd /usr/local/nagios/etc \
&& /bin/git pull \
&& /bin/git add -A \
&& /bin/git commit -m "updated nagios dynamic files $(date) -- automatic commit" \
&& /bin/git push origin master \
&& /bin/git pull \
&& if grep -qr '<<<<<<<' . ; then grep -lr '<<<<<<<' . | xargs /bin/git checkout --ours ; "$0" ; fi
# if the final pull left conflict markers, keep our side and re-run the script
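To run this periodically, a crontab entry along these lines could be used (path, interval and log destination are assumptions, adjust to taste):

```crontab
# hypothetical crontab entry: auto-commit nagios config changes every 15 minutes
*/15 * * * * /usr/local/nagios/etc/auto-git-commit-push.sh >> /var/log/nagios-git-sync.log 2>&1
```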