Talk:Meza/Common Meza Test Environment (CMTE)
subscribe, clean, and resubscribe
sudo subscription-manager remove --all
sudo subscription-manager unregister
sudo subscription-manager clean
sudo rm -rf /var/cache/yum/*
sudo subscription-manager register
sudo subscription-manager attach --auto
sudo subscription-manager refresh
sudo subscription-manager identity
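For convenience, the steps above can be wrapped in a small shell function (a sketch only: the commands require root and a valid Red Hat subscription, and nothing runs until the function is actually invoked):

```shell
# Sketch: wrap the clean-and-resubscribe sequence in one function.
# The commands need root and an active Red Hat subscription; defining
# the function does not execute anything.
resubscribe() {
    sudo subscription-manager remove --all
    sudo subscription-manager unregister
    sudo subscription-manager clean
    sudo rm -rf /var/cache/yum/*
    sudo subscription-manager register
    sudo subscription-manager attach --auto
    sudo subscription-manager refresh
    sudo subscription-manager identity   # confirm the new registration
}
```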
Elasticsearch and FIPS mode
As of 2023-07-09, Meza does not support FIPS mode because of an Elasticsearch incompatibility.
We are working to solve this problem. Current efforts are based on guidance from https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#password-hashing-settings which recommends setting xpack.security.fips_mode.enabled to true in elasticsearch.yml
More soon Revansx (talk) 14:54, 9 July 2023 (UTC)
Update 2023-07-09
Found some good insights here: https://discuss.elastic.co/t/issues-trying-to-enable-fips-140-2-on-centos-8/300505
specifically, a security section for elasticsearch.yml:
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don't have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings
#
# Some typical security settings are:
# xpack.security.enabled: true
# xpack.security.http.ssl.enabled: true
# xpack.security.http.ssl.key: /etc/elasticsearch/ssl/http-key.key
# xpack.security.http.ssl.certificate: /etc/elasticsearch/ssl/http-cert.crt
#
# However, recall that Meza (when deployed as a monolith) runs all services (like Elasticsearch)
# behind an SSL-terminating load balancer/proxy. This means that the Elasticsearch service is
# not accessible to the network as such.
#
# We do need Elasticsearch to work in FIPS mode, so we need the following security settings per
# https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#password-hashing-settings
# Note that the settings below only tell Elasticsearch to avoid non-FIPS-approved algorithms.
# They do not configure the underlying JVM to run in FIPS mode; that must be addressed in the JVM config separately.
# Ref1: https://discuss.elastic.co/t/issues-trying-to-enable-fips-140-2-on-centos-8/300505
# Ref2: https://www.elastic.co/support/matrix#matrix_jvm
#
# Require only FIPS-approved algorithms
xpack.security.fips_mode.enabled: true
xpack.security.authc.password_hashing.algorithm: pbkdf2_stretch
and the user's comments that:
- Simply setting xpack.security.fips_mode.enabled: true in elasticsearch.yml only tells Elasticsearch to avoid non-FIPS approved algorithms. It does not configure the underlying JVM to run in FIPS mode.
and
- The only supported JVM is Oracle's JVM with the BouncyCastle FIPS provider per: https://www.elastic.co/support/matrix#matrix_jvm
more soon Revansx (talk) 16:51, 9 July 2023 (UTC)
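For the JVM side, Elastic's FIPS 140-2 guidance describes registering the BouncyCastle FIPS providers ahead of the defaults in the JVM's java.security file. A hedged sketch only: the provider order, the third entry, and the exact class names should be verified against the Elastic FIPS docs and the BCFIPS version in use.

```properties
# Sketch of java.security provider entries for a FIPS-enabled JVM
# (verify names and ordering against the Elastic FIPS 140-2 docs)
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=sun.security.provider.Sun
```

The BCFIPS jars must also be on the JVM's class path for these entries to resolve.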
Workaround to install Elasticsearch in FIPS mode
Found that sudo dnf install elasticsearch fails with the error:
package elasticsearch-0:6.8.23-1.noarch does not verify: no digest
It did download the RPM before it failed, so I was able to find the Elasticsearch RPM file with:
sudo find / -name '*elasticsearch-6.8*.rpm'
which found elasticsearch-6.8.23.rpm in /var/cache/dnf/elasticsearch-45849848dc92ff76/packages/
and so I was then able to install it directly with rpm:
sudo rpm -ivh --nodigest --nofiledigest /var/cache/dnf/elasticsearch-45849848dc92ff76/packages/elasticsearch-6.8.23.rpm
- -i tells rpm to install the specified package(s). If the package is not already installed, it will be installed on the system.
- -v enables verbose output, providing more detailed information about the installation process.
- -h displays hash marks (#) to indicate the progress of the installation.
- --nodigest tells RPM not to verify the package's header digest. The header digest is a checksum of the package metadata; by disabling this check, RPM skips the verification process for the header.
- --nofiledigest instructs RPM not to verify the file digest of each file within the package. The file digest is a checksum of the individual files contained in the package; by disabling this check, RPM skips the verification process for each file.
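Incidentally, the glob passed to find is safest single-quoted, so the shell cannot expand it against the current directory before find sees it. A small self-contained illustration using a scratch directory (the paths here are made up for the demo; the real cache lives under /var/cache/dnf):

```shell
# Illustration: quote the -name pattern so find, not the shell, does the matching.
tmp=$(mktemp -d)
mkdir -p "$tmp/packages"
touch "$tmp/packages/elasticsearch-6.8.23.rpm"
found=$(find "$tmp" -name '*elasticsearch-6.8*.rpm')   # quoted glob
echo "$found"
```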
/Rich Revansx (talk) 18:47, 9 July 2023 (UTC)
HAProxy deploy error
After carefully setting up the CMTE, my first deploy errored with:
TASK [haproxy : ensure haproxy is running (and enable it at boot)] *************
fatal: [localhost]: FAILED! => {
    "changed": false
}

MSG:

Unable to start service haproxy: Job for haproxy.service failed because the control process exited with error code. See "systemctl status haproxy.service" and "journalctl -xe" for details.

PLAY RECAP *********************************************************************
localhost : ok=126 changed=48 unreachable=0 failed=1 skipped=61 rescued=0 ignored=0
Service status reveals that there is a problem with the cert and/or key:
[userx@localhost opt]$ systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2023-12-24 00:12:46 EST; 10h ago
  Process: 10828 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $CFGDIR -c -q $OPTIONS (code=exited, status=1/FAILURE)

Dec 24 00:12:46 localhost.localdomain systemd[1]: Starting HAProxy Load Balancer...
Dec 24 00:12:46 localhost.localdomain haproxy[10828]: [ALERT] 357/001246 (10828) : parsing [/etc/haproxy/haproxy.cfg:67] : 'bind *:443' :
Dec 24 00:12:46 localhost.localdomain haproxy[10828]: unable to load SSL private key from PEM file '/etc/haproxy/certs/meza.crt'.
Dec 24 00:12:46 localhost.localdomain haproxy[10828]: unable to load SSL certificate from PEM file '/etc/haproxy/certs/meza.key'.
Dec 24 00:12:46 localhost.localdomain haproxy[10828]: [ALERT] 357/001246 (10828) : Error(s) found in configuration file : /etc/haproxy/hapr>
Dec 24 00:12:46 localhost.localdomain haproxy[10828]: [ALERT] 357/001246 (10828) : Fatal errors found in configuration.
Dec 24 00:12:46 localhost.localdomain systemd[1]: haproxy.service: Control process exited, code=exited status=1
Dec 24 00:12:46 localhost.localdomain systemd[1]: haproxy.service: Failed with result 'exit-code'.
Dec 24 00:12:46 localhost.localdomain systemd[1]: Failed to start HAProxy Load Balancer.
If I manually change the HAProxy config file on line 67 from
bind *:443 ssl crt /etc/haproxy/certs/
to
bind *:443 ssl crt /etc/haproxy/certs/meza.pem
Then I can start HAProxy manually, and successfully, with: sudo systemctl start haproxy && sudo systemctl status haproxy
So, making the change in the HAProxy template (sudo vim /opt/meza/src/roles/haproxy/templates/haproxy.cfg.j2) allows me to deploy successfully and find the next issue (Semantic Drilldown). Greg Rundlett (talk) 16:35, 24 December 2023 (UTC)
- Yep. I've got that change made too and just need to commit it to the repo. I'll reply here when I do. More soon. Revansx (talk) 00:28, 26 December 2023 (UTC)
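For context, HAProxy expects the file named by crt to contain the certificate followed by the private key in a single PEM file. A toy sketch of building such a combined file, using dummy PEM bodies in a scratch directory (real Meza cert paths and contents differ):

```shell
# Toy sketch: HAProxy's "crt" wants cert + key concatenated into one PEM.
# Dummy contents and a scratch dir; illustrative only.
tmp=$(mktemp -d)
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----' > "$tmp/meza.crt"
printf '%s\n' '-----BEGIN PRIVATE KEY-----' 'MIIE...' '-----END PRIVATE KEY-----' > "$tmp/meza.key"
cat "$tmp/meza.crt" "$tmp/meza.key" > "$tmp/meza.pem"
grep -c -- '-----BEGIN' "$tmp/meza.pem"   # 2: one cert block, one key block
```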
Semantic Drilldown error
When attempting sudo meza deploy monolith, I get the following failure:
failed: [localhost] (item={'name': 'SemanticDrilldown', 'repo': 'https://github.com/SemanticMediaWiki/SemanticDrilldown.git', 'version': 'master', 'legacy_load': True}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": {
        "legacy_load": true,
        "name": "SemanticDrilldown",
        "repo": "https://github.com/SemanticMediaWiki/SemanticDrilldown.git",
        "version": "master"
    }
}

MSG:

Failed to init/update submodules: Submodule 'build' (git@github.com:gesinn-it-pub/docker-compose-ci.git) registered for path 'build'
Cloning into '/opt/htdocs/mediawiki/extensions/SemanticDrilldown/build'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
fatal: clone of 'git@github.com:gesinn-it-pub/docker-compose-ci.git' into submodule path '/opt/htdocs/mediawiki/extensions/SemanticDrilldown/build' failed
Failed to clone 'build'. Retry scheduled
Cloning into '/opt/htdocs/mediawiki/extensions/SemanticDrilldown/build'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
fatal: clone of 'git@github.com:gesinn-it-pub/docker-compose-ci.git' into submodule path '/opt/htdocs/mediawiki/extensions/SemanticDrilldown/build' failed
Failed to clone 'build' a second time, aborting
Greg Rundlett (talk) 16:58, 24 December 2023 (UTC)
- I opened an issue in the CI project https://github.com/gesinn-it-pub/docker-compose-ci/issues/1 Greg Rundlett (talk) 17:11, 24 December 2023 (UTC)
- Thanks for submitting the issue. I think I'll just commit an update that has it commented out and add it back when the issue is resolved. Revansx (talk) 00:31, 26 December 2023 (UTC)
- On further investigation and testing, I moved the issue to the SemanticDrilldown project; submitting a patch and pull request https://github.com/SemanticMediaWiki/SemanticDrilldown/pull/75 Greg Rundlett (talk) 21:51, 29 December 2023 (UTC)
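Until the upstream fix lands, one local workaround for the "Host key verification failed" error above is to have git rewrite SSH-style GitHub URLs to anonymous HTTPS, so the submodule clones without SSH keys. A sketch, sandboxed into a throwaway HOME so it does not touch your real ~/.gitconfig (drop the HOME override to apply it for real):

```shell
# Sketch: rewrite git@github.com: URLs to https://github.com/ for all
# git operations. Sandboxed in a temp HOME for this demo.
tmp=$(mktemp -d)
export HOME="$tmp"
git config --global url.https://github.com/.insteadOf git@github.com:
git config --global --get url.https://github.com/.insteadOf
```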
Pre-Meza Rocky Linux 8 Repos
appstream      Rocky Linux 8 - AppStream
baseos         Rocky Linux 8 - BaseOS
droplet-agent  DigitalOcean Droplet Agent
extras         Rocky Linux 8 - Extras
How to check for packages from other repos:
dnf list installed | grep -vE 'AppStream|appstream|baseos|anaconda|droplet-agent'
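A toy demonstration of that filter, using canned "dnf list installed"-style lines instead of a real dnf run, so you can see what survives:

```shell
# Demonstration: only packages whose lines mention none of the stock
# repo names make it through the grep -vE filter.
sample='bash.x86_64 4.4.20 @baseos
git.x86_64 2.39 @appstream
droplet-agent.x86_64 1.2 @droplet-agent
elasticsearch.noarch 6.8.23 @elasticsearch'
extra=$(printf '%s\n' "$sample" | grep -vE 'AppStream|appstream|baseos|anaconda|droplet-agent')
echo "$extra"   # only the elasticsearch line survives
```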