action #160685
[qe-core] End to end testing of Databases using NFS
Status: open · Done: 0%
Updated by szarate 6 months ago
- Related to action #115196: [qe-core] Prepare for ALP - Schedule Databases testsuite for ALP added
Updated by acarvajal 6 months ago
We need a reproducer for this, and so far I am not sure we have one.
Out of the customers affected and listed in the ticket, the one that caught my eye was Walgreens with their 9-node HANA Scale Out database.
HANA Scale Out is a configuration in which the database is spread across many nodes: some tables live in the memory of one node and others on other nodes, but any given table always resides on exactly one node. File system backing is used mainly for database logs (e.g. redo and System Replication logs) and to store the data when the database is shut down. Usually, in an N-node Scale Out setup, N-1 nodes hold the actual data, while the remaining node works as a hot spare in case any of the nodes goes down. For this, all nodes need access to the DB files of the other nodes. AFAIU there are many ways to share the files between nodes, including NFS. For example, this Scale Out guide from Amazon has instructions to do so using their NFS solution: https://docs.aws.amazon.com/sap/latest/sap-hana/fsx-host-scaleout.html
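For reference, sharing the HANA filesystems over NFS usually comes down to a few mount entries on each node. A minimal /etc/fstab sketch, assuming a hypothetical NFS server name and export layout (both are assumptions, not from the ticket):

```
# Hypothetical example: HANA filesystems served from an NFS appliance.
# Server name and export paths are illustrative only.
nfsserver:/export/hana/data    /hana/data    nfs  rw,hard,timeo=600,vers=4.1  0 0
nfsserver:/export/hana/log     /hana/log     nfs  rw,hard,timeo=600,vers=4.1  0 0
nfsserver:/export/hana/shared  /hana/shared  nfs  rw,hard,timeo=600,vers=4.1  0 0
```

In a real Scale Out setup the exact mount options come from the storage vendor's HANA guide; the point here is only that every node sees the same exports.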
I'm guessing Walgreens' 9-node installation was probably a set of 3+1 HANA Scale Out installations with system replication (8 nodes) plus a majority maker (9th node), but I could be mistaken. It could also have been an 8+1 HANA Scale Out installation without system replication.
In any case, it's a complicated scenario only to test NFS regressions.
I wonder if installing a single-node HANA (no Scale Out, no Scale Up), with the file systems (/hana/data, /hana/log, /hana/shared and /usr/sap//home) mounted over NFS, updating the system to the faulty kernel version, and stopping and starting the database several times would be enough to certify it is working. This assumes the faulty kernel goes onto the HANA node ... as I'm not sure whether the issue was on the NFS server side (I'm guessing Walgreens' installation, which is in the cloud, uses a cloud service for NFS instead of a box with SLES, but again, not sure).
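Before running such a stop/start loop it would be worth verifying that the filesystems really are NFS-backed, otherwise the test proves nothing. A minimal sketch (the mount point list is an example; adjust to the actual setup):

```shell
#!/bin/bash
# Sketch: sanity-check that the HANA mount points are NFS-backed before
# running the stop/start regression loop. Paths are examples only.
is_nfs() {
    # stat -f -c %T prints the filesystem type in human-readable form
    # (e.g. "nfs", "ext2/ext3", "tmpfs"); fail if the path doesn't exist.
    local fstype
    fstype=$(stat -f -c %T "$1" 2>/dev/null) || return 2
    [ "$fstype" = "nfs" ]
}

for mp in /hana/data /hana/log /hana/shared; do
    if is_nfs "$mp"; then
        echo "$mp is NFS-backed"
    else
        echo "WARNING: $mp is missing or not on NFS" >&2
    fi
done
```

The stop/start cycling itself would then just wrap the usual `HDB stop` / `HDB start` commands (as the sidadm user) in a loop around this check.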
Does anybody have access to the faulty kernel? If so, I can prepare a 12-SP5 system as described above with NFS backing and we can do a quick test to see if it reproduces the issue.
Updated by szarate 6 months ago
- Related to coordination #109572: [qe-core][epic] MariaDB Galera Testing added
Updated by slo-gin 5 months ago
This ticket was set to High priority but was not updated within the SLO period. Please consider picking up this ticket or just set the ticket to the next lower priority.
Updated by szarate 5 months ago · Edited
PS-9306: The following script will create thousands of tables (NUM_TABLES, 8000 by default) for MySQL; maybe we can do something similar for postgres.
#!/bin/bash
# Default MySQL version
DEFAULT_VERSION="8.0.38"

# Parse arguments for MySQL version
if [ -z "$1" ]; then
    VERSION="$DEFAULT_VERSION"
else
    VERSION="$1"
fi

# Validate the version input
case "$VERSION" in
    "8.0.38" | "8.4.1" | "9.0.0")
        ;;
    *)
        echo "Error: Invalid MySQL version. Supported versions are 8.0.38, 8.4.1, or 9.0.0."
        exit 1
        ;;
esac

# MySQL connection details
MYSQL_HOST="127.0.0.1"
MYSQL_PORT="3306"
MYSQL_USER="root"
MYSQL_PASSWORD="mysql"
MYSQL_DATABASE="test"

# Number of tables to create
NUM_TABLES=8000
THREADS=16

# Start MySQL Docker container with sudo (remove any leftover container first)
echo "Starting MySQL Docker container mysql-$VERSION..."
sudo docker stop mysql-$VERSION 2>/dev/null
sudo docker rm mysql-$VERSION 2>/dev/null
sudo docker run --name mysql-$VERSION -p 3306:3306 -p 3060:3060 \
    -e MYSQL_ROOT_HOST='%' -e MYSQL_ROOT_PASSWORD="$MYSQL_PASSWORD" \
    -d mysql:$VERSION --log-error-verbosity=3

# Wait for MySQL to start up
echo "Waiting for MySQL to initialize..."
sleep 10

# Check if the MySQL container is running
if ! sudo docker ps | grep -q "mysql-$VERSION"; then
    echo "Error: MySQL Docker container failed to start."
    sudo docker logs mysql-$VERSION
    exit 1
fi

# MySQL command to execute using the Docker MySQL client with sudo
MYSQL_CMD="sudo docker exec -i mysql-$VERSION mysql -u$MYSQL_USER -p$MYSQL_PASSWORD"

# Check MySQL connection
echo "Checking MySQL connection..."
if ! echo "SELECT 1;" | $MYSQL_CMD > /dev/null 2>&1; then
    echo "Error: Unable to connect to MySQL. Please check your connection details."
    sudo docker logs mysql-$VERSION
    exit 1
fi

# Create database if it doesn't exist
echo "Creating database if it doesn't exist..."
echo "CREATE DATABASE IF NOT EXISTS $MYSQL_DATABASE;" | $MYSQL_CMD

# Use the created or existing database
MYSQL_CMD="$MYSQL_CMD $MYSQL_DATABASE"

# Create a stored procedure to create tables in a range
echo "Creating stored procedure to create tables in a range..."
echo "
DELIMITER //
CREATE PROCEDURE create_tables(start_index INT, end_index INT)
BEGIN
    DECLARE i INT;
    SET i = start_index;
    WHILE i <= end_index DO
        SET @table_name = CONCAT('table_', i);
        SET @create_stmt = CONCAT('CREATE TABLE IF NOT EXISTS ', @table_name, ' (id INT PRIMARY KEY AUTO_INCREMENT, data VARCHAR(255));');
        PREPARE stmt FROM @create_stmt;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        SET i = i + 1;
    END WHILE;
END //
DELIMITER ;
" | $MYSQL_CMD

# Function to call the stored procedure with a range of table indices
call_procedure() {
    local start_index=$1
    local end_index=$2
    echo "CALL create_tables($start_index, $end_index);" | $MYSQL_CMD
    echo "Called create_tables with start_index=$start_index and end_index=$end_index"
}
export -f call_procedure
export MYSQL_CMD

# Calculate the number of tables each thread will create
tables_per_thread=$((NUM_TABLES / THREADS))
remainder=$((NUM_TABLES % THREADS))

# Create the ranges for each thread and run them in parallel;
# the last thread picks up the remainder
generate_ranges() {
    for ((i = 0; i < THREADS; i++)); do
        start_index=$((i * tables_per_thread + 1))
        end_index=$(((i + 1) * tables_per_thread))
        if [ $i -eq $((THREADS - 1)) ]; then
            end_index=$((end_index + remainder))
        fi
        echo "$start_index $end_index"
    done
}
generate_ranges | parallel -j $THREADS --colsep ' ' call_procedure {1} {2}

echo "Completed creating $NUM_TABLES tables."

# Restart the MySQL Docker container and check the logs for errors
echo "Stopping MySQL Docker container mysql-$VERSION..."
sudo docker stop mysql-$VERSION
echo "Starting MySQL Docker container mysql-$VERSION..."
sudo docker start mysql-$VERSION
sleep 5
echo "Checking MySQL container logs for any errors..."
sudo docker logs mysql-$VERSION

# Clean up
sudo docker stop mysql-$VERSION
sudo docker rm mysql-$VERSION
echo "MySQL container stopped and removed successfully."
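For the postgres equivalent mentioned above, PostgreSQL needs no stored procedure at all: a generator can emit the CREATE TABLE statements and pipe them into psql. A minimal sketch (the psql connection details in the comment are assumptions, not from the ticket):

```shell
#!/bin/bash
# Sketch: emit CREATE TABLE statements for PostgreSQL. Pipe the output
# into psql, e.g.:
#   ./gen_tables.sh 8000 | psql -h 127.0.0.1 -U postgres test
NUM_TABLES="${1:-8000}"

gen_tables() {
    local n="$1" i
    for ((i = 1; i <= n; i++)); do
        # SERIAL is PostgreSQL's rough equivalent of MySQL's AUTO_INCREMENT
        echo "CREATE TABLE IF NOT EXISTS table_${i} (id SERIAL PRIMARY KEY, data VARCHAR(255));"
    done
}

gen_tables "$NUM_TABLES"
```

Since psql executes the piped statements in sequence, the GNU parallel machinery from the MySQL script is not strictly needed; splitting the generated SQL into chunks and running several psql sessions would be the analogous parallelisation.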
Updated by slo-gin 4 months ago
- Priority changed from High to Normal
This ticket was set to High priority but was not updated within the SLO period. The ticket will be set to the next lower priority Normal.