We are going to simulate a number of failure situations, and recover from them.
Try to replicate these scenarios on your cluster.
In this entire exercise you are working at the cluster level, not on individual nodes. This means you will need to work together as a group.
node A node B node C
(master)
+-----------------+ +-----------------+ +-----------------+
| | | | | |
| +====+ | drbd | ...... +====+ | drbd | ...... |
| | dX |.....................: dX : | dY |............: dY : |
| +====+ | | :....: +====+ | | :....: |
| | | | | |
| | | +----+ | | |
| | | plain | dZ | | | |
| | | +----+ | | |
+--------+--------+ +-------+---------+ +-------+---------+
| | |
-----------+-------------------------+--------------------------+-----------
The command gnt-instance list -o name,pnode,snodes,status is useful to see which instances you have running where.
Choose three of your existing instances to be dX, dY and dZ and if necessary move them around to look like the diagram. Commands you may need include:
gnt-instance migrate
gnt-instance replace-disks
gnt-instance move
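For example (just a sketch - the instance names debianX, debianY and debianZ and the target nodes are assumptions, so substitute whatever matches your cluster), the rearrangement might look something like:
# gnt-instance migrate debianY
(swaps primary and secondary for a DRBD instance, e.g. to make nodeB primary for dY)
# gnt-instance replace-disks -n nodeB debianX
(moves an instance's secondary disks onto nodeB)
# gnt-instance move -n nodeB debianZ
(moves a plain instance to nodeB; this shuts the instance down while its disk is copied)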
Let's imagine that we want to take down nodeB for maintenance: more RAM, a disk replacement, etc.
You probably have many instances running on your cluster by now.
We need to make sure nodeB is not hosting any instances - primary, secondary or plain.
We can use nodeA and nodeC to move instances away from nodeB.
Here's the process:
Mark the node as "drained" to prevent new instances being created on it.
Migrate any DRBD instances for which nodeB is primary over to their secondary node, so that nodeB is at most a secondary for any DRBD instance.
Move the disks of secondary DRBD instances from nodeB to another node
(if nodeA is primary for debianX, we move its secondary disks from nodeB to nodeC).
Move the plain instance off nodeB; as we will see, this requires shutting it down while its disk is copied.
Below are the commands we'll be using for each of the steps above.
command: gnt-node modify --drained=yes nodeB
check using: gnt-node list -o name,drained
command: gnt-instance migrate
We've used this command before - we have to make sure that if nodeB is primary for any instances, we migrate them to the secondary node.
In the example above, nodeB is primary for dY. Let's migrate it over to nodeC.
# gnt-instance migrate dY
After this is done, we are now in the following situation: nodeB is only running the plain instance dZ.
node A node B node C
(master)
+-----------------+ +-----------------+ +-----------------+
| | | | | |
| +====+ | drbd | ...... ...... | drbd | +====+ |
| | dX |.....................: dX : : dY :............| dY | |
| +====+ | | :....: :....: | | +====+ |
| | | | | |
| | | +----+ | | |
| | | plain | dZ | | | |
| | | +----+ | | |
+--------+--------+ +-------+---------+ +-------+---------+
| | |
-----------+-------------------------+--------------------------+-----------
command: gnt-instance replace-disks
# gnt-instance replace-disks -n nodeC debianX
If you prefer, you can let Ganeti's instance allocator choose the new secondary node for you (the dot means "use the default instance allocator"):
# gnt-instance replace-disks -I . debianX
Repeat for debianY of course.
command: gnt-instance move
Note that this will require shutting down the instance, as its disk(s) will first have to be copied to node C before it can be restarted there.
# gnt-instance move -n nodeC debianY
Instance debianY will be moved. This requires a shutdown of the instance.
Continue?
y/[n]/?: y
Fri Sep 19 14:31:44 2014 - INFO: Shutting down instance debianY on source node nodeB
Fri Sep 19 14:32:01 2014 disk/0 sent 450M, 77.2 MiB/s, 21%, ETA 21s
Fri Sep 19 14:32:37 2014 disk/0 finished receiving data
Fri Sep 19 14:32:37 2014 disk/0 finished sending data
Fri Sep 19 14:32:37 2014 - INFO: Removing the disks on the original node
Fri Sep 19 14:32:38 2014 - INFO: Starting instance debianY on node nodeC
nodeB is now ready to be shut down. Don't do this!
Instead, let's imagine our maintenance is over and nodeB is ready for use again. Remove the "drained" flag to make it able to accept instances again.
gnt-node modify --drained=no nodeB
Make sure that debianX (or whatever the name of the DRBD VM you are using is) is running (primary) on nodeB, and that debianY is secondary on nodeB, so that it looks like this:
# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node Secondary_Nodes Status
debianX nodeB.virt.nsrc.org nodeA.virt.nsrc.org running
debianY nodeC.virt.nsrc.org nodeB.virt.nsrc.org running
debianZ nodeC.virt.nsrc.org running
Work out for yourself what commands are necessary to do this. Ask for help if you need it.
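If you get stuck, here is one possible sequence. It is only a sketch: it assumes debianX is currently primary on nodeA with its secondary on nodeC, and debianY is primary on nodeC with its secondary on nodeA - adjust it to your actual layout.
# gnt-instance replace-disks -n nodeB debianX
(debianX's secondary disks move to nodeB)
# gnt-instance migrate debianX
(debianX becomes primary on nodeB, with nodeA as secondary)
# gnt-instance replace-disks -n nodeB debianY
(debianY's secondary disks move to nodeB)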
Now simulate a crash of nodeB. Note: RUN THIS ON nodeB !!!
# halt -p
# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node Secondary_Nodes Status
debianX nodeB.virt.nsrc.org nodeA.virt.nsrc.org ERROR_nodedown
debianY nodeC.virt.nsrc.org nodeB.virt.nsrc.org running
debianZ nodeC.virt.nsrc.org running
Run gnt-cluster verify (this will take a while), and look at the output.
Run gnt-node list, and look at the output, too.
As you will notice, things are quite slow. This is because Ganeti is trying to contact the ganeti-noded daemon on nodeB, and it is timing out.
If this were a production environment, we'd have to examine nodeB, and determine whether nodeB was likely to come back online soon. If not, say, because of some hardware failure, we would decide to take the node "offline", so Ganeti would stop trying to talk to it.
Let's start by marking nodeB as offline:
# gnt-node modify --offline=yes nodeB.virt.nsrc.org
Modified node nodeB.virt.nsrc.org
- master_candidate -> False
- offline -> True
It will take a little while, but from now on most commands will run faster, as Ganeti stops trying to contact the offline node.
Try running gnt-instance list and gnt-node list again.
Also re-run gnt-cluster verify.
If you attempt to migrate, you will be told:
# gnt-instance migrate debianX
Failure: prerequisites not met for this operation:
error type: wrong_state, error details:
Can't migrate, please use failover: Node is marked offline
# gnt-instance failover debianX
Hopefully you will see messages ending with:
...
Sat Jan 18 15:58:11 2014 * activating the instance's disks on target node nodeA.virt.nsrc.org
Sat Jan 18 15:58:11 2014 - WARNING: Could not prepare block device disk/0 on node nodeB.virt.nsrc.org (is_primary=False, pass=1): Node is marked offline
Sat Jan 18 15:58:11 2014 * starting the instance on the target node nodeA.virt.nsrc.org
If so, skip to the section "Confirm that the VM is now up on nodeA"
If you see this message:
Sat Jan 18 20:57:55 2014 Failover instance debianX
Sat Jan 18 20:57:55 2014 * checking disk consistency between source and target
Failure: command execution error:
Disk 0 is degraded on target node, aborting failover
... you will need to force the operation. This should normally not happen when the node is marked offline. However, if you do get this message, read the man page for gnt-instance and find the section about failover:
If you are trying to migrate instances off a dead node, this will fail. Use the --ignore-consistency option for this purpose. Note that this option can be dangerous as errors in shutting down the instance will be ignored, resulting in possibly having the instance running on two machines in parallel (on disconnected DRBD drives).
We know that nodeB is down - we halted it ourselves! In a real-world scenario, you MUST verify that nodeB really is down. Otherwise you risk ending up with two running instances of the VM (if someone force-starts it), and you will need to force a resolution.
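How you verify that depends on your environment - check the physical console or out-of-band management if you have it, and at the very least the network. A minimal sketch (the hostname is an assumption, and remember that a failed ping alone does not prove the node is dead - it may just be unreachable from where you are):
# ping -c 3 nodeB.virt.nsrc.org
# ssh nodeB.virt.nsrc.org uptime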
Re-run gnt-instance failover with the '--ignore-consistency' flag. We are in a situation that requires this (nodeB is down):
# gnt-instance failover --ignore-consistency debianX
There will be much more output this time. Pay particular attention to any warnings - these are normal, since nodeB is down and we did mark it as offline.
Sat Jan 18 21:03:15 2014 Failover instance debianX
Sat Jan 18 21:03:15 2014 * checking disk consistency between source and target
[ ... messages ... ]
Sat Jan 18 21:03:27 2014 * activating the instance's disks on target node nodeA.virt.nsrc.org
[ ... messages ... ]
Sat Jan 18 21:03:33 2014 * starting the instance on the target node nodeA.virt.nsrc.org
# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node Secondary_Nodes Status
debianX nodeA.virt.nsrc.org nodeB.virt.nsrc.org running
debianY nodeC.virt.nsrc.org nodeB.virt.nsrc.org running
debianZ nodeC.virt.nsrc.org running
Ok, let's say nodeB has been fixed.
Restart nodeB. (Depending on the class setup, you may need to ask the instructor to do this for you).
Make sure you can ping it and that you can log in to it.
We need to re-add it to the cluster. We do this using the gnt-node add --readd command on the cluster master node.
From the gnt-node man page:
In case you're readding a node after hardware failure, you can use the --readd parameter. In this case, you don't need to pass the secondary IP again, it will be reused from the cluster. Also, the drained and offline flags of the node will be cleared before re-adding it.
# gnt-node add --readd nodeB.virt.nsrc.org
[ ... question about SSH ...]
Sat Jan 18 22:09:43 2014 - INFO: Readding a node, the offline/drained flags were reset
Sat Jan 18 22:09:43 2014 - INFO: Node will be a master candidate
We're good! It could take a while to re-sync the DRBD data if a lot of disk activity (writing) has taken place on debianX, but this will happen in the background.
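If you want to watch the resync progress, here is a sketch (assuming DRBD 8.x, which exposes its status in /proc/drbd):
# gnt-instance info debianX
(run on the master node; shows the state of the instance's disks on each node)
# cat /proc/drbd
(run directly on nodeA or nodeB; shows the sync percentage)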
Inspect the node list:
# gnt-node list
Check the cluster configuration.
# gnt-cluster verify
The DRBD disks on nodeB have probably not yet been activated by the master daemon. As a result you may see some errors about your instance's disks being degraded, similar to this:
Thu Sep 18 18:52:41 2014 * Verifying node status
Thu Sep 18 18:52:41 2014 - ERROR: node nodeB: drbd minor 0 of instance debianX is not active
Thu Sep 18 18:52:41 2014 * Verifying instance status
Thu Sep 18 18:52:41 2014 - ERROR: instance debianX: disk/0 on nodeA is degraded
Thu Sep 18 18:52:41 2014 - ERROR: instance debianX: couldn't retrieve status for disk/0 on nodeB: Can't find device <DRBD8(hosts=03add4b7-d6d9-40d0-bf6e-74d1683aad49/0-93eef5d9-6b33-4c
Don't panic! This is normal, as it's possible the disks haven't been re-synchronized yet.
If so, you can use the command gnt-cluster verify-disks to fix this:
# gnt-cluster verify-disks
Submitted jobs 78
Waiting for job 78 ...
Activating disks for instance 'debianX'
Wait a few seconds, then run:
# gnt-cluster verify
When all is OK, let's try and migrate debianX back to nodeB:
# gnt-instance migrate debianX
Test that the migration has worked.
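For example (a sketch - adjust the instance name to yours):
# gnt-instance list -o name,pnode,snodes,status
(debianX should now show nodeB as its primary node)
# gnt-instance console debianX
(log in and check that the VM is behaving normally)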
Let's now imagine that the failure of nodeB wasn't temporary: we imagine that it cannot be fixed, and won't be back online for a while (it needs to be completely replaced). We could decide to remove nodeB from the cluster.
To do this:
Note: RUN THIS ON nodeB !!!
# halt -p
Mark nodeB as offline:
# gnt-node modify --offline=yes nodeB.virt.nsrc.org
Run gnt-cluster verify, and look at the output.
Sat Jan 18 21:31:56 2014 - NOTICE: 1 offline node(s) found.
We have marked nodeB as offline - let's assume it will stay down for a long time.
We decide to remove nodeB from the cluster:
# gnt-node remove nodeB.virt.nsrc.org
Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Instance debianX is still running on the node, please remove first
Ok, we are not allowed to remove nodeB, because Ganeti can see that we still have an instance (debianX) associated with it.
This is different from simply marking the node offline, as it means we are permanently getting rid of nodeB, and we need to take a decision about what to do for DRBD instances that were associated with nodeB.
First, fail over debianX to its secondary node:
# gnt-instance failover debianX
Failover will happen to image debianX. This requires a shutdown of
the instance. Continue?
y/[n]/?: y
Thu Sep 18 20:29:32 2014 Failover instance debianX
Thu Sep 18 20:29:32 2014 * checking disk consistency between source and target
Thu Sep 18 20:29:32 2014 Node nodeB.virt.nsrc.org is offline, ignoring degraded disk 0 on target node nodeA.virt.nsrc.org
Thu Sep 18 20:29:32 2014 * shutting down instance on source node
Thu Sep 18 20:29:32 2014 - WARNING: Could not shutdown instance debianX on node nodeB.virt.nsrc.org, proceeding anyway; please make sure node nodeB.virt.nsrc.org is down; error details: Node is marked offline
Thu Sep 18 20:29:32 2014 * deactivating the instance's disks on source node
Thu Sep 18 20:29:33 2014 - WARNING: Could not shutdown block device disk/0 on node nodeB.virt.nsrc.org: Node is marked offline
Thu Sep 18 20:29:33 2014 * activating the instance's disks on target node nodeA.virt.nsrc.org
Thu Sep 18 20:29:33 2014 - WARNING: Could not prepare block device disk/0 on node nodeB.virt.nsrc.org (is_primary=False, pass=1): Node is marked offline
Thu Sep 18 20:29:33 2014 * starting the instance on the target node nodeA.virt.nsrc.org
Then evacuate the secondary disks still on nodeB:
# gnt-node evacuate -s nodeB
Relocate instance(s) debianX from node(s) nodeB?
y/[n]/?: y
Thu Sep 18 20:32:37 2014 - INFO: Evacuating instances from node 'nodeB.virt.nsrc.org': debianX
Thu Sep 18 20:32:37 2014 - INFO: Instances to be moved: debianX (to nodeA.virt.nsrc.org, nodeC.virt.nsrc.org)
...
Thu Sep 18 20:32:38 2014 STEP 3/6 Allocate new storage
Thu Sep 18 20:32:38 2014 - INFO: Adding new local storage on nodeC.virt.nsrc.org for disk/0
...
Thu Sep 18 20:32:41 2014 STEP 6/6 Sync devices
Thu Sep 18 20:32:41 2014 - INFO: Waiting for instance debianX to sync disks
Thu Sep 18 20:32:41 2014 - INFO: - device disk/0: 1.20% done, 1m 55s remaining (estimated)
Thu Sep 18 20:33:41 2014 - INFO: Instance debianX's disks are in sync
All instances evacuated successfully.
Ok, check out the instance list:
# gnt-instance list -o name,pnode,snodes,status
Instance Primary_node Secondary_Nodes Status
debianX nodeA.virt.nsrc.org nodeC.virt.nsrc.org running
XXX
Perfect, nodeB is not used by any instance. We can now re-attempt to remove node nodeB from the cluster:
# gnt-node remove nodeB.virt.nsrc.org
More WARNINGs! But did it work?
# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
nodeA.virt.nsrc.org 29.1G 12.6G 995M 145M 672M 2 0
nodeC.virt.nsrc.org 29.0G 12.7G 995M 137M 680M 0 1
Yes, nodeB is gone.
Note: Ganeti will modify /etc/hosts on your remaining nodes, and remove the line for nodeB!
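You can confirm this with a quick check on nodeA or nodeC (assuming the line for nodeB simply contains the string "nodeB"):
# grep nodeB /etc/hosts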
We can restart our debianX instance, by the way! (This may have already happened if you called gnt-instance failover.)
# gnt-instance start debianX
Test that it comes up normally.
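For instance (a sketch - adjust the name), check its state and then try to ping or log in to the instance as usual:
# gnt-instance list -o name,pnode,status debianX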
Let's imagine that we need to temporarily service the cluster master (in this case, nodeA). It's rather easy. Decide first which of the other nodes will become master.
Read about master-failover: run man gnt-cluster and find the MASTER-FAILOVER section.
Then, ON THE NODE YOU PICKED, run this command:
# gnt-cluster master-failover
If everything goes well, after 5-10 seconds, the node you ran this command on is now the new master.
Test this! For example, if nodeB is your new master, run these commands on it:
Verify that the cluster IP is now on this node:
# ifconfig br-lan:0
Notice that the IP address in br-lan:0 is that of the cluster master.
This means that next time you log on using SSH using the cluster IP, you will be logged on to nodeB.
Check which node is the master (this is one of the few commands you can run on any node, not just the master)
# gnt-cluster getmaster
nodeB.virt.nsrc.org
All good!
Let's imagine a slightly more critical scenario: the crash of the master node.
Let's shut down the master node!
On nodeB (it's now our master node, remember ?)
# halt -p
The node is now down. VMs still running on other nodes are unaffected, but you are not able to make any changes (stop, start, modify or add VMs, change the cluster configuration, etc.).
Let's assume that nodeB is not coming back right now, and we need to promote a master.
You will first need to decide which of the remaining nodes will become the master. Let's pick nodeA.
To promote the slave:
Log on to the node that will become master (nodeA):
Run the following command:
# gnt-cluster master-failover
Note here that you will NOT be asked to confirm the operation!
If you have 3 or more nodes in the cluster, the operation should be as smooth as in the previous section.
On the other hand, if you only had 2 nodes in your cluster, you would have to add the --no-voting option. This is because, with one node down, there is only one node left in the cluster, and no majority election can take place.
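In that case (and only in that case) the command becomes:
# gnt-cluster master-failover --no-voting
With three or more nodes, plain gnt-cluster master-failover is what you want.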
At this point, the chosen node (nodeA) is now master. You can verify this using the gnt-cluster getmaster command.
From this point, recovering downed machines is similar to what we did in the first scenario. But to be on the safe side:
Restart nodeB, and log in to it as root.
Try to run gnt-instance list.
Even though nodeB was down while the promotion of nodeA happened, the ganeti-masterd daemon running on nodeB was informed, on startup, that nodeB was no longer the master. The above command should therefore fail with:
This is not the master node, please connect to node 'nodeA.virt.nsrc.org' and
rerun the command
Which means that nodeB is well aware that nodeA is the master now.
Once you have done this, you may find that nodeA and nodeB have different versions of the cluster database. Type the following on nodeA:
# gnt-cluster verify
...
Sat Jan 18 16:11:12 2014 - ERROR: cluster: File /var/lib/ganeti/config.data found with 2 different checksums (variant 1 on nodeB.virt.nsrc.org, nodeC.virt.nsrc.org; variant 2 on nodeA.virt.nsrc.org)
Sat Jan 18 16:11:12 2014 - ERROR: cluster: File /var/lib/ganeti/ssconf_master_node found with 2 different checksums (variant 1 on nodeB.virt.nsrc.org, nodeC.virt.nsrc.org; variant 2 on nodeA.virt.nsrc.org)
You can fix this by:
# gnt-cluster redist-conf
which pushes out the config from the current master to all the other nodes.
Re-run gnt-cluster verify to check everything is OK again.
Then to make nodeB take over the master role again, login to nodeB and run:
# gnt-cluster master-failover
For reference, here are some additional useful commands. You should try these out in a test environment before a real problem occurs.
command: gnt-node evacuate
Read the man page for gnt-node and look for the section about the evacuate subcommand.
Note: for the time being, one needs to explicitly tell the evacuate command to move away either primary (-p) or secondary (-s) instances - it won't work for both at the same time.
Assuming we have an instance with nodeB as its secondary node, and debianY running as a plain instance on nodeB.
What happens if we do:
# gnt-node evacuate -p nodeB
Relocate instance(s) debianY from node(s) nodeB?
y/[n]/?:
gnt-node evacuate has figured out that the plain instance debianY needs to be moved away. Answer y:
Fri Sep 19 14:29:45 2014 - INFO: Evacuating instances from node 'nodeB': debianY
Fri Sep 19 14:29:46 2014 - WARNING: Unable to evacuate instances debianY (Instances of type plain cannot be relocated)
Failure: command execution error:
Unable to evacuate instances debianY (Instances of type plain cannot be relocated)
Uh oh :(
What about gnt-node evacuate -s nodeB?
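Try it and see - a sketch of what to run and how to check the result (instance names and target nodes will differ on your cluster):
# gnt-node evacuate -s nodeB
# gnt-instance list -o name,pnode,snodes,status
(no instance should list nodeB as a secondary node any more)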
When a node has been marked offline for a short period of time and no other cluster changes have taken place, it is possible just to mark it online again. You could simply do the following (DON'T DO THIS NOW!):
# gnt-node modify --offline=no nodeB.virt.nsrc.org
Sat Jan 18 22:08:45 2014 - INFO: Auto-promoting node to master candidate
Sat Jan 18 22:08:45 2014 - WARNING: Transitioning node from offline to online state without using re-add. Please make sure the node is healthy!
If there is any doubt, use gnt-node add --readd instead.
It's also a good idea to do a gnt-cluster redist-conf after bringing the node back online.
Similarly, if you have a 2-node cluster and one of the nodes is down, and you reboot the single working node, the master daemon will fail to start, as it is unable to confirm that it is definitely the master. Use service ganeti status to see what is running. Then, as required:
/usr/lib/ganeti/daemon-util start ganeti-masterd --no-voting # ganeti 2.11
/usr/lib/ganeti/daemon-util start ganeti-wconfd --no-voting # ganeti 2.12
/usr/lib/ganeti/daemon-util start ganeti-luxid --no-voting
These commands will be rejected unless you also add --yes-do-it to the command lines. Do so, but only if you are sure that you need to force this node to become master. If another node is still running as master, or you later force the other node to become master too, then you could end up with a "split brain" scenario, i.e. two inconsistent masters.