Is there a resource agent for RHEL 5, 6, 7, 8 or 9 High Availability clusters that can create a bind mount of a subdirectory of a file system managed by the cluster resource manager?
Environment
- Red Hat Enterprise Linux (RHEL) 5, 6, 7, 8 or 9 with the High Availability Add On
- `pacemaker` (RHEL 6, 7, 8 or 9) or `rgmanager` (RHEL 5 or 6) resource manager
Issue
- I need to set up a bind mount in my cluster
- I have an `fs` resource in my service, and I need a subdirectory of that file system bind mounted elsewhere. How can I do this with the resource agents provided for `rgmanager`?
- Does `pacemaker` have a resource agent that can do bind mounting?
Resolution
`rgmanager`-Based Clusters
RHEL 6 with resource-agents-3.9.2-40.el6_5.10 and later
The `bind-mount` resource agent can create a bind mount as part of a service. In most cases, the bind mount source will be a subdirectory of a `fs`, `clusterfs`, or `netfs` resource within the same service. For example:
<service name="bind-example" domain="1then2" recovery="relocate">
<lvm name="lvm-clust" vg_name="clust" self_fence="1">
<fs name="ext4-clust-data1" device="/dev/clust/data1" mountpoint="/data1" fsid="1234" fstype="ext4" force_unmount="1" self_fence="1">
<bind-mount name="bind-data1-app" source="/data1/app" mountpoint="/app/data1/" fstype="none" force_unmount="1"/>
</fs>
</lvm>
</service>
This example would activate the `clust` volume group, mount `/dev/clust/data1` on `/data1`, and bind-mount its subdirectory `/data1/app` onto the alternate path `/app/data1`.
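Under the hood, the agents in this service perform roughly the following operations when the service starts (an illustrative sketch using the names from the example above; the agents also add checks and recovery logic not shown here):

```
# lvm resource: activate the clustered volume group
lvchange -a y clust

# fs resource: mount the ext4 file system
mount -t ext4 /dev/clust/data1 /data1

# bind-mount resource: bind the subdirectory onto the alternate path
mount --bind /data1/app /app/data1
```

On service stop, the operations are reversed: the bind mount is unmounted first, then the base file system, then the volume group is deactivated.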
RHEL 6 with resource-agents prior to 3.9.2-40.el6_5.10, or RHEL 5
- Although no bind-mount resource agent exists for earlier `rgmanager` versions, a custom init script managed by a `script` resource can be used instead.
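A minimal init script for this purpose might look like the following. This is a sketch only; the script name and the source and target paths are assumptions based on the earlier example, not part of the original article, and should be adjusted to match your `fs` resource's mountpoint:

```sh
#!/bin/sh
# bind-data1-app: bind-mount /data1/app onto /app/data1 (hypothetical paths)
SRC=/data1/app
DST=/app/data1

case "$1" in
    start)
        # Only bind if the target is not already a mountpoint
        mountpoint -q "$DST" || mount --bind "$SRC" "$DST"
        ;;
    stop)
        mountpoint -q "$DST" && umount "$DST"
        ;;
    status)
        # Non-zero exit if the bind mount is absent
        mountpoint -q "$DST"
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac
exit 0
```

The script would be placed on every cluster node (for example at `/etc/init.d/bind-data1-app`) and referenced with a `script` resource, e.g. `<script name="bind-data1-app" file="/etc/init.d/bind-data1-app"/>`, nested inside the `fs` resource so it starts after the base file system is mounted.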
`pacemaker`-Based Clusters
Use the `ocf:heartbeat:Filesystem` resource agent, specifying `bind` in the `options` attribute, the path to the source directory in the `device` attribute, and the path to the destination for the bind mount in the `directory` attribute. This will typically be dependent on another `ocf:heartbeat:Filesystem` resource that mounts the base file system, so either the base should come before the bind mount in a resource group, or colocation and ordering constraints should be created between them.
Example: Using a Resource Group
# ### Create the resources in the same group
# pcs resource create clustVG LVM volgrpname=clust exclusive=1 --group myGroup
# pcs resource create ext4-clust-lv1 Filesystem device=/dev/clust/lv1 directory=/mnt/lv1 fstype=ext4 --group myGroup
# pcs resource create bind-mntlv1data-appdata Filesystem device=/mnt/lv1/data directory=/app/data fstype=none options=bind --group myGroup
# ### Now see what the configuration looks like in the CIB:
# pcs cluster cib
[...]
<group id="myGroup">
<primitive class="ocf" id="clustVG" provider="heartbeat" type="LVM">
<instance_attributes id="clustVG-instance_attributes">
<nvpair id="clustVG-instance_attributes-volgrpname" name="volgrpname" value="clust"/>
<nvpair id="clustVG-instance_attributes-exclusive" name="exclusive" value="1"/>
</instance_attributes>
<operations>
<op id="clustVG-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
<primitive class="ocf" id="ext4-clust-lv1" provider="heartbeat" type="Filesystem">
<instance_attributes id="ext4-clust-lv1-instance_attributes">
<nvpair id="ext4-clust-lv1-instance_attributes-device" name="device" value="/dev/clust/lv1"/>
<nvpair id="ext4-clust-lv1-instance_attributes-directory" name="directory" value="/mnt/lv1"/>
<nvpair id="ext4-clust-lv1-instance_attributes-fstype" name="fstype" value="ext4"/>
</instance_attributes>
<operations>
<op id="ext4-clust-lv1-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
<primitive class="ocf" id="bind-mntlv1data-appdata" provider="heartbeat" type="Filesystem">
<instance_attributes id="bind-mntlv1data-appdata-instance_attributes">
<nvpair id="bind-mntlv1data-appdata-instance_attributes-device" name="device" value="/mnt/lv1/data"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-directory" name="directory" value="/app/data"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-fstype" name="fstype" value="none"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-options" name="options" value="bind"/>
</instance_attributes>
<operations>
<op id="bind-mntlv1data-appdata-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
</group>
[...]
Example: Using Constraints
# ### Create the resources
# pcs resource create clustVG LVM volgrpname=clust exclusive=1 --group myGroup
# pcs resource create ext4-clust-lv1 Filesystem device=/dev/clust/lv1 directory=/mnt/lv1 fstype=ext4 --group myGroup
# pcs resource create bind-mntlv1data-appdata Filesystem device=/mnt/lv1/data directory=/app/data fstype=none options=bind --group myGroup
# ### Now add constraints to start them together and in order
# pcs constraint colocation add ext4-clust-lv1 with clustVG
# pcs constraint colocation add bind-mntlv1data-appdata with ext4-clust-lv1
# pcs constraint order start clustVG then ext4-clust-lv1
Adding clustVG ext4-clust-lv1 (kind: Mandatory) (Options: first-action=start then-action=start)
# pcs constraint order start ext4-clust-lv1 then bind-mntlv1data-appdata
Adding ext4-clust-lv1 bind-mntlv1data-appdata (kind: Mandatory) (Options: first-action=start then-action=start)
# ### Now see what the configuration looks like in the CIB:
# pcs cluster cib
[...]
<resources>
[...]
<primitive class="ocf" id="ext4-clust-lv1" provider="heartbeat" type="Filesystem">
<instance_attributes id="ext4-clust-lv1-instance_attributes">
<nvpair id="ext4-clust-lv1-instance_attributes-device" name="device" value="/dev/clust/lv1"/>
<nvpair id="ext4-clust-lv1-instance_attributes-directory" name="directory" value="/mnt/lv1"/>
<nvpair id="ext4-clust-lv1-instance_attributes-fstype" name="fstype" value="ext4"/>
</instance_attributes>
<operations>
<op id="ext4-clust-lv1-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
<primitive class="ocf" id="clustVG" provider="heartbeat" type="LVM">
<instance_attributes id="clustVG-instance_attributes">
<nvpair id="clustVG-instance_attributes-volgrpname" name="volgrpname" value="clust"/>
<nvpair id="clustVG-instance_attributes-exclusive" name="exclusive" value="1"/>
</instance_attributes>
<operations>
<op id="clustVG-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
<primitive class="ocf" id="bind-mntlv1data-appdata" provider="heartbeat" type="Filesystem">
<instance_attributes id="bind-mntlv1data-appdata-instance_attributes">
<nvpair id="bind-mntlv1data-appdata-instance_attributes-device" name="device" value="/mnt/lv1/data"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-directory" name="directory" value="/app/data"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-fstype" name="fstype" value="none"/>
<nvpair id="bind-mntlv1data-appdata-instance_attributes-options" name="options" value="bind"/>
</instance_attributes>
<operations>
<op id="bind-mntlv1data-appdata-monitor-interval-60s" interval="60s" name="monitor"/>
</operations>
</primitive>
</resources>
<constraints>
<rsc_colocation id="colocation-ext4-clust-lv1-clustVG-INFINITY" rsc="ext4-clust-lv1" score="INFINITY" with-rsc="clustVG"/>
<rsc_colocation id="colocation-bind-mntlv1data-appdata-ext4-clust-lv1-INFINITY" rsc="bind-mntlv1data-appdata" score="INFINITY" with-rsc="ext4-clust-lv1"/>
<rsc_order first="clustVG" first-action="start" id="order-clustVG-ext4-clust-lv1-mandatory" then="ext4-clust-lv1" then-action="start"/>
<rsc_order first="ext4-clust-lv1" first-action="start" id="order-ext4-clust-lv1-bind-mntlv1data-appdata-mandatory" then="bind-mntlv1data-appdata" then-action="start"/>
</constraints>
[...]
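With either approach, once the resources are started the bind mount can be checked on the active node (illustrative commands; output will vary with your configuration):

```
# pcs status resources
# findmnt /app/data
```

`findmnt` should report `/app/data` with the source shown as the bound subdirectory of the base file system, confirming the bind mount is in place.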
Root Cause
In some cases, it may be required or desired to have a file system managed in a highly available configuration by the cluster resource manager, and to also have a subdirectory of that mountpoint "bind mounted" elsewhere (i.e., `mount -o bind /path/to/subdirectory /alternate/mountpoint`). `rgmanager` has no such resource agent prior to the `bind-mount` agent introduced in resource-agents-3.9.2-40.el6_5.10, whereas `pacemaker` can use the `ocf:heartbeat:Filesystem` agent with `bind` among its `options` to accomplish this.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.