<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://dikapediav2.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Device_Mapper_Multipath</id>
	<title>Device Mapper Multipath - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://dikapediav2.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Device_Mapper_Multipath"/>
	<link rel="alternate" type="text/html" href="https://dikapediav2.com/wiki/index.php?title=Device_Mapper_Multipath&amp;action=history"/>
	<updated>2026-05-16T15:47:27Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://dikapediav2.com/wiki/index.php?title=Device_Mapper_Multipath&amp;diff=52&amp;oldid=prev</id>
		<title>Ardika Sulistija: Created page with &quot;====What is Device Mapper Multipathing?==== ----  What is Multipath? • Multipath is a storage network design technique that allows for fault tolerance or increased throughput by providing multiple concurrent physical connections (paths) from the storage to the individual host systems.  multipathd and multipath internally use WWIDs to identify devices. WWIDs are also used as map names by default.  ———  Device Mapper Multipathing (DM-Multipath) is a native multipat...&quot;</title>
		<link rel="alternate" type="text/html" href="https://dikapediav2.com/wiki/index.php?title=Device_Mapper_Multipath&amp;diff=52&amp;oldid=prev"/>
		<updated>2024-08-21T14:47:03Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;====What is Device Mapper Multipathing?==== ----  What is Multipath? • Multipath is a storage network design technique that allows for fault tolerance or increased throughput by providing multiple concurrent physical connections (paths) from the storage to the individual host systems.  multipathd and multipath internally use WWIDs to identify devices. WWIDs are also used as map names by default.  ———  Device Mapper Multipathing (DM-Multipath) is a native multipat...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;====What is Device Mapper Multipathing?====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
What is Multipath?&lt;br /&gt;
• Multipath is a storage network design technique that allows for fault tolerance or increased throughput by providing multiple concurrent physical connections (paths) from the storage to the individual host systems.&lt;br /&gt;
&lt;br /&gt;
multipathd and multipath internally use WWIDs to identify devices. WWIDs are also used as map names by default.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Device Mapper Multipathing (DM-Multipath) is the native multipathing implementation in Linux. It can be used for redundancy and to improve performance: it aggregates (combines) the multiple I/O paths between servers and storage, creating a single device at the OS level.&lt;br /&gt;
&lt;br /&gt;
For example, say a server has two HBA cards attached to a storage controller, with a single port on each HBA card. One LUN is assigned to the server via the WWNs of both cards, so the OS detects two devices: /dev/sdb and /dev/sdc. Once Device Mapper Multipathing is installed, DM-Multipath creates a single device with a unique WWID that reroutes I/O to those two underlying devices according to the multipath configuration. When either I/O path fails, data remains accessible through the remaining path.&lt;br /&gt;
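&lt;br /&gt;
The ideas above end up in /etc/multipath.conf. A minimal sketch (hedged: the WWID shown is the one used later on this page, and the alias &amp;quot;data1&amp;quot; is a hypothetical example; by default the WWID itself is used as the map name):&lt;br /&gt;
&lt;br /&gt;
```
defaults {
    # Name maps mpatha, mpathb, ... instead of raw WWIDs
    user_friendly_names yes
}
multipaths {
    multipath {
        # Bind one specific LUN (by WWID) to a stable alias
        wwid  360000000000000000e00000000010001
        alias data1
    }
}
```
&lt;br /&gt;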
&lt;br /&gt;
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/dm_multipath/mpath_devices&lt;br /&gt;
&lt;br /&gt;
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_device_mapper_multipath/index&lt;br /&gt;
&lt;br /&gt;
https://www.learnitguide.net/2016/06/how-to-configure-multipathing-in-linux.html&lt;br /&gt;
&lt;br /&gt;
https://ubuntu.com/server/docs/device-mapper-multipathing-introduction&lt;br /&gt;
&lt;br /&gt;
http://www.datadisk.co.uk/html_docs/redhat/rh_multipathing.htm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====How to Set Up a Multipath Device====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
This was done on RHEL7 on an EC2 Nitro instance type, and on RHEL7 on VMware. &lt;br /&gt;
&lt;br /&gt;
1) Launch a fresh EC2 instance (Nitro/C5) with an extra EBS volume (e.g. nvme1n1). If you try a Xen-based type (e.g. t2), multipathd will fail to get the path uid (this happens with /dev/xvdb devices). &lt;br /&gt;
&lt;br /&gt;
    $ lsblk&lt;br /&gt;
    NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT&lt;br /&gt;
    nvme0n1     259:0    0  10G  0 disk &lt;br /&gt;
    ├─nvme0n1p1 259:1    0   1M  0 part &lt;br /&gt;
    └─nvme0n1p2 259:2    0  10G  0 part /&lt;br /&gt;
    nvme1n1     259:3    0   9G  0 disk &lt;br /&gt;
    nvme2n1     259:4    0   8G  0 disk&lt;br /&gt;
&lt;br /&gt;
2) (Optional: do this if you want to create the multipath device on top of an LVM volume on EBS.) Create a PV and VG on the secondary volume (on a Nitro instance this is /dev/nvme1n1, not /dev/xvdb):&lt;br /&gt;
&lt;br /&gt;
    $ sudo yum -y install lvm2&lt;br /&gt;
    $ sudo pvcreate /dev/nvme1n1&lt;br /&gt;
    $ sudo vgcreate testvg /dev/nvme1n1&lt;br /&gt;
&lt;br /&gt;
3) Install multipath:&lt;br /&gt;
&lt;br /&gt;
    $ sudo yum -y install device-mapper-multipath&lt;br /&gt;
&lt;br /&gt;
4) Next, create the /etc/multipath.conf file:&lt;br /&gt;
    $ sudo mpathconf --enable&lt;br /&gt;
&lt;br /&gt;
5) Configure multipath to create mpath devices on any device attached to the server. This is done by removing or commenting out the entry &amp;quot;find_multipaths yes&amp;quot;, or by setting it to &amp;quot;no&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
    $ sudo vi /etc/multipath.conf&lt;br /&gt;
    ...&lt;br /&gt;
    # grep -C 2 find_multipath /etc/multipath.conf &lt;br /&gt;
    defaults {&lt;br /&gt;
        user_friendly_names yes&lt;br /&gt;
            #find_multipaths yes &amp;lt;&amp;lt;&amp;lt;&amp;lt; this entry must be commented&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
6) Restart multipathd. Now you will have a multipath device on top of each EBS volume:&lt;br /&gt;
&lt;br /&gt;
    $ sudo systemctl restart multipathd&lt;br /&gt;
 &lt;br /&gt;
    $ lsblk&lt;br /&gt;
    NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT&lt;br /&gt;
    nvme0n1     259:0    0  10G  0 disk  &lt;br /&gt;
    ├─nvme0n1p1 259:1    0   1M  0 part  &lt;br /&gt;
    └─nvme0n1p2 259:2    0  10G  0 part  /&lt;br /&gt;
    nvme1n1     259:3    0   9G  0 disk  &lt;br /&gt;
    └─mpathb    253:0    0   9G  0 mpath &lt;br /&gt;
    nvme2n1     259:4    0   8G  0 disk  &lt;br /&gt;
    └─mpathc    253:1    0   8G  0 mpath&lt;br /&gt;
 &lt;br /&gt;
    $ lsblk -f&lt;br /&gt;
    NAME        FSTYPE       LABEL UUID                                 MOUNTPOINT&lt;br /&gt;
    nvme0n1                                                             &lt;br /&gt;
    ├─nvme0n1p1                                                         &lt;br /&gt;
    └─nvme0n1p2 xfs                95070429-de61-4430-8ad0-2c0f109d8d50 /&lt;br /&gt;
    nvme1n1     mpath_member                                            &lt;br /&gt;
    └─mpathb                                                            &lt;br /&gt;
    nvme2n1     mpath_member                                            &lt;br /&gt;
    └─mpathc&lt;br /&gt;
 &lt;br /&gt;
 # multipath -ll&lt;br /&gt;
 mpathc (nvme.1d0f-766f6c3032646533643935396435376335653331-416d617a6f6e) dm-1 NVME,Amazon Elastic Block Store              &lt;br /&gt;
 size=6.0G features=&amp;#039;0&amp;#039; hwhandler=&amp;#039;0&amp;#039; wp=rw&lt;br /&gt;
 `-+- policy=&amp;#039;service-time 0&amp;#039; prio=1 status=active&lt;br /&gt;
   `- 2:0:1:1 nvme2n1 259:1 active ready running&lt;br /&gt;
 mpathb (nvme.1d0f-766f6c3032656238336464336134616233316330-416d617a6f6e) dm-0 NVME,Amazon Elastic Block Store              &lt;br /&gt;
 size=5.0G features=&amp;#039;0&amp;#039; hwhandler=&amp;#039;0&amp;#039; wp=rw&lt;br /&gt;
 `-+- policy=&amp;#039;service-time 0&amp;#039; prio=1 status=active&lt;br /&gt;
   `- 1:0:1:1 nvme1n1 259:0 active ready running&lt;br /&gt;
&lt;br /&gt;
* If '''user_friendly_names''' was set to ''no'' or was disabled, the maps would be named by WWID instead of &amp;quot;mpatha, mpathb,&amp;quot; etc.&lt;br /&gt;
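&lt;br /&gt;
The long map names in the multipath -ll output above are just hex-encoded ASCII. A small sketch to decode them (hedged: hex_to_ascii is a helper written for this page, not a system tool; the hex strings are copied from mpathb's map name above):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
# Decode a string of hex byte pairs into ASCII.
hex_to_ascii() {
  local hex="$1" out="" byte
  while [ -n "$hex" ]; do
    byte="${hex:0:2}"     # take two hex digits at a time
    hex="${hex:2}"
    out="${out}$(printf "\\x${byte}")"
  done
  echo "$out"
}

# Serial field of mpathb's map name: the EBS volume ID
hex_to_ascii 766f6c3032656238336464336134616233316330   # vol02eb83dd3a4ab31c0
# Vendor/model field
hex_to_ascii 416d617a6f6e                               # Amazon
```
&lt;br /&gt;
So each map name embeds the EBS volume ID and the model string, which is why every volume gets a unique WWID.&lt;br /&gt;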
&lt;br /&gt;
====How to Set Up a Multipath Device using iSCSI in vCenter====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
        https://www.altaro.com/vmware/adding-linux-iscsi-target-esxi/ -- needs targetcli, but it's dependency hell. Follow the next document first, then come back to this one to mount the LUN. &lt;br /&gt;
&lt;br /&gt;
        Ubuntu targetcli - https://www.server-world.info/en/note?os=Ubuntu_18.04&amp;amp;p=iscsi&amp;amp;f=1 ---- THIS IS KEY!!!!!!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.hostclient.doc/GUID-4D0E250E-4F8C-4F86-81C1-EC9D317CE02E.html&lt;br /&gt;
&lt;br /&gt;
https://www.codyhosterman.com/2017/07/setting-up-software-iscsi-multipathing-with-distributed-vswitches-with-the-vsphere-web-client/&lt;br /&gt;
&lt;br /&gt;
https://www.youtube.com/watch?v=OBMkP0Vdy6Q&lt;br /&gt;
&lt;br /&gt;
https://masteringvmware.com/how-to-add-iscsi-datastore/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== REPRODUCTION (Steps to configure a multipath device on top of multiple disks on EC2) ====&lt;br /&gt;
----&lt;br /&gt;
Main docs we&amp;#039;re following: &lt;br /&gt;
https://linux.dell.com/files/whitepapers/iSCSI_Multipathing_in_Ubuntu_Server_1404_LTS.pdf&lt;br /&gt;
 https://www.hiroom2.com/2018/05/05/ubuntu-1804-tgt-en/&lt;br /&gt;
&lt;br /&gt;
The ideal network configuration in a multipath environment is to connect each network port on your&lt;br /&gt;
server to a different subnet. That way, you have additional resilience in case one of your subnets goes&lt;br /&gt;
down (i.e. bad switch or router).&lt;br /&gt;
&lt;br /&gt;
However, you can also connect both of your network ports to the same subnet if that is all you have,&lt;br /&gt;
as depicted in Figure 2. In this case, your network subnet becomes a single point of failure, but you&lt;br /&gt;
still have high-availability capabilities in case one of your network ports fails. To increase resiliency in&lt;br /&gt;
this scenario, connect each network port to a different switch in your subnet.&lt;br /&gt;
&lt;br /&gt;
For simplicity, I used the network topology shown in Figure 2 of the whitepaper, with only one subnet. The&lt;br /&gt;
whitepaper's example is a Class C network (192.168.1.0/24); the reproduction below uses a single VPC subnet&lt;br /&gt;
(172.31.16.0/20) and the IP addresses shown in the steps that follow.&lt;br /&gt;
&lt;br /&gt;
1) Spin up an Ubuntu 18.04 instance. This instance will act as the iSCSI storage server/target; we will call this &amp;quot;Instance A&amp;quot;. Attach a secondary NIC in the same subnet and follow this to set it up so you don't get asymmetric routing: https://repost.aws/knowledge-center/ec2-ubuntu-secondary-network-interface&lt;br /&gt;
   * Note: **Both private IPs/NICs MUST be reachable**.&lt;br /&gt;
&lt;br /&gt;
   * Example of network configuration:&lt;br /&gt;
   ```&lt;br /&gt;
   $ cat /etc/netplan/51-eth1.yaml &lt;br /&gt;
   network:&lt;br /&gt;
     version: 2&lt;br /&gt;
     renderer: networkd&lt;br /&gt;
     ethernets:&lt;br /&gt;
       eth1:&lt;br /&gt;
         addresses:&lt;br /&gt;
          - 172.31.29.150/20&lt;br /&gt;
         dhcp4: no&lt;br /&gt;
         routes:&lt;br /&gt;
          - to: 0.0.0.0/0&lt;br /&gt;
            via: 172.31.16.1 # Default gateway (check your subnet)&lt;br /&gt;
            table: 1000&lt;br /&gt;
          - to: 172.31.29.150&lt;br /&gt;
            via: 0.0.0.0&lt;br /&gt;
            scope: link&lt;br /&gt;
            table: 1000&lt;br /&gt;
         routing-policy:&lt;br /&gt;
           - from: 172.31.29.150&lt;br /&gt;
             table: 1000&lt;br /&gt;
   &lt;br /&gt;
   $ ip r show table 1000&lt;br /&gt;
   default via 172.31.16.1 dev eth1 proto static &lt;br /&gt;
   172.31.29.150 dev eth1 proto static scope link &lt;br /&gt;
   &lt;br /&gt;
   $ ip addr show&lt;br /&gt;
   1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000&lt;br /&gt;
       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00&lt;br /&gt;
       inet 127.0.0.1/8 scope host lo&lt;br /&gt;
          valid_lft forever preferred_lft forever&lt;br /&gt;
       inet6 ::1/128 scope host &lt;br /&gt;
          valid_lft forever preferred_lft forever&lt;br /&gt;
   2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 9001 qdisc fq_codel state UP group default qlen 1000&lt;br /&gt;
       link/ether 02:b4:46:25:01:31 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
       inet 172.31.22.88/20 brd 172.31.31.255 scope global dynamic eth0&lt;br /&gt;
          valid_lft 3311sec preferred_lft 3311sec&lt;br /&gt;
       inet6 fe80::b4:46ff:fe25:131/64 scope link &lt;br /&gt;
          valid_lft forever preferred_lft forever&lt;br /&gt;
   3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000&lt;br /&gt;
       link/ether 02:91:23:27:f3:57 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
       inet 172.31.29.150/20 brd 172.31.31.255 scope global eth1&lt;br /&gt;
          valid_lft forever preferred_lft forever&lt;br /&gt;
       inet6 fe80::91:23ff:fe27:f357/64 scope link &lt;br /&gt;
          valid_lft forever preferred_lft forever&lt;br /&gt;
   ```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2) Install tgt &lt;br /&gt;
    ```&lt;br /&gt;
   $ sudo apt install -y tgt&lt;br /&gt;
   ```&lt;br /&gt;
3) Create the iSCSI target. This article uses a file as the logical unit; you can also use a block device.&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo mkdir /var/lib/iscsi&lt;br /&gt;
   $ sudo dd if=/dev/zero of=/var/lib/iscsi/disk bs=1M count=1K&lt;br /&gt;
   ```&lt;br /&gt;
   Create iSCSI target (tid 1).&lt;br /&gt;
   ```&lt;br /&gt;
    $ sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2018-05.com.hiroom2:disk&lt;br /&gt;
   ```&lt;br /&gt;
   Add a logical unit (lun 1) to the iSCSI target (tid 1).&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /var/lib/iscsi/disk&lt;br /&gt;
   ```&lt;br /&gt;
   Publish the iSCSI target (tid 1) to all IP addresses. You can specify a single address (e.g. 192.168.11.1) or a subnet (e.g. 192.168.11.0/24) instead of ALL.&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL&lt;br /&gt;
   ```&lt;br /&gt;
   Save the configuration for the iSCSI target. If you do not save it, the configuration will be lost after restarting tgtd. (This command was slightly different from Yaron's/the doc's.)&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo tgt-admin --dump | tee /etc/tgt/conf.d/disk.configuration&lt;br /&gt;
&lt;br /&gt;
    OUTPUT:&lt;br /&gt;
    ------&lt;br /&gt;
     default-driver iscsi&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;target iqn.2018-05.com.hiroom2:disk&amp;gt;&lt;br /&gt;
        backing-store /var/lib/iscsi/disk&lt;br /&gt;
    &amp;lt;/target&amp;gt;&lt;br /&gt;
   ```&lt;br /&gt;
4) Connect to the iSCSI target with open-iscsi, the iSCSI initiator. Here the initiator runs on the same server as the iSCSI target. Before connecting (locally), the partitions are the following: &lt;br /&gt;
   ```&lt;br /&gt;
    $ cat /proc/partitions&lt;br /&gt;
&lt;br /&gt;
    OUTPUT:&lt;br /&gt;
    ------&lt;br /&gt;
    major minor  #blocks  name&lt;br /&gt;
    7        0      24972 loop0&lt;br /&gt;
    7        1      56972 loop1&lt;br /&gt;
    7        2      64976 loop2&lt;br /&gt;
    7        3      54516 loop3&lt;br /&gt;
    7        4      94036 loop4&lt;br /&gt;
    202        0    8388608 xvda&lt;br /&gt;
    202        1    8274927 xvda1&lt;br /&gt;
    202       14       4096 xvda14&lt;br /&gt;
    202       15     108544 xvda15&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo apt install -y open-iscsi&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo iscsiadm -m discovery -t st -p localhost &lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   127.0.0.1:3260,1 iqn.2018-05.com.hiroom2:disk&lt;br /&gt;
   ```&lt;br /&gt;
   Connect to the iSCSI target (locally):&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo iscsiadm -m node --targetname iqn.2018-05.com.hiroom2:disk -p localhost -l&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   -------&lt;br /&gt;
   Logging in to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 127.0.0.1,3260] (multiple)&lt;br /&gt;
   Login to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 127.0.0.1,3260] successful.&lt;br /&gt;
   ```&lt;br /&gt;
   After connecting to the iSCSI target, the partitions are the following; sda has been appended. &lt;br /&gt;
   ```&lt;br /&gt;
   $ cat /proc/partitions&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   major minor  #blocks  name&lt;br /&gt;
   7        0      24972 loop0&lt;br /&gt;
   7        1      56972 loop1&lt;br /&gt;
   7        2      64976 loop2&lt;br /&gt;
   7        3      54516 loop3&lt;br /&gt;
   7        4      94036 loop4&lt;br /&gt;
   202        0    8388608 xvda&lt;br /&gt;
   202        1    8274927 xvda1&lt;br /&gt;
   202       14       4096 xvda14&lt;br /&gt;
   202       15     108544 xvda15&lt;br /&gt;
   8        0    1048576 sda &amp;lt;---------- &lt;br /&gt;
   ```&lt;br /&gt;
   Check the WWID of the disk:&lt;br /&gt;
   ```&lt;br /&gt;
   # /lib/udev/scsi_id --whitelisted --device=/dev/sda&lt;br /&gt;
   360000000000000000e00000000010001&lt;br /&gt;
   ```&lt;br /&gt;
5) Now that the target can be mounted locally, mount the device from another EC2 instance. I spun up another Ubuntu instance (we'll call this Instance B) and ran the following commands. **NOTE**: BE SURE TO ALLOW INBOUND TCP port 3260 on each ENI of Instance A!:&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo iscsiadm -m discovery -t st -p 172.31.29.150&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   172.31.29.150:3260,1 iqn.2018-05.com.hiroom2:disk&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   # Connect to the target using the private IP of ENI #1&lt;br /&gt;
   $ sudo iscsiadm -m node --targetname iqn.2018-05.com.hiroom2:disk -p 172.31.29.150 -l&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   Logging in to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 172.31.29.150,3260] (multiple)&lt;br /&gt;
   Login to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 172.31.29.150,3260] successful.&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   # Discover the target using the private IP of ENI #2&lt;br /&gt;
   $ sudo iscsiadm -m discovery -t st -p 172.31.30.116&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   172.31.30.116:3260,1 iqn.2018-05.com.hiroom2:disk&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo iscsiadm -m node --targetname iqn.2018-05.com.hiroom2:disk -p 172.31.30.116 -l&lt;br /&gt;
&lt;br /&gt;
   OUTPUT:&lt;br /&gt;
   ------&lt;br /&gt;
   Logging in to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 172.31.30.116,3260] (multiple)&lt;br /&gt;
   Login to [iface: default, target: iqn.2018-05.com.hiroom2:disk, portal: 172.31.30.116,3260] successful.&lt;br /&gt;
   ```&lt;br /&gt;
   ```&lt;br /&gt;
   $ sudo /lib/udev/scsi_id --whitelisted --device=/dev/sda&lt;br /&gt;
   360000000000000000e00000000010001&lt;br /&gt;
&lt;br /&gt;
   $ sudo /lib/udev/scsi_id --whitelisted --device=/dev/sdb&lt;br /&gt;
   360000000000000000e00000000010001&lt;br /&gt;
   ```&lt;br /&gt;
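&lt;br /&gt;
The identical scsi_id output is the whole trick: multipathd groups paths by WWID. A tiny sketch of that grouping rule (hedged: illustrative bash, not multipathd's actual code; WWIDs copied from the output above):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
# Each path device reports a WWID; paths with the same WWID are one LUN.
declare -A path_wwid=(
  [sda]="360000000000000000e00000000010001"
  [sdb]="360000000000000000e00000000010001"
)

declare -A maps   # WWID -> list of member paths
for dev in "${!path_wwid[@]}"; do
  wwid="${path_wwid[$dev]}"
  maps[$wwid]="${maps[$wwid]} $dev"
done

# One map comes out, with both sda and sdb as members
for wwid in "${!maps[@]}"; do
  echo "map $wwid members:${maps[$wwid]}"
done
```
&lt;br /&gt;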
Done! You now have a multipath device (mpatha) on top of the two paths (sda and sdb) to the same LUN:&lt;br /&gt;
   ```&lt;br /&gt;
$ lsblk&lt;br /&gt;
NAME         MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT&lt;br /&gt;
loop0          7:0    0 55.7M  1 loop  /snap/core18/2745&lt;br /&gt;
loop1          7:1    0 63.5M  1 loop  /snap/core20/1891&lt;br /&gt;
loop2          7:2    0 24.4M  1 loop  /snap/amazon-ssm-agent/6312&lt;br /&gt;
loop3          7:3    0 53.2M  1 loop  /snap/snapd/19122&lt;br /&gt;
loop4          7:4    0 91.9M  1 loop  /snap/lxd/24061&lt;br /&gt;
sda            8:0    0    1G  0 disk  &lt;br /&gt;
└─mpatha     253:0    0    1G  0 mpath &lt;br /&gt;
sdb            8:16   0    1G  0 disk  &lt;br /&gt;
└─mpatha     253:0    0    1G  0 mpath &lt;br /&gt;
nvme0n1      259:0    0    8G  0 disk  &lt;br /&gt;
├─nvme0n1p1  259:1    0  7.9G  0 part  /&lt;br /&gt;
├─nvme0n1p14 259:2    0    4M  0 part  &lt;br /&gt;
└─nvme0n1p15 259:3    0  106M  0 part  /boot/efi&lt;br /&gt;
&lt;br /&gt;
 $ sudo multipath -ll&lt;br /&gt;
 mpatha (360000000000000000e00000000010001) dm-0 IET,VIRTUAL-DISK&lt;br /&gt;
 size=1.0G features=&amp;#039;0&amp;#039; hwhandler=&amp;#039;0&amp;#039; wp=rw&lt;br /&gt;
 |-+- policy=&amp;#039;service-time 0&amp;#039; prio=1 status=active&lt;br /&gt;
 | `- 0:0:0:1 sda     8:0   active ready running&lt;br /&gt;
 `-+- policy=&amp;#039;service-time 0&amp;#039; prio=1 status=enabled&lt;br /&gt;
   `- 1:0:0:1 sdb     8:16  active ready running&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
====How to get the WWID of a device====&lt;br /&gt;
----&lt;br /&gt;
On VMware:&lt;br /&gt;
https://access.redhat.com/solutions/93943&lt;br /&gt;
&lt;br /&gt;
For RHEL7 and RHEL8&lt;br /&gt;
 # /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda&lt;br /&gt;
 36000c2931a129f3c880b8d06ccea1b01&lt;br /&gt;
&lt;br /&gt;
For RHEL6&lt;br /&gt;
 # scsi_id --whitelisted --replace-whitespace --device=/dev/sda&lt;br /&gt;
 36000c2931a129f3c880b8d06ccea1b01&lt;br /&gt;
&lt;br /&gt;
For RHEL5&lt;br /&gt;
 # scsi_id -g -u -s /block/sdb&lt;br /&gt;
 36000c2931a129f3c880b8d06ccea1b01&lt;/div&gt;</summary>
		<author><name>Ardika Sulistija</name></author>
	</entry>
</feed>