With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical fibre channel adapter. (NPIV stands for N_Port ID Virtualization. N_Port ID is a storage term, short for node port ID, used to identify ports on a node (FC adapter) in the SAN.)
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN).
NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.
To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.
The following listings show a managed system configured to use NPIV:
on VIO server:
root@vios1: / # lsdev -Cc adapter
fcs0 Available 01-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 01-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
vfchost0 Available Virtual FC Server Adapter
vfchost1 Available Virtual FC Server Adapter
vfchost2 Available Virtual FC Server Adapter
vfchost3 Available Virtual FC Server Adapter
vfchost4 Available Virtual FC Server Adapter
on VIO client:
root@aix21: /root # lsdev -Cc adapter
fcs0 Available C6-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C7-T1 Virtual Fibre Channel Client Adapter
Two unique WWPNs (worldwide port names) starting with the letter "c" are generated by the HMC for the VFC client adapter. The pair is critical, and both must be zoned if Live Partition Mobility (LPM) is planned. The virtual I/O client partition uses only one WWPN to log in to the SAN at any given time; the other WWPN is used when the client logical partition is moved to another managed system with LPM.
lscfg -vpl fcsX shows only the first WWPN
fcstat fcsX shows only the active WWPN
Both commands display a single WWPN, but fcstat always shows the currently active WWPN in use (which will change after an LPM), while lscfg shows only the first WWPN assigned to the adapter, as a static value.
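To compare the two values on the client, a minimal check (assuming fcs0 is the VFC client adapter):
root@aix21: / # lscfg -vpl fcs0 | grep "Network Address"     <-- static: the first WWPN assigned to the adapter
root@aix21: / # fcstat fcs0 | grep "World Wide Port Name"    <-- active: the WWPN currently logged in to the SAN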
Configure one VFC client adapter per physical port per client partition, with a maximum of 64 active VFC client adapters per physical port. There is always a one-to-one relationship between the virtual Fibre Channel client adapter and the virtual Fibre Channel server adapter.
The difference between traditional redundancy with SCSI adapters and NPIV with virtual Fibre Channel adapters is that the redundancy occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just a pass-through, managing the data transfer through the POWER Hypervisor. With Live Partition Mobility, storage moves to the target server without requiring reassignment (unlike virtual SCSI), because the virtual Fibre Channel adapters have their own WWPNs that move with the client partition to the target server.
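Client-side redundancy can be checked with MPIO path queries (a minimal sketch, assuming hdisk0 is a SAN LUN reached through both VFC adapters):
root@aix21: / # lspath -l hdisk0           <-- one path per VFC adapter (fscsi0, fscsi1) is expected
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi1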
After creating a VFC client adapter with DLPAR, a different pair of virtual WWPNs would be generated when the adapter is created again in the profile. To prevent this undesired situation, which would require another round of SAN zoning and storage configuration, make sure to save any virtual Fibre Channel client adapter DLPAR changes into a new partition profile by selecting Configuration -> Save Current Configuration, and change the default partition profile to the new profile.
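The same save can also be done from the HMC CLI (a sketch; the partition and profile names are placeholders):
mksyscfg -r prof -m <managed_system> -o save -p bb_lpar -n default --force    <-- overwrites the profile with the current configuration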
—————————————————–
The NPIV client's num_cmd_elems attribute should not exceed the VIOS adapter's num_cmd_elems.
If you increase num_cmd_elems on the virtual FC (vFC) adapter, then you should also increase the setting on the real FC adapter.
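For example (a sketch; the value 2048 and the adapter names are assumptions, and -P/-perm apply the change only at the next device reconfiguration):
on VIO client:        chdev -l fcs0 -a num_cmd_elems=2048 -P
on VIOS (as padmin):  chdev -dev fcs0 -attr num_cmd_elems=2048 -perm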
—————————————————–
Check NPIV adapter mapping on client:
root@bb_lpar: / # echo "vfcs" | kdb                                             <-- vfcs is a kdb subcommand
...
NAME      ADDRESS             STATE   HOST       HOST_ADAP  OPENED  NUM_ACTIVE
fcs0      0xF1000A000033A000  0x0008  aix-vios1  vfchost8   0x01    0x0000      <-- shows which vfchost is used on the VIO server for this client
fcs1      0xF1000A0000338000  0x0008  aix-vios2  vfchost6   0x01    0x0000
—————————————————–
NPIV devices and how they are related to each other:
FCS0: Physical FC Adapter installed on the VIOS
VFCHOST0: Virtual FC (Server) Adapter on VIOS
FCS0 (on client): Virtual FC adapter on VIO client
Creating NPIV adapters:
0. install physical FC adapters in the VIO Servers
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (don’t forget profile (save current))
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (the ids should be mapped, don’t forget profile)
3. cfgdev (VIO server), cfgmgr (client)              <-- brings up the new vfchostX adapter on the VIO server and fcsX on the client
4. check status:
lsdev -dev vfchost*                                  <-- lists virtual FC server adapters
lsmap -vadapter vfchost0 -npiv                       <-- gives more detail about the specified virtual FC server adapter
lsdev -dev fcs*                                      <-- lists physical FC server adapters
lsnports                                             <-- checks NPIV readiness (fabric=1 means ready)
5. vfcmap -vadapter vfchost0 -fcp fcs0               <-- maps the virtual adapter to the VIO server's physical FC adapter
6. lsmap -all -npiv                                  <-- checks the mapping
7. HMC -> VIO Client -> get the WWN of the adapter   <-- if no LPM will be used, only the first WWN is needed
8. SAN zoning
—————————————————–
Checking if VIOS FC Adapter supports NPIV:
On VIOS as padmin:
$ lsnports
name physloc fabric tports aports swwpns awwpns
fcs0 U78C0.001.DAJX633-P2-C2-T1 1 64 64 2048 2032
fcs1 U78C0.001.DAJX633-P2-C2-T2 1 64 64 2048 2032
fcs2 U78C0.001.DAJX634-P2-C2-T1 1 64 64 2048 2032
fcs3 U78C0.001.DAJX634-P2-C2-T2 1 64 64 2048 2032
value in the fabric column:
1 – the adapter and the SAN switch are NPIV ready
0 – the adapter or the SAN switch is not NPIV ready, and the SAN switch configuration should be checked
—————————————————–
Getting WWPNs from HMC CLI:
lshwres -r virtualio --rsubtype fc --level lpar -m <managed_system> -F lpar_name,wwpns
bb_lpar,"c05076066e590016,c05076066e590017"
bb_lpar,"c05076066e590014,c05076066e590015"
bb_lpar,"c05076066e590012,c05076066e590013"
bb_lpar,"c05076066e590010,c05076066e590011"
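The output can be restricted to a single partition with a filter (a sketch using standard lshwres filtering):
lshwres -r virtualio --rsubtype fc --level lpar -m <managed_system> --filter lpar_names=bb_lpar -F lpar_name,wwpns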
—————————————————–
Replacement of a physical FC adapter with NPIV
1. identify the adapter
$ lsdev -dev fcs4 -child
name status description
fcnet4 Defined Fibre Channel Network Protocol Device
fscsi4 Available FC SCSI I/O Controller Protocol Device
2. unconfigure the mappings
$ rmdev -dev vfchost0 -ucfg
vfchost0 Defined
3. FC adapters and their child devices must be unconfigured or deleted
$ rmdev -dev fcs4 -recursive -ucfg
fscsi4 Defined
fcnet4 Defined
fcs4 Defined
4. diagmenu
DIAGNOSTIC OPERATING INSTRUCTIONS -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Replace/Remove a PCI Hot Plug Adapter.
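After the physical replacement, a sketch of bringing the devices and mappings back (adapter names follow the example above); because the WWPNs belong to the virtual adapters, no SAN rezoning is needed:
$ cfgdev                           <-- configures the replaced fcs4 and its child devices again
$ cfgdev -dev vfchost0             <-- reconfigures the unconfigured virtual FC server adapter
$ lsmap -all -npiv                 <-- verifies that the mappings are back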
—————————————————–
Changing WWPN number:
There are 2 methods: changing it dynamically (chhwres) or changing it in the profile (chsyscfg). Both are similar, and both are done from the HMC CLI.
I. Changing dynamically:
1. get current adapter config:
# lshwres -r virtualio --rsubtype fc --level lpar -m <managed_system>
2. remove the adapter from the client LPAR: rmdev -Rdl fcsX (if needed, unmanage the device from the storage driver first)
3. remove adapter dynamically from HMC (it can be done in GUI)
4. create a new adapter with new WWPNs dynamically:
# chhwres -r virtualio -m <managed_system> -o a -p aix_lpar_01 --rsubtype fc -a "adapter_type=client,remote_lpar_name=aix_vios1,remote_slot_num=123,\"wwpns=c0507603a42102de,c0507603a42102df\"" -s 8
5. cfgmgr on the client LPAR will bring up the adapter with the new WWPNs.
6. save the actual config to the profile (so the next profile activation will not bring back the old WWPNs)
(VFC mapping removal was not needed in this case; if there are problems, try reconfiguring that as well on the VIOS side)
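Step 3 can also be done from the HMC CLI instead of the GUI (a sketch; slot 8 matches the example above):
# chhwres -r virtualio -m <managed_system> -o r -p aix_lpar_01 --rsubtype fc -s 8    <-- dynamically removes the client VFC adapter in slot 8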
—————————————————–
II. changing in the profile:
Same as above, just some of the commands are different:
get current config:
# lssyscfg -r prof -m <managed_system>
create new adapters in the profile:
# chsyscfg -r prof -m <managed_system> -i 'name=default,lpar_id=5,"virtual_fc_adapters=""7/client/1/aix_vios1/4/<wwpn1>,<wwpn2>/1"""'
-m <managed_system> – managed system
-r prof – a profile will be changed
-i '...' – attributes
name=default – name of the profile which will be changed
lpar_id=5 – id of the client LPAR
7 – adapter id on the client (slot id)
client – adapter type
1 – remote LPAR id (VIOS server LPAR id)
aix_vios1 – remote LPAR name (VIOS server name)
4 – remote slot number (adapter id on the VIOS server)
<wwpn1>,<wwpn2> – both WWPN numbers (separated with a comma)
1 – required or desired (1 = required, 0 = desired)
Here VFC unmapping was needed:
vfcmap -vadapter vfchost4 -fcp           <-- removes the mapping
vfcmap -vadapter vfchost4 -fcp fcs2      <-- creates the new mapping
—————————————————–
Virtual FC login to SAN:
When a new LPAR with VFC has been created, the VFC adapter has to log in to the SAN for the first time before any LUNs can be seen (for example, to install AIX).
This can be done on the HMC (HMC V7 R7.3 and later) with the command chnportlogin.
chnportlogin allows you to allocate, log in, and zone WWPNs before the client partition is activated.
On HMC:
1. lsnportlogin -m <managed_system>
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=0,logged_in=none,wwpn_status_reason=null
The WWPN status. Possible values are:
0 – WWPN is not activated
1 – WWPN is activated
2 – WWPN status is unknown
2. chnportlogin -o login -m <managed_system> -p bb_lpar      <-- logs in all WWPNs of the partition
3. lsnportlogin -m <managed_system>
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
4. The storage team can do the LUN assignment; after they have finished, you can log out:
chnportlogin -o logout -m <managed_system> -p bb_lpar
—————————————————–
IOINFO
If the HMC is below V7 R7.3, ioinfo can be used to make VFC adapters log in to the SAN.
ioinfo can also be used for debugging purposes, or to check whether disks are available and which disk is the boot disk.
It can be reached from the boot screen by pressing 8 (Open Firmware Prompt):
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
1 = SMS Menu 5 = Default Boot List
8 = Open Firmware Prompt 6 = Stored Boot List
Memory Keyboard Network SCSI Speaker ok
0 > ioinfo
!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system
Select a tool from the following
1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO
q – quit/exit
==> 6
Then choose the VFC client device from the list -> List Attached FC Devices (this will cause that VFC device to log in to the SAN)
After that, on the VIOS, lsmap -all -npiv will show the port as LOGGED_IN.
(after quitting ioinfo, the Open Firmware command "reset-all" will reboot the LPAR)
—————————————————–