Monday, October 24, 2011

How to configure NPIV (N_Port ID Virtualization)


Step By Step NPIV configuration

For maximum path redundancy, create the configuration on dual VIOS. We will consider a scenario with a Power6/7 server, two PCI dual/single-port 8 Gb Fibre Channel cards, VIOS level 2.2 FP24 installed, and the VIOS in a shutdown state.
First we need to create a virtual Fibre Channel adapter on each VIOS, which we will later map to a physical fibre adapter after logging in to the VIOS, similar to what we do for Ethernet.

Please Note: - Create all the LPAR clients as per requirements and then configure the virtual fibre adapters on the VIOS. Since we are mapping one single physical fibre adapter to different hosts, we need to create that many virtual Fibre Channel adapters. A virtual Fibre Channel adapter can be created dynamically, but don't forget to add it to the profile; otherwise you lose the configuration on power-off.

  1. Create a virtual Fibre Channel adapter on both VIOS servers.
          HMC --> Managed System --> Manage Profile --> Virtual Adapter
Let us say I have defined the virtual fibre adapter for the AIX client Netwqa with server adapter ID 33 and client adapter ID 33.
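The same server-side adapter can also be created from the HMC command line with `chhwres` (remember the note above: a DLPAR-added adapter must still be saved into the profile). A minimal dry-run sketch; the managed system name is hypothetical, and the script only prints the command so it is safe to run anywhere:

```shell
#!/bin/sh
# Dry-run helper: build the HMC DLPAR command that creates the server-side
# virtual FC adapter. The managed system name below is a placeholder.
MSYS="Server-9117-MMA-SN10FE401"   # managed system name (hypothetical)
VIOS="vios1"                       # VIOS partition name (hypothetical)
CLIENT="Netwqa"                    # AIX client LPAR from the example
SLOT=33                            # server and client slot IDs (kept equal here)

CMD="chhwres -r virtualio --rsubtype fc -m $MSYS -o a -p $VIOS -s $SLOT \
-a \"adapter_type=server,remote_lpar_name=$CLIENT,remote_slot_num=$SLOT\""

# Print rather than execute, so the sketch is safe outside an HMC session.
echo "$CMD"
```

Running this prints the single `chhwres` line you would paste into an HMC restricted shell.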


Similarly on VIOS2 for multipath redundancy:


If you have any more LPARs which you want to configure for NPIV, repeat the above mentioned steps with those LPAR details. 
  2. Mapping the defined virtual Fibre Channel adapters to physical HBA ports
Now activate the VIOS LPAR, log on to the VIOS, and check the status of the physical Fibre Channel ports. If the VIOS is already running, run `cfgmgr` to pick up the virtual FC adapters defined on the VIOS.
$ lsnports
name   physloc                      fabric tports aports swwpns awwpns
fcs0   U5802.001.008A824-P1-C9-T1   0      64     64     2048   2048
fcs1   U5802.001.008A824-P1-C9-T2   0      64     64     2048   2048
fcs2   U5877.001.0083832-P1-C9-T1   0      64     64     2048   2048
fcs3   U5877.001.0083832-P1-C9-T2   0      64     64     2048   2048

If the value of the 'fabric' column is 0, the HBA port is not connected to a SAN switch that supports NPIV; connect a fibre cable between the physical Fibre Channel adapter and the SAN switch. If the value of the 'fabric' column is 1, the HBA port is connected to a SAN switch that supports the NPIV feature.
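The fabric check can be scripted. A small sketch that filters `lsnports`-style output for NPIV-capable ports; the sample output is embedded here so the script runs anywhere, while on a live VIOS you would pipe the real `lsnports` output into the same awk filter:

```shell
#!/bin/sh
# Filter lsnports-style output for ports whose 'fabric' column is 1,
# i.e. ports attached to an NPIV-capable SAN switch. A sample of the
# output is embedded; on a live VIOS pipe `lsnports` itself instead.
lsnports_sample() {
cat <<'EOF'
name   physloc                      fabric tports aports swwpns awwpns
fcs0   U5802.001.008A824-P1-C9-T1   1      64     64     2048   2048
fcs1   U5802.001.008A824-P1-C9-T2   0      64     64     2048   2048
EOF
}

# Column 3 is 'fabric'; skip the header row (NR > 1).
npiv_ready=$(lsnports_sample | awk 'NR > 1 && $3 == 1 {print $1}')
echo "NPIV-capable ports: $npiv_ready"
```

With the sample above this prints `NPIV-capable ports: fcs0`, since only fcs0 has fabric=1.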

The command above displays:
            name: the device name
physloc: the physical location of the adapter
aports: the number of available NPIV ports
awwpns: the number of WWPNs still available on that physical port
After connecting the Fibre Channel cable, run `lsnports` again; you should now see fabric=1.

$ lsnports
name   physloc                      fabric tports aports swwpns awwpns
fcs0   U5802.001.008A824-P1-C9-T1   1      64     64     2048   2048
fcs1   U5802.001.008A824-P1-C9-T2   1      64     64     2048   2048
fcs2   U5877.001.0083832-P1-C9-T1   1      64     64     2048   2048
fcs3   U5877.001.0083832-P1-C9-T2   1      64     64     2048   2048
Run `lsdev -vpd | grep vfchost` to find which device represents the virtual FC adapter in a specific slot, or run `lsmap -all -npiv` to list the virtual FC adapters and their mappings to physical adapters.
Here we are interested in vfchost2, as I am showing the example of connecting vfchost2.
Check the Status and Flags fields:

Status:LOGGED_IN, Flags: a<LOGGED_IN,STRIP_MERGE>
-> The vfchost adapter is mapped to a physical adapter, and the associated client is up and running.
Status: NOT_LOGGED_IN, Flags:1<NOT_MAPPED,NOT_CONNECTED>
-> The vfchost adapter is not mapped to a physical adapter

Status: NOT_LOGGED_IN, Flags:4<NOT_LOGGED>
-> The vfchost adapter is mapped to a physical adapter, but the associated client is not running. If you suspect a problem, check for VFC_HOST errors.
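These statuses can be checked in bulk across all vfchost devices. A sketch that scans a name/status listing and flags everything that is not logged in; the embedded sample stands in for fields extracted from `lsmap -all -npiv` on a live VIOS:

```shell
#!/bin/sh
# Report vfchost adapters that are not LOGGED_IN, so their mapping or
# client state can be investigated. The embedded sample stands in for a
# name/status listing extracted from `lsmap -all -npiv` on a live VIOS.
status_sample() {
cat <<'EOF'
vfchost0 LOGGED_IN
vfchost1 NOT_LOGGED_IN
vfchost2 NOT_LOGGED_IN
EOF
}

suspect=$(status_sample | awk '$2 != "LOGGED_IN" {print $1}')
for v in $suspect; do
    echo "not logged in, check mapping/client: $v"
done
```

With the sample data this reports vfchost1 and vfchost2, the two adapters whose mapping or client is not yet up.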

ClntName: displayed only when the mapped VIO client is booted and in a running state.

ClntOS: displayed only when the mapped VIO client is booted and in a running state.

Now we need to map the device vfchost2 to the physical HBA port fcs1 using the command `vfcmap -vadapter vfchost2 -fcp fcs1`. Once it is mapped, check the status of the mapping using `lsmap -vadapter vfchost2 -npiv`. Note that the status of the port shows NOT_LOGGED_IN; this is because the client configuration is not yet complete, so the port cannot log in to the fabric.

$ vfcmap -vadapter vfchost2 -fcp fcs1

List the adapter using `lsmap -vadapter vfchost2 -npiv`.
Since the AIX client is not yet configured and mapped, the status is NOT_LOGGED_IN, and the output will not display ClntName and ClntOS, nor the VFC client name and DRC.

Repeat the above steps on the second VIOS LPAR as well. If you have more client LPARs, repeat the steps for all of them on both VIOS LPARs.
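When several clients share one VIOS, the repeated mappings are easy to script. A dry-run sketch that prints one `vfcmap` command per pair instead of executing them; the vfchost/port pairs are illustrative, and on a real VIOS you would take them from `lsmap -all -npiv` and `lsnports`:

```shell
#!/bin/sh
# Dry run: print one vfcmap command per vfchost/physical-port pair
# instead of executing them. The pairs below are illustrative only.
MAPPINGS="vfchost0:fcs0 vfchost1:fcs0 vfchost2:fcs1"

cmds=$(for pair in $MAPPINGS; do
    vhost=${pair%%:*}   # virtual FC server device
    port=${pair##*:}    # physical FC port
    echo "vfcmap -vadapter $vhost -fcp $port"
done)
echo "$cmds"
```

Review the printed commands, then paste them into the VIOS restricted shell (or drop the `echo` to execute them directly).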

  3. AIX Client Configuration
Create the virtual FC client adapter on the AIX LPAR by navigating through the HMC:
HMC --> VIO Client (NETWQA) --> Manage Profile --> Virtual Adapter --> Action --> Create
Create the second virtual FC client adapter with the slot number details as shown in the figure below. Make sure the slot numbers match those we entered in the second VIOS LPAR while creating the virtual FC server adapter.
Now activate the AIX LPAR and install AIX; note that the minimum version required to support NPIV is AIX 5.3 TL9 or AIX 6.1 TL2. Once the AIX installation is complete, install and configure the necessary subsystem driver, depending on the SAN storage box. If AIX is already running, issue the `cfgmgr` command.
Install the SDDPCM driver for multipathing, depending on the storage you have.

You can now check the status of the Virtual FC Server Adapter ports in both the VIOS to check whether the ports are successfully logged in to the SAN fabric.
VIOS2
4.  Allocating SAN Storage:
You can now assign storage to the AIX LPAR. Do proper zoning between the SAN storage and the WWPNs of the AIX client's virtual FC adapters. Use the command below to check the WWPN of a virtual Fibre Channel adapter on the AIX client:
#lscfg -vpl fcs*
You can also get the WWPN from the AIX client's profile through the HMC as shown below:

NOTE: When viewing the properties of the virtual FC client adapter from the HMC, it shows two WWPNs for each virtual FC client adapter, as shown above. The second WWPN is not used until a live migration is activated on this LPAR through Live Partition Mobility. During a live migration, the migrated LPAR accesses the SAN storage using the second WWPN of the virtual FC client adapter, so make sure the second WWPN is also configured in zoning and access control.
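On the client itself, the WWPN appears as the "Network Address" field of `lscfg -vpl fcsX`. A sketch that pulls it out; the sample output (with a hypothetical location code) is embedded so the script runs anywhere, while on a live LPAR you would pipe the real `lscfg` output instead:

```shell
#!/bin/sh
# Extract the WWPN ("Network Address" field) from lscfg-style output.
# The sample below is embedded with a hypothetical location code; on a
# live AIX client, pipe the real `lscfg -vpl fcs0` output instead.
lscfg_sample() {
cat <<'EOF'
  fcs0  U8204.E8A.10FE401-V4-C33-T1  Virtual Fibre Channel Client Adapter
        Network Address.............C05076000AFE0018
        Device Specific.(Z0)........00000000
EOF
}

# Strip everything up to and including the dotted leader after the label.
wwpn=$(lscfg_sample | sed -n 's/.*Network Address\.*\(.*\)/\1/p')
echo "WWPN: $wwpn"
```

This prints `WWPN: C05076000AFE0018`; that is the value (written with colons, c0:50:76:00:0a:fe:00:18) you would use for zoning on the switch.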

Use lspath, `pcmpath query adapter`, `datapath query adapter`, `datapath query device`, `lsvpcfg`, `pcmpath query essmap`, etc., to check that multipathing and the hdisks are configured properly.

It will show output like the one below. You can see that there are 4 separate paths for hdisk2, going through the two separate virtual FC adapters, as I have connected my DS storage to the fibre switch with 4 cables per fibre card.
**Zoning on SAN Switch is out of scope for this document; if you want to know how to do zoning you can drop a comment or mail me.

Limitations:-
§  NPIV is only supported on 8Gb FC adapters on p6 hosts. The FC switch needs to support NPIV, but does not need to be 8 Gb (the 8 Gb adapter can negotiate down to 2 and 4 Gb).
§  Maximum number of 64 NPIV adapters per physical adapter (see lsnports)
§  16 virtual fibre channel adapters per client
§  No support for IP over FC (FCNET)
§  Optical devices attached via virtual fibre channel are not supported at this time
§  Diagnostics are not supported for virtual fibre channel adapters

Important NPIV Commands
$ lsnports
Display information about the physical ports on the physical Fibre Channel adapters
$ lsmap -all -npiv
Display the virtual Fibre Channel adapters created on the VIO server and their status
$ lsmap -vadapter vfchost0 -npiv
Display the attributes of a virtual Fibre Channel adapter
$ vfcmap -vadapter vfchost0 -fcp fcs0
Map a virtual fibre adapter to a physical fibre adapter
$ vfcmap -vadapter vfchost0 -fcp
Unmap a virtual Fibre Channel adapter
$ portcfgnpivport ------> On an IBM Brocade SAN switch
0 - Disable the NPIV capability on the port
1 - Enable the NPIV capability on the port
Usage: $ portcfgnpivport 10 1
Enable NPIV functionality on port 10 of the SAN switch
Also set dyntrk=yes and fc_err_recov=fast_fail on the Fibre Channel (fscsi) devices of the AIX LPAR.
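On AIX, dyntrk and fc_err_recov are attributes of the fscsi protocol devices (the children of the fcs adapters) and are set with `chdev`. A dry-run sketch that prints the commands rather than running them; the device list is illustrative, and on a live LPAR it would come from `lsdev`:

```shell
#!/bin/sh
# Dry run: print the chdev commands that enable dynamic tracking and
# fast I/O failure on each fscsi (FC protocol) device. The device list
# is illustrative; on a live LPAR it would come from lsdev.
DEVICES="fscsi0 fscsi1"

cmds=$(for d in $DEVICES; do
    # -P writes the change to the ODM so it takes effect after a reboot
    echo "chdev -l $d -a dyntrk=yes -a fc_err_recov=fast_fail -P"
done)
echo "$cmds"
```

Review the printed commands before running them for real, since changing these attributes affects path failover behaviour.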

16 comments:

  1. do you configure dyntrk and fc_err_recov on the VIO or on the client lpar?

    ReplyDelete
  2. You configure the dyntrk=yes and fc_err_recov=fast_fail on each of the physical fiber cards on the vio server

    ReplyDelete
  3. Can we use NPIV for TSM LANfree Backup

    ReplyDelete
  4. Awesome. Nice one. Thx for the post.

    ReplyDelete
  5. how to see these parameters dyntrk and fc_err_recov on physical fiber cards

    ReplyDelete
  6. no need of dyntrk or fc_err_recov ,, coz disks r comin directlyu from san, so,, no need.

    ReplyDelete
  7. excellent post...thanks for sharing

    ReplyDelete
  8. how to check the disk tier in vio for a particular disk.

    ReplyDelete
  9. very nice and easily understanding...

    ReplyDelete
  10. i would request if you could send me the zoning part details at arun_4uece@rediffmail.com

    ReplyDelete
  11. $portcfgnpivport 10 1

    it will not "unable"
    it will "enable"

    ReplyDelete
  12. zoning :::

    Logon to your SAN switch and create a new zoning, or customize an
    existing one.

    The command zoneshow, which is available on the IBM 2109-F32 switch,
    lists the existing zones as shown in Example 2-27.

    Example 2-27 The zoneshow command before adding a new WWPN
    itsosan02:admin> zoneshow
    Defined configuration:
    cfg: npiv vios1; vios2
    zone: vios1 20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18
    zone: vios2 C0:50:76:00:0A:FE:00:12; 20:43:00:a0:b8:11:a6:62
    Effective configuration:
    cfg: npiv
    zone: vios1 20:32:00:a0:b8:11:a6:62
    c0:50:76:00:0a:fe:00:18
    zone: vios2 c0:50:76:00:0a:fe:00:12
    20:43:00:a0:b8:11:a6:62

    To add the WWPN c0:50:76:00:0a:fe:00:14 to the zone named vios1,
    execute the following command:

    itsosan02:admin> zoneadd "vios1", "c0:50:76:00:0a:fe:00:14"

    To save and enable the new zoning, execute the cfgsave and cfgenable
    npiv commands, as shown in Example 2-28 on page 76.

    Example 2-28 The cfgsave and cfgenable commands
    itsosan02:admin> cfgsave

    You are about to save the Defined zoning configuration. This
    action will only save the changes on Defined configuration.
    Any changes made on the Effective configuration will not
    take effect until it is re-enabled.

    Do you want to save Defined zoning configuration only? (yes, y, no, n): [no]
    y
    Updating flash ...
    itsosan02:admin> cfgenable npiv
    You are about to enable a new zoning configuration.
    This action will replace the old zoning configuration with the
    current configuration selected.
    Do you want to enable 'npiv' configuration (yes, y, no, n): [no] y
    zone config "npiv" is in effect
    Updating flash ...

    With the zoneshow command you can check whether the added WWPN is
    active, as shown in Example 2-29.

    Example 2-29 The zoneshow command after adding a new WWPN
    itsosan02:admin> zoneshow

    Defined configuration:

    cfg: npiv vios1; vios2

    zone: vios1 20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18;
    c0:50:76:00:0a:fe:00:14

    zone: vios2 C0:50:76:00:0A:FE:00:12; 20:43:00:a0:b8:11:a6:62

    Effective configuration:

    cfg: npiv
    zone: vios1 20:32:00:a0:b8:11:a6:62
    c0:50:76:00:0a:fe:00:18
    c0:50:76:00:0a:fe:00:14
    zone: vios2 c0:50:76:00:0a:fe:00:12
    20:43:00:a0:b8:11:a6:62

    c. After you have finished with the zoning, you need to map the LUN
    device(s) to the WWPN. In our example the LUN named NPIV_AIX61 is
    mapped to the Host Group named VIOS1_NPIV, as shown in Figure 2-27.



    13.Activate your AIX client partition and boot it into SMS.

    14.Select the correct boot devices within SMS, such as a DVD or a NIM Server.

    15.Continue to boot the LPAR into the AIX Installation Main menu.

    16.Select the disk where you want to install the operating system and continue to
    install AIX.

    ReplyDelete
  13. ok here is the source of the best doc for npiv including zoning :: practical approach

    http://www-01.ibm.com/support/docview.wss?uid=isg3T1012452

    ReplyDelete
  14. Do we have to do dyntrk or fc_err_recov for the client partition disk or not??

    ReplyDelete
  15. WWN getting changed after LPAR shutdown on vFC, any idea.

    ReplyDelete
    Replies
    1. You probably did DLPAR of npiv then added NPIV into profile. this will generate NPIV with different WWN!
      If you DLPAR npiv then do "save current config" this will copy the running config into your LPAR profile

      Delete