Fibre Channel and NAS SMB Storage

I’d like to pen my experiences with storage protocols, to hopefully benefit some other soul whose responsibility is an entire organisation’s data assets.

 

In the past, I’ve been fortunate enough to work almost exclusively with HP Server and Storage Hardware. HP do build fantastic Server Systems, and they are very good in the Desktop space too.
There, I said it: I’m an HP fan for the compute environment.

 

I’ve experienced using MSA 1000 and 2000, and EVA 3000, 4000 and 4400 storage systems, all of which are solid, robust units I would bet any business on. Sadly, though, they all seem to come with the mystical Fibre Channel baggage. I’m not particularly averse to dealing with a Fibre Channel network as long as it doesn’t have a misconfigured and miscabled Storage Virtualisation product in the mix (my recent nightmare), but, that being said, it is still a pain in the ass.
If you follow the vendor compatibility matrices (and you should) for the associated HBA firmware, FC switch OS and Storage Processor firmware, you will likely find yourself – more often than you’d like (I’d prefer never) – in a situation where you have to say “I need to take the whole SAN offline to update this, and because of this and the compatibility required to stay supported, I have to update this, this and this”. All these “this”es are usually Host HBAs and FC switch firmwares.

 

It’s a dark day when it comes, it really is. I don’t care how much process you put in place for it, it’s a dread. I’m sure the Storage Consultants and Field Engineers out there are of a different opinion, but when a multi-faceted engineer who’s part of a small team, like myself, is presented with that situation, I stand my ground in saying it’s a bad day at the office.

 

Don’t get me wrong. I never experienced data loss or an extended outage in the FC world, but the knowledge that you’re dealing with an entire organisation’s data, and the paranoia of knowing hundreds of people can’t do their job should something go awry, is less than appealing.
You can shout ‘you should have it backed up’ all you like. The prospect of restoring all your backups to another storage repository – because of course you have one of those lying around, don’t you – is equally appealing.

 

Fast forward to today, for me, and NFS. NFS is an old-school protocol: it’s been around for ages, now at v4 should you choose to use it, but still widely in use as v3.

NFS, as you can imagine, is by nature used across Ethernet. Until fairly recently that meant 100Mb/1Gb connectivity options; today 10Gb CNAs are gaining uptake, though not for me yet. So the connectivity options now exceed the available Fibre Channel speeds that I know of (2, 4 and 8Gb) – that’s of course if you require a large amount of throughput from your Hosts to your storage.
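
For a rough sense of scale, the raw line rates stack up something like this. This is back-of-envelope arithmetic only – it ignores TCP/IP and NFS overhead, and the FC figures assume the usual 8b/10b encoding – so treat the numbers as ballpark:

```python
# Ballpark usable throughput per link type, in MB/s.
# 2/4/8Gb FC signal at 2.125/4.25/8.5 Gbaud and lose ~20% to 8b/10b
# encoding; the Ethernet figures are raw line rate before protocol overhead.

links_gbps = {
    "1Gb Ethernet": 1.0,
    "10Gb Ethernet": 10.0,
    "2Gb FC": 2.125,
    "4Gb FC": 4.25,
    "8Gb FC": 8.5,
}

for name, gbps in links_gbps.items():
    efficiency = 0.8 if "FC" in name else 1.0   # 8b/10b cost on FC links
    mb_per_s = gbps * 1000 / 8 * efficiency
    print(f"{name:>13}: ~{mb_per_s:.0f} MB/s")
```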

 

There’s the key right there. Do yourself a favour: get some kind of performance metrics out of your Fibre Channel infrastructure. Do it now – start Googling how to get numbers out of your Brocade, HP or Cisco, or whatever flavour of FC switching you own. You owe it to yourself to find this out.
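
However you pull the numbers out – Brocade’s portperfshow, SNMP counters, or whatever your vendor gives you – the sum at the end is the same: sample a port’s byte counters twice and turn the delta into a rate. A minimal sketch of that calculation, with made-up counter values:

```python
# Convert two samples of a switch port's byte counter into an average
# throughput figure, for comparison against the port's link speed.
# The counter values and 60-second window below are illustrative only.

def port_utilisation(bytes_t0, bytes_t1, interval_s, link_gbps):
    """Average Gbit/s over the window, and percentage of the link used."""
    gbps = (bytes_t1 - bytes_t0) * 8 / interval_s / 1e9
    return gbps, 100 * gbps / link_gbps

gbps, pct = port_utilisation(bytes_t0=1_200_000_000,
                             bytes_t1=1_950_000_000,
                             interval_s=60,
                             link_gbps=4.0)
print(f"~{gbps:.2f} Gbit/s average, {pct:.1f}% of a 4Gb FC link")
```

Averages over long windows flatter you, mind – sample frequently and pay as much attention to the peaks as the averages.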

 

If you already know your throughput requirements call for 10Gb CNAs or 4/8Gb FC HBAs, you probably know that you are running Enterprise-class systems and have responsibility for looking after an infrastructure that likely doesn’t fit into the normal SMB market. Or you have a special use case that isn’t covered by those two statements, and the remainder of my musings won’t have as great an effect on you.

For the rest of us, running businesses around the world that aren’t supporting data warehouses and need to run a modest budget while keeping some head space at the same time, I think you’d struggle to find a performance report showing that you’re doing anything that couldn’t be satisfied by Gigabit Ethernet.

 

The performance of most storage systems is, in large part, determined by how many IOPS you require and how many IOPS the array can deliver. You can right-size your array in terms of capacity but hugely undersell yourself in terms of performance. I would never buy an array based primarily on capacity any more. Of course, in the past I did, before I knew better; luckily I didn’t get stung in any major way. It would definitely have been better if I had right-sized using IOPS, but that was then and this is now.

These days, in the world of VMware and other people’s takes on Virtualisation, you need to provide storage that can handle A LOT of different machines all accessing the same shared storage.
That includes everything from machines that are barely active to Tier 1 machines such as Exchange and SQL environments. The key to that is providing enough spindles to satisfy your I/O requirements. Only then, once you’ve satisfied your I/O requirements, should you look back at your new array design and ask “is that enough room for my data?”. I’ll bet you it’s plenty once you’ve right-sized for your I/O requirements.
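
The spindle maths I mean is nothing fancy. The per-disk IOPS figures and RAID write penalties below are the usual rules of thumb rather than anything from a particular vendor’s datasheet, so treat it as a starting point, not a quote:

```python
# Rough spindle-count estimate from a front-end IOPS requirement.
# Per-disk IOPS and RAID write penalties are rule-of-thumb values only;
# real figures vary by drive, array and cache, so check with your vendor.
import math

DISK_IOPS = {"15k": 180, "10k": 140, "7.2k": 80}
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def spindles_needed(frontend_iops, write_ratio, raid_level, disk_type):
    """How many spindles the back end needs to satisfy the workload."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid_level]
    return math.ceil(backend_iops / DISK_IOPS[disk_type])

# Example: 5,000 front-end IOPS at 30% writes, RAID5, 15k spindles.
print(spindles_needed(5000, 0.30, "RAID5", "15k"))   # -> 53
```

Fifty-odd 15k spindles is a lot more raw capacity than most of us need for the data itself, which is exactly the point: size for the I/O first and the capacity almost always looks after itself.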

 

Back to the transport: NFS.
I don’t think there’s any trouble these days with Microsoft supporting their OSes and applications on VMware. One exception I know off the top of my head is the lack of support for Exchange Mailbox servers whose datastores are served over NFS. From my point of view, I’ve ignored this caveat, and should my organisation run into an issue where we need support, we’ve decided we’ll Storage vMotion the VM to either some DAS or an iSCSI system for the duration of the support case, should we get busted.
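
For what it’s worth, that escape route can be scripted rather than clicked. Below is a rough pyVmomi sketch – the vCenter address, credentials, VM name and datastore name are placeholders, and it’s a sketch of the relocate call rather than anything I’ve run in anger:

```python
# Sketch: Storage vMotion a VM onto another datastore (DAS/iSCSI) with
# pyVmomi. Addresses, credentials and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """First managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()   # lab shortcut; verify certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "exchange-mbx-01")
    target = find_by_name(content, vim.Datastore, "iscsi-temp-01")

    # A RelocateSpec with only a datastore set is a storage-only migration.
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target)))
finally:
    Disconnect(si)
```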

Other than this potentially important caveat, there’s no issue in terms of support for your entire infrastructure running from NFS-served datastores.

 

So how does my Datacenter look now that I’ve migrated 10TB of production data to NFS using Storage vMotion over the course of a week? Beautifully simple.

Host – Storage Switch – Storage Processor. Done.

Oh my gosh. Gone are the days of Zoning. Gone are the days of FC domain compatibility matrices. Gone are the days of WWNs, F-Ports, N-Ports, NPIV and all that shizzle.

Hello Ethernet.
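
To labour the simplicity point: presenting an NFS export to a host is essentially one API call. A rough pyVmomi sketch, connecting straight to a single host – the host, filer and export names are made up, and it assumes the VMkernel networking to reach the filer is already in place:

```python
# Sketch: mount an NFS export as a datastore on an ESXi host via pyVmomi.
# Host, filer address, export path and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    # Connected straight to the host, so the inventory holds one
    # datacenter with one compute resource containing our HostSystem.
    host = (si.RetrieveContent().rootFolder.childEntity[0]
              .hostFolder.childEntity[0].host[0])

    spec = vim.host.NasVolume.Specification(
        remoteHost="filer01.example.local",     # NFS server
        remotePath="/vol/vmware_datastore01",   # exported path
        localPath="nfs-datastore01",            # datastore name on the host
        accessMode="readWrite")

    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)
```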

 

I think a major component in getting a successful NFS implementation is ensuring you’re deploying either on autonomous storage switches emulating the FC switch topology model, or on switches that stack with fabric interconnects, and in either case on switches that have enough fabric bandwidth to accommodate all the ports you’re going to provision running at full rate. So essentially I’m saying no oversubscription and no unmanaged rubbish devices here. There are also a lot of VMware design guides to help you along with NFS which I won’t bother you with, as this isn’t a ‘how to’, it’s simply a ‘what it’s like now I have’.
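
Not a how-to, but the oversubscription check itself is only arithmetic: add up what the ports could push flat out, both directions, and compare it with the switching fabric capacity on the datasheet. The figures below are examples, not a recommendation:

```python
# Quick non-blocking sanity check for a storage switch: total demand with
# every port at line rate, full duplex, versus the quoted fabric capacity.
# Port count and capacities below are illustrative only.

def fabric_check(ports, port_gbps, fabric_capacity_gbps):
    """Return (demand in Gbps, True if the fabric covers it)."""
    demand = ports * port_gbps * 2          # both directions, all ports busy
    return demand, demand <= fabric_capacity_gbps

demand, ok = fabric_check(ports=24, port_gbps=1, fabric_capacity_gbps=48)
print(f"worst-case demand {demand} Gbps, non-blocking: {ok}")
```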

 

So, if you’re going to be in the market for a new storage system, I encourage you to engage with your Technology Partners and ask them what NFS-based storage solutions are available. Ask them for a demo or a WebEx to some lab systems.

 

You will be very glad you did.

 

Ciao for now


ESXi installable on USB

A very simple subject, but whilst setting up my home VCP5 setup I needed to get the hypervisor installed on a USB key without burning it to a CD to use in the destination server. (For those with iLO or equivalent, you can continue browsing more interesting parts of the Internet.) To achieve this mean feat, my instructions are as follows:

Fire up VMware Workstation.
Create a VM with no HDD and no network adapters, but add a USB controller.
Start the VM with a long boot delay – long enough to attach the USB key to the VM once it’s powered up – and attach the ESXi 5 installer ISO to the VM’s CD drive.
Let the VM boot and voilà, you’ll be installing to the VM’s USB key from your installation media.
You can then take said key to your machine of choice (remember this isn’t supported in production by VMware unless you’re using a manufacturer-provided and somewhat costly ESXi USB key) and fire up your new ESXi host from the key.

Sorted me out anyways!
Ciao for now
P x