FAST VP

After writing up this post on FAST VP INYO goodness last night, I went to delete the notes I'd taken on the iPad and realised I'd forgotten one very cool new feature which may have slipped under most people's radar.
When creating a LUN from a FAST VP-enabled pool in the current version of FLARE, you have the following options to select from:
  • Auto-Tier
  • Highest Available Tier
  • Lowest Available Tier
These of course are the Auto-Tier policies; selecting one determines which algorithm is used to distribute data through promotions and demotions of 1GB slices between storage tiers.
At LUN creation time I refer to these as "Initial Data Placement" policies, a term I've actually taken from one of the VNX best practice documents found on PowerLink. Each policy directly impacts which storage tier the data is first allocated to.
The Highest and Lowest options are self-explanatory; Auto-Tier, unless I'm mistaken, uses an algorithm to distribute the data over all available storage tiers, which in my opinion increases the risk of performance issues before the pool has had sufficient time to warm up.
When you create a LUN you'll find that Auto-Tier is actually the default selection; however, I always change this to Highest Available Tier to ensure that data starts off on the highest-performing disk, and once the migration of data is complete I switch the policy to Auto-Tier to let FAST work its magic.
But now… INYO introduces a new policy:
  • Start High, Then Auto-Tier
The introduction of this policy effectively means I no longer have to remember to do this manually. While some might think this is a non-event in terms of new features, to me it's a good example of how FAST VP is evolving based on feedback from partners and customers… and that I like.
Performance is all about data locality: if you're migrating data from an older storage array to a VNX, the last thing you want is people complaining about performance. Although Auto-Tier is the default option when creating a LUN, Highest Available Tier is the policy recommended in the best practice documentation.
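To make the placement behaviour concrete, here's a toy Python model of how I picture these policies treating a brand-new LUN's 1GB slices. The tier names, free-slice counts and the proportional spread I've given Auto-Tier are my own assumptions for illustration; this is not EMC's actual algorithm.

    # Toy model of FAST VP initial data placement -- my own illustration,
    # not EMC's actual algorithm. Tiers are ordered fastest to slowest.
    TIERS = ["SSD", "SAS", "NL-SAS"]                          # assumed three-tier pool
    FREE_SLICES = {"SSD": 500, "SAS": 2000, "NL-SAS": 8000}   # free 1GB slices per tier

    def place_slices(num_slices, policy):
        """Return how many of a new LUN's 1GB slices land on each tier."""
        placed = {t: 0 for t in TIERS}
        if policy in ("highest", "start_high_then_auto"):
            order = TIERS                      # fill the fastest tier first
        elif policy == "lowest":
            order = list(reversed(TIERS))      # fill the slowest tier first
        elif policy == "auto":
            # Assumption: spread proportionally to free capacity, which is
            # why a brand-new LUN can start life partly on NL-SAS.
            # (Rounding means the total can be off by a slice or two.)
            total_free = sum(FREE_SLICES.values())
            for t in TIERS:
                placed[t] = round(num_slices * FREE_SLICES[t] / total_free)
            return placed
        else:
            raise ValueError("unknown policy: " + policy)
        remaining = num_slices
        for t in order:
            take = min(remaining, FREE_SLICES[t])
            placed[t] = take
            remaining -= take
        return placed

    # A 1TB LUN (~1024 slices): "auto" lands most of it on NL-SAS up front,
    # while "start_high_then_auto" starts on SSD/SAS and relies on the
    # relocation schedule to demote cold slices later.
    for policy in ("auto", "highest", "start_high_then_auto"):
        print(policy, place_slices(1024, policy))

Run it and you'll see why I flip the policy manually today: "auto" puts roughly three quarters of a fresh LUN straight onto NL-SAS in this model, while the start-high options keep it on the fast tiers until FAST has real temperature data to work with.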


One of my favorite sessions at EMC World this year was titled "VNX FAST VP - Optimizing Performance and Utilization of Virtual Pools", presented by Product Manager Susan Sharpe.
As a technical person I always worry about having to sit through sessions which end up being too sales-focused, or that the presenter will not have enough technical knowledge to answer the "hard ones" come question time. This was not the case: it was evident that Susan has probably been around from the very beginning of FAST VP, and she answered every question thrown at her with ease.
I had hopes of taking my notes from this session and writing up a post about the changes to FAST VP, but as you would expect, Chad over at VirtualGeek was quick off the mark with this post covering the goodness to come with INYO.
Rather than post the same information, I decided to write something on the three points which relate directly to FAST VP and share why, as someone who designs and implements VNX storage solutions, these changes were much needed and welcomed with open arms.
Mixed RAID VP Pools
Chad started off this section by saying this was the #1 requested change, and I totally agree.
When you look at the best practice guide for VNX Block and File, the first thing that stands out in terms of disk configuration is that EMC recommends disks be added to a pool in a 4+1 RAID 5 configuration. However, when you add NL-SAS drives to the pool, a warning message pops up: "EMC strongly recommends RAID 6 be used for NL-SAS drives 1TB or larger when used in a pool"… or something along those lines.
So the problem here is that until INYO is released, you can't mix RAID types within a pool. This means that to follow best practice when adding NL-SAS drives larger than 1TB, you need to make the entire pool RAID 6, including those very costly SSDs. Your storage efficiency, of course, goes out the window.
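To put some rough numbers on that efficiency hit, here's a back-of-envelope sketch. The drive counts and sizes are invented, and I'm assuming 6+2 private RAID 6 groups (which as far as I know was the pre-INYO layout), so treat this as an illustration rather than a sizing tool.

    # Back-of-envelope usable capacity: an all-RAID 6 pool (forced by the
    # NL-SAS warning pre-INYO) vs a mixed-RAID pool (INYO). The drive
    # counts and sizes below are assumptions for illustration only.
    drives = {                      # (count, usable TB per drive)
        "SSD":    (10, 0.2),
        "SAS":    (40, 0.6),
        "NL-SAS": (32, 2.0),
    }

    def usable(count, size_tb, data, parity):
        """Usable TB when drives are carved into data+parity private groups."""
        groups = count // (data + parity)   # leftover drives are stranded
        return groups * data * size_tb

    # Everything RAID 6 (6+2), because the pool can't mix RAID types.
    # Note it also strands 2 of the 10 SSDs outside any 6+2 group.
    all_r6 = sum(usable(c, s, 6, 2) for c, s in drives.values())

    # Mixed RAID types: 4+1 RAID 5 for SSD/SAS, 6+2 RAID 6 for NL-SAS.
    mixed = (usable(*drives["SSD"], 4, 1)
             + usable(*drives["SAS"], 4, 1)
             + usable(*drives["NL-SAS"], 6, 2))

    print("all RAID 6: %.1f TB, mixed: %.1f TB" % (all_r6, mixed))

In this made-up pool the all-RAID 6 layout loses usable terabytes overall, and the sting is worst exactly where it hurts most: the SSD tier drops from 1.6TB to 1.2TB usable, a 25% haircut on the most expensive drives in the box.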
Why the warning? In my opinion it comes down to the rebuild times associated with large NL-SAS drives: while the chance of a double drive failure within a RAID group during a rebuild is very low, it is potentially a lot higher than with the smaller, faster SAS drives. Never say never, right?
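To hand-wave at the maths: if a second drive in the same private RAID group dies while the first is still rebuilding, RAID 5 loses data. The AFR and rebuild-time figures below are numbers I've picked purely for illustration, not vendor specs.

    # Crude odds of a second failure landing inside a rebuild window:
    # P ~= surviving_drives * AFR * (rebuild_hours / hours_per_year).
    # AFR and rebuild times are assumed figures, not vendor specs.
    HOURS_PER_YEAR = 24 * 365

    def second_failure_odds(group_size, afr, rebuild_hours):
        return (group_size - 1) * afr * rebuild_hours / HOURS_PER_YEAR

    sas = second_failure_odds(group_size=5, afr=0.02, rebuild_hours=6)   # small/fast SAS, 4+1
    nl  = second_failure_odds(group_size=5, afr=0.03, rebuild_hours=36)  # big NL-SAS, 4+1
    print("SAS: %.5f, NL-SAS: %.5f, ratio: %.0fx" % (sas, nl, nl / sas))

Both numbers come out tiny, but with these assumptions the NL-SAS group sits in its danger window roughly nine times longer than the SAS group, which is exactly the kind of gap that justifies the extra parity drive.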
FAST VP Pool Automatic Rebalancing
As an EMC partner we have access to a really cool application which produces heat maps from CX/VNX NAR files and shows the utilization ratios for the private RAID groups within a pool (and much more). It was common to see one or more private RAID groups doing considerably more I/O than the others, and without a rebalance function it was difficult to remedy. (To be fair, this was typically seen on pools without FAST VP enabled.)
Now with INYO, adding drives will cause the pool to rebalance, along with what may possibly be an automated/scheduled rebalance of pool data across all drives. This means that when a customer's heat map shows over-utilized RAID groups, you can throw some more disk at the pool and let the rebalance do its thing.
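If you're curious what a rebalance conceptually looks like, here's a toy version in Python: it just shuffles 1GB slices from the fullest private RAID group to the emptiest until they even out. The numbers are invented, and EMC's real relocation logic is obviously far smarter (and temperature-driven).

    # Toy pool rebalance: even out slice counts across private RAID groups,
    # including freshly added (empty) ones. Invented numbers; EMC's real
    # relocation is temperature-based and far more sophisticated.
    def rebalance(slices_per_rg):
        """Move one slice at a time from the fullest RG to the emptiest."""
        moves = 0
        while max(slices_per_rg) - min(slices_per_rg) > 1:
            src = slices_per_rg.index(max(slices_per_rg))
            dst = slices_per_rg.index(min(slices_per_rg))
            slices_per_rg[src] -= 1
            slices_per_rg[dst] += 1
            moves += 1
        return moves

    # Three busy private RAID groups, then two new (empty) ones added:
    pool = [900, 850, 880, 0, 0]
    moved = rebalance(pool)
    print(pool, "(%d slice moves)" % moved)   # -> all RGs settle around 526

The point of the sketch is the end state: every private RAID group ends up carrying roughly the same number of slices, so the new spindles immediately start absorbing their share of the I/O instead of sitting idle behind the hot groups.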
Higher Core Efficiency
A number of times over the last few years I've encountered customers moving away from "competitor vendor X" to EMC storage who were used to much larger RAID groups, and it was sometimes a tough pill to swallow when they expected to get "X TB" and got considerably less after configuring the pool with 4+1 RAID 5 (which to date is still best practice).
Susan and Chad both mention that EMC engineering looked at the stats from customer VNX workloads and decided that 4+1 was rather conservative, so in order to drive better storage efficiency they would open up support for the following parity configurations:
  • 8+1 for RAID 5 (used with 10K/15K SAS or SSDs)
  • 14+2 for RAID 6 (target is NL-SAS)
If you've come from a unified (Celerra) background then 8+1 is nothing new, and I don't expect this to cause too much concern. Having these additional parity configurations just makes configuring a VNX that much more flexible and allows us to keep a larger range of people happy.
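The efficiency gain is easy to quantify; here's a quick sketch comparing usable-to-raw ratios for the old and new layouts (pure parity maths, ignoring hot spares and vault drives):

    # Usable fraction of raw capacity for each parity layout
    # (parity only; hot spares, vault drives etc. are ignored).
    layouts = {
        "RAID 5 4+1":  (4, 1),
        "RAID 5 8+1":  (8, 1),
        "RAID 6 6+2":  (6, 2),
        "RAID 6 14+2": (14, 2),
    }
    for name, (data, parity) in layouts.items():
        print("%s: %.1f%% usable" % (name, 100.0 * data / (data + parity)))
    # 4+1 -> 80.0%, 8+1 -> 88.9%, 6+2 -> 75.0%, 14+2 -> 87.5%

So moving from 4+1 to 8+1 buys you almost nine points of usable capacity, and 14+2 brings NL-SAS RAID 6 up near RAID 5 territory while keeping the double-parity protection those big drives need.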
What you decide to use may depend on the number of DAEs you have, the expected workload, and whether you're lucky enough to also have FAST Cache enabled. The best piece of free advice I can give is: "Know your workload."
I'm really excited about these improvements, and I think they're going to make a lot of people happy!



