Storage Power Efficiency

I’ve said it before and I’ll say it again, “When you sweat assets, all you’re gonna get is sweat.” For example, I recently replaced my 10-year-old gas boiler. Not because it didn’t work, and not because my house is cold. But because of these 4 requirements:

  1. Need to lower my gas bills – improve efficiency
  2. Concern over failure – reduce risk of failure
  3. Need for instant hot water – more effective solution
  4. Rising monthly support costs – cost avoidance

Have my requirements been met? Absolutely. The boiler comes with a 10-year warranty, gas consumption is lower, and we’ve always got hot water. Plus, taking into account lower gas consumption, the inclusive warranty and repair-bill avoidance, I estimate the new boiler will pay for itself in less than 5 years.

Aged IT infrastructure is no different. If you’re sweating those assets, you may want to think again, and take advantage of efficient and effective solutions. Refreshing your storage arrays is a great example of lowering your power bills while improving IT Ops efficiency and service delivery back to the business. Let’s look at power consumption. And, to put it in context, let’s see how much it costs to make a cup of tea.

Rising power costs are a continual concern. Both domestic and commercial power costs have increased by over 50%, and they aren’t likely to come down anytime soon. My kettle consumes 2,840 watts of power, so that’s 2.84 kW. And it boils 0.5 litres of water in just 1 minute. Power cost is measured in kilowatt hours (kWh). So at a UK tariff of £0.34 per kWh, boiling 0.5 litres of water costs me about £0.02. But if I was to leave my kettle on for 24 hours, it would cost me £23.20.

(2.84 kW x 24 hours) x £0.34 per kWh ≈ £23.20 per day

If I left my kettle boiling for a year, it would cost:

£23.20 x 365 days = £8,468 per year (€9,596)

And if I left my kettle continuously on for 5 years:

£8,468 x 5 years = £42,340 (€47,982)
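If you want to sanity-check the arithmetic with your own kettle or tariff, here’s a minimal Python sketch of the same sums. Note it keeps full precision, so the yearly and 5-year totals land slightly under the figures above, which compound the rounded £23.20 daily cost.

```python
# Kettle running-cost sketch using the figures from this post.
KETTLE_KW = 2.84           # kettle power draw in kW
TARIFF_GBP_PER_KWH = 0.34  # UK tariff used above

def run_cost(hours: float) -> float:
    """Cost in GBP of running the kettle continuously for `hours`."""
    return KETTLE_KW * hours * TARIFF_GBP_PER_KWH

print(f"One boil (1 min): £{run_cost(1 / 60):.2f}")         # ~£0.02
print(f"One day:          £{run_cost(24):.2f}")             # ~£23.17
print(f"One year:         £{run_cost(24 * 365):,.0f}")      # ~£8,459
print(f"Five years:       £{run_cost(24 * 365 * 5):,.0f}")  # ~£42,293
```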

So here’s a question… if I over-provision my kettle and fill it to 1.7 litres, how much more will it cost? The answer: roughly 240% more energy, and 240% more time to boil, since 3.4 times the water takes 3.4 times as long at the same power. Obviously, over-provisioning my kettle is not efficient, not an effective use of time, and bad for Planet Earth, our home. Fortunately, I don’t leave my kettle boiling water 24/7. And I don’t drink too much tea. But IT infrastructure runs 24/7, every day. And it is very power hungry. So here’s a tip… don’t sweat that asset. Modernize and save power costs, improve operational efficiency, and deliver improved services back to the business.

Let’s look at the outcomes in the following example. Here we modernize to Dell Technologies PowerStore 1200T, taking advantage of software-defined, hardware-accelerated 4:1 data reduction, with high-bandwidth NVMe SSDs. As you can see, the benefits and outcomes are compelling. Not just capacity and performance gains, but also considerable power reduction. Here are the stats:

  • 91% less power
  • 80% less rack space
  • 96% less latency
  • > 2.2x more performance

And there’s more… if you move to modern storage that runs on less than 1 kW and takes only 2 rack units, there’s the opportunity to consolidate data center infrastructure. Moving servers, network switches and more into a single rack will result in fewer data center racks. Fewer racks equate to lower data center costs. And reduced IT operational costs.

So what are the likely cost savings? Well, taking into consideration all that a modern solution like Dell Technologies PowerStore delivers, you’ll be surprised. See the example calculation below. Do you still want to sweat those old assets?
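As a hedged sketch of how such a comparison might look, here’s the same kWh arithmetic applied to a storage refresh. The 5 kW draw for the aged array is an illustrative assumption, not a measured figure; the new array’s draw simply applies the 91% power reduction quoted above.

```python
# Hypothetical array power-cost comparison -- illustrative draws only.
TARIFF_GBP_PER_KWH = 0.34
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(draw_kw: float) -> float:
    """Annual power cost in GBP for a device drawing `draw_kw` 24/7."""
    return draw_kw * HOURS_PER_YEAR * TARIFF_GBP_PER_KWH

old_kw = 5.00                 # assumed draw of an aged array (illustrative)
new_kw = old_kw * (1 - 0.91)  # 91% less power, per the stats above

print(f"Aged array:   £{annual_power_cost(old_kw):,.0f} per year")  # ~£14,892
print(f"Modern array: £{annual_power_cost(new_kw):,.0f} per year")  # ~£1,340
print(f"Saving:       £{annual_power_cost(old_kw - new_kw):,.0f} per year")
```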

Part 4: VSAN deduplication and compression – The TARDIS Effect


In the UK we have a saying, “You can’t put a quart in a pint pot,” meaning something large won’t fit in something small. As a kid growing up in the 70s I’d feast on a diet of Star Trek and Dr Who. But it was the “science” behind the fiction that got me thinking. Take the TARDIS, a telephone box as a time travelling spaceship, pure genius. Small on the outside, but crazy large on the inside. How did they do that? Well, now I have the answer – deduplication with compression.

My VSAN journey has reached Data Reduction Street. And VSAN does deduplication with compression just fine. I’m getting a data reduction ratio of 3:1 with a mix of Windows Server, Linux and Windows 7 virtual machines. And the crazy thing is I’m now saving almost as much capacity as I started with. Let’s break that down…

Each host has 1 disk group comprising 2 x SSDs: 1 SSD dedicated to data caching, while the other is dedicated to data storage (vmdk, swap files, etc.). So, my VSAN datastore is made up of 3 x SSD data disks.

The disks were marketed as 256 GB, where GB means decimal multiples of 1,000 (256 x 10^9 bytes). However, vSphere reports capacity in GiB, which is binary multiples of 1,024 (2^30 bytes). So, converting 256 GB to GiB we get:

256GB x 0.931323 = 238.42 GiB

So, with my datastore made up of 3 x SSD I get the following total capacity:

3 x 238.42 GiB = 715 GiB (rounded)

But deduplication and compression requires “thinking space”, so I have some overheads. Therefore, my usable capacity is:

715 GiB – 37.85 GiB = 677 GiB (rounded)
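Here’s a minimal Python sketch of the same conversion and capacity sums, should you want to rerun them for different disk sizes:

```python
# GB (decimal, 10^9 bytes) to GiB (binary, 2^30 bytes), plus the
# usable-capacity sum for my 3-disk VSAN datastore.
GB, GiB = 10**9, 2**30

def gb_to_gib(gb: float) -> float:
    return gb * GB / GiB

disk_gib = gb_to_gib(256)      # ~238.42 GiB per marketed 256 GB SSD
raw_gib = 3 * disk_gib         # ~715 GiB raw datastore capacity
usable_gib = raw_gib - 37.85   # minus dedupe/compression "thinking space"

print(f"Per disk: {disk_gib:.2f} GiB")    # 238.42 GiB
print(f"Raw:      {raw_gib:.0f} GiB")     # 715 GiB
print(f"Usable:   {usable_gib:.0f} GiB")  # 677 GiB
```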

Now into my VSAN datastore I’ve provisioned 10 VMs (2 x Linux, 2 x Windows Server, 6 x Windows 7). Before data reduction my storage provisioning requirement is almost 1 TiB, which of course won’t fit in 677 GiB. Unless we have “The TARDIS Effect”.

Now, after dedupe and compression I’m only using 314 GiB. Wow! Capacity savings are a crazy 621 GiB. That’s almost what I started with. And I even got 363 GiB free for more VMs.
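Backing the ratio out of those numbers (the ~935 GiB provisioned figure is inferred from the stated 621 GiB saving plus the 314 GiB used):

```python
provisioned_gib = 314 + 621  # logical provisioned capacity (inferred, ~1 TiB)
used_gib = 314               # physical capacity used after data reduction
usable_gib = 677

print(f"Reduction ratio: {provisioned_gib / used_gib:.1f}:1")  # ~3.0:1
print(f"Free for more VMs: {usable_gib - used_gib} GiB")       # 363 GiB
```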

Now that’s with a data reduction ratio of 3:1 (rounded). If doing VDI with many similar desktops cloned from the same master VM, I’d expect a reduction ratio of 10:1. So yes, with VSAN and all-flash, a quart will fit in a pint pot, many times over.

What’s been learnt?

  • VSAN deduplication and compression works
  • No complex configuration, just click the Enable button
  • Using hyper-converged with all-flash makes good economic sense given capacity savings achieved

Part 3: VSAN pit stop

[Image: the default VSAN storage policy (RAID1) applied to my VM]

At this stage of our journey let’s take a pit stop and check understanding… OK, if you’re a traditional storage administrator it’s time to forget everything you know. Software-defined storage is different, very different. But this is good. First forget disk RAID groups, then forget storage pools, and now forget LUNs. When you’ve forgotten all that, forget storage sizing as you know it.

With VSAN, host and disk resilience is set at the VM level. Each VM can have different settings, simply by applying a storage policy to each VM. Really, very simply, server disks are presented to ESXi hosts as just a bunch of disks (JBOD). No hardware RAID. And with VSAN datastore storage policy settings we get very granular control over hosting our VMs. Great for solving the noisy neighbour problem.

Excited to get a feel for VSAN I’ve migrated my Windows 7 VDI desktop from a spinning disk datastore to my SSD enabled VSAN datastore. My first observation – with all-flash VSAN, VDI performance is lightning fast.

The above shows the default VSAN storage policy applied to my VM. With the stripe width set to 1 there isn’t really any striping; rather, a single instance of the vmdk object is replicated. The policy tolerates 1 host failure, or a single failed SSD. With RAID1, the objects that make up my VM (vmdk, etc.) are copied to SSD on both ha-esx01 and ha-esx02, then kept in sync. My 3rd host acts as a witness, preventing a split-brain scenario between ha-esx01 and ha-esx02.

Just as RAID1 mirroring in traditional storage has a +100% overhead, so does the RAID1 storage policy when applied to VMs. For example, my Windows 7 VDI VM has a 32GB virtual hard disk (vmdk). So, with RAID1, 64GB will be required. However, with my all-flash build I can leverage deduplication and compression. And with data reduction I’m getting a reduction ratio of 1.5:1. So, after data efficiency services I only need datastore capacity for 42GB. That’s a saving of 22GB. Now that’s with just 1 VM. As I create more VMs the data reduction ratio will increase, saving me even more disk space.

With only 3 hosts I can’t better RAID1. But with 4 hosts I could do RAID5. And with 6 hosts I could go to RAID6. It’s worth knowing the disk capacity overheads for each policy:

  • RAID1 +100% (2 x replication) – can tolerate 1 failure
  • RAID5 +33% (3+1) – can tolerate 1 failure, but uses less disk space
  • RAID6 +50% (4+2) – can tolerate 2 failures, but uses more space than RAID5

If it’s performance you’re after, RAID1 is the best option as I/O is split across hosts. But with flash you’ll be getting stellar performance whichever option you choose. In fact, VSAN supports up to 100,000 IOPS per host.

Now, if you’re a storage administrator you’ll be thinking, “How do I size that?” I know, it’s not easy, as each VM can have a different RAID storage policy applied. The good news is hyper-converged is easily scalable, both scale-up and scale-out. So it’s not such a problem if you get it wrong. My advice is to group your VMs by the storage policy to be applied, and then total the required storage. Then add the capacity overhead for each policy’s RAID level. Finally, apply a (very conservative) estimated data reduction ratio of 1.5:1. This approach will leave some in reserve, taking care of space used for other files. So, for example:

[Image: example sizing calculation]

So, for my 60 x VMs using a mix of RAID1 and RAID5, I’ll need approximately 6TB of disk space. This approach should allow space for VM snapshots, swap files and configuration files. Of course, I’m assuming a conservative data reduction ratio of 1.5:1. When hosting many similar VMs I’d expect a data reduction ratio of 10:1. I’ll see if I can test that theory and post results.
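Here’s a minimal Python sketch of that sizing method, assuming an illustrative mix of 60 x 100GB VMs split across RAID5 and RAID1 policies (not my actual lab figures):

```python
# Sizing sketch: group VMs by storage policy, apply that policy's RAID
# capacity overhead, then apply a conservative data reduction ratio.
RAID_OVERHEAD = {"RAID1": 2.0, "RAID5": 4 / 3, "RAID6": 1.5}  # usable -> raw
REDUCTION_RATIO = 1.5  # very conservative dedupe/compression estimate

def required_capacity_gb(groups: dict) -> float:
    """`groups` maps policy name -> total provisioned GB under that policy."""
    raw = sum(gb * RAID_OVERHEAD[policy] for policy, gb in groups.items())
    return raw / REDUCTION_RATIO

# e.g. 40 x 100GB VMs on RAID5 plus 20 x 100GB VMs on RAID1:
groups = {"RAID5": 40 * 100, "RAID1": 20 * 100}
print(f"~{required_capacity_gb(groups) / 1000:.1f} TB required")  # ~6.2 TB
```

Which lands right around the ~6TB figure above.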

VxRail is Dell EMC’s hyper-converged appliance, inclusive of vSphere and VSAN. There are many server nodes and options, offering choice depending on storage capacity and compute requirements. It’s ready built and good to go. Get it up and running in minutes. Even the entry E Series gives almost 6TB of all-flash storage. And with all-flash we get data deduplication and compression.

[Image: Dell EMC VxRail appliance spec sheet]

What’s been learnt?

  • Forget what you know as a traditional storage administrator. No disk RAID groups, storage pools, LUNs
  • Host and disk failure tolerance can be applied at virtual machine level (software defined RAID1, RAID5/6)
  • Different storage policies can be applied per each VM
  • Traditional storage sizing won’t work. Size by VM, storage policy applied and data reduction estimated. Then allow headroom for snapshots etc.

Part 2: My VSAN journey continues

[Image: overview of the dedicated VSAN vSphere Distributed Switch]

Sometimes I just have to do it. And that’s GEEK-OUT. Over the weekend I’ve feasted on geek-out. And guess what? It’s alive!! My VSAN is alive……!

This was my weekend check list:

[Image: weekend task checklist]

Although not strictly needed, as VSAN has been around for a while now, I wanted to upgrade my lab to the latest vSphere version. And I must say, upgrading my vSphere 6.0 environment to 6.5 was seamless. In particular, I really liked the two-stage approach to upgrading vCenter. VMware continues to make software-defined infrastructure easy. Great job guys…

So, I can confirm my VSAN is alive and kicking. And I love it. But in this blog I’ll not talk so much about VSAN; instead I’ll talk about networking. And networking is key to VSAN. So think networks first.

As with all other vSphere clustered services, VSAN relies on high-speed, inter-node connectivity. Although it’s tempting to jump straight in, my advice is to first carefully plan your network. I decided to use a dedicated VLAN with 2 x dedicated ports per host. Now, as this is hyper-converged, all network traffic including VSAN will go through the same IP switches. There’s no separate storage fibre channel switching here; I’ll use the same IP switches for VM and VSAN traffic. So, it’s important to segregate VSAN traffic to ensure best performance.

On my Cisco managed switch I configured 6 ports to be used for my VSAN VLAN. Of course, in production you’d use a stack of at least 2 x IP switches to ensure resilience, but hey, this is my lab. When configuring my VLAN these are the settings I used:

  • VLAN ID: 2
  • Dedicated subnet: 10.208.218.1/28
  • Jumbo frames: Enabled
  • Multicast: Enabled
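As a quick sanity check on the subnet sizing, Python’s standard ipaddress module confirms a /28 gives 14 usable addresses, plenty for one VSAN VMkernel IP per host in a small cluster (note the network address for 10.208.218.1/28 is 10.208.218.0):

```python
import ipaddress

# Dedicated VSAN subnet from the settings above.
vsan_net = ipaddress.ip_network("10.208.218.0/28")
usable = list(vsan_net.hosts())
print(f"{vsan_net}: {len(usable)} usable addresses "
      f"({usable[0]} to {usable[-1]})")  # 14 usable: .1 to .14
```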

With the physical IP switch configured, I set about configuring the required vSphere virtual switch. Although not strictly required, my design decision was to create a vSphere Distributed Switch (VDS). But as I said, my objective is hands-on for the learning opportunity. Shown above is the dedicated VDS created for VSAN.

Now my network carries dedicated VSAN traffic. In my next blogs I’ll talk about how each server node had a disk group added to VSAN, how these disk groups are joined over the network to make up the single datastore presented to each ESXi host, and then how VSAN storage objects are distributed over this network.

What have I learnt?

  • Network planning is key. Do this first
  • Best to use dedicated VLAN if using same switches for all network traffic
  • Ensure resiliency by using multiple ports per host. And at least 2 x IP switches in your stack
  • Enable multicast and Jumbo frames on your switch
  • Document your VSAN network for future reference and troubleshooting

Journey to the centre of the VSAN

Part 1

Fantastic!! Server room and data centre infrastructure is reduced to its lowest common denominator – the x86 rack server. This all started with server virtualisation, then network virtualisation, and is now complete with storage virtualisation. So, with just rack servers, top-of-rack IP switches and software-defined infrastructure, we get simple scale-out nodes inclusive of block SAN storage, networking, and of course the hypervisor.

When running virtual servers, hyper-converged appliances offer the next step in data centre consolidation and simplicity. But storage hasn’t disappeared; it’s still there under the hypervisor covers. I learnt a long time ago that the best way to learn is by doing. So, over the next few weeks I’ll be getting under those covers to understand vSphere VSAN storage. This journey will start by rebuilding my vSphere lab. Then I’ll continue with VSAN installation and configuration. Next I’ll test out functionality, including expected behaviours using vMotion and HA. And at journey’s end, I’ll test enterprise storage functions including storage policies, deduplication and compression.

But before getting started, here’s a little about VMware’s VSAN… First, the bottom line – if right for your data centre, VSAN can replace external, aged SAN storage when used with VMware. VMware gave us highly available virtual servers and the ability to move VMs between hosts, but only if we added external SAN storage. Now, with storage defined as software, the virtual SAN resides internally, inside the rack servers.

The vision for this journey is to have 3 x vSphere 6.5 ESXi servers (nodes) clustered together. Each server will be installed with 2 x 256GB SSD drives enabling me to test VSAN deduplication, compression and RAID5/6 capabilities. For networking I’ll install additional 1Gbps network cards to be dedicated to VSAN inter-node traffic. And so that SSD disks are dedicated to VSAN, I’ll install the ESXi hypervisor to USB drives, and set ESXi to boot from USB.

At journey’s end, this is what my 3 x node lab will look like…

[Image: 3 x node VSAN lab overview]

So just what is VSAN?

Why should we get excited and learn this stuff? Well, firstly, VSAN fully supports all the great tried and tested vSphere features that have made our lives so much easier in the data centre. So, features that rely on SAN storage just work: vSphere High Availability (HA), vSphere Distributed Resource Scheduler (DRS) and, of course, vMotion.

VSAN provides both resiliency and scale-out storage. Resiliency is achieved by integrating with vSphere cluster features while adding RAID-style resiliency, ensuring VMs are protected should disks fail. Scale-out is achieved because when we add nodes with additional disks, we’re also adding compute and memory resources.

I’m really keen on this scale-out capability. Traditional SANs based on 1 or 2 active storage processors (SPs) scale up as we add more disk. But it’s like when I load my car before a camping trip. The boot’s full, bikes are loaded on the roof rack and I’m towing a trailer. Needless to say, performance suffers. It’s the same when fully loading traditional scale-up SANs. More load, but no more horsepower. But as we add VSAN nodes, we’re adding both capacity and compute horsepower.

And then there’s the fact that VSAN is not an additional VM run by the hypervisor. VSAN is built directly into the hypervisor kernel. This approach is very smart and has the following advantages:

  • Shortest data path, with high performance as outcome
  • No need to manage, support, update separate virtual SAN VMs, with low admin overhead as outcome

As capacities have gone up and prices have come down, all-flash is the way to go. And with all-flash VSAN we get up to 100,000 IOPS per node. VSAN also supports up to 200 VMs per node, or 6,400 VMs per 64-node cluster. With all that scale-out grunt, VSAN is Enterprise ready and fully capable. From branch offices running 2 nodes, to Private Clouds running up to 64 nodes, VSAN scales. As a solutioner I see many use cases.

Hardware

Until I win the lottery I don’t have the budget to run hardware compatible with VMware’s HCL. So, I’ll use my basic x86 tower servers with commercial-grade SSDs. I’ll also be using 1Gbps networking. Although in theory this’ll work, 10GbE should be used with all-flash. But this is my lab, and the objective is to learn and share my experiences.

To be honest, for quickest time to value my recommendation is to buy, not build. With VxRail, Dell EMC has a whole range of hyper-converged appliances good to go, check out:

And if you want a fully built, racked and stacked solution inclusive of IP switches, I’d recommend VxRack SDDC. Taking this approach you’ll also get additional support taking care of all the stuff we don’t like doing. Yep, driver updates.

There’s a knock on the door. Exciting times, the postman has just delivered my new SSD drives and SATA cables. Now it’s time to rebuild my lab with SSDs and vSphere 6.5. When I’ve clustered my servers I’ll configure VSAN. Journey to the centre of the VSAN has begun…

VDI Optimised Storage – XtremIO

It makes good sense. If moving from persistent physical Windows desktops, then persistent VDI desktops are a good fit. Unless, of course, your users are task workers, for example order entry clerks or call center operatives. In this case, session-based VDI such as Microsoft RDS or Citrix XenApp would be the most efficient and cost-effective desktop provisioning solution.

Until recently, persistent VDI desktops would blow any budget, or fail when moving from POC to production. Simply put, persistent VDI is storage hungry. At scale we have many separate virtual desktops all competing for storage space and performance. So, yesterday the only answers were lots of spinning disk, detuning the desktop to run on minimal IOPS, or inflexible solutions that boot many temporary VDI desktops from a single master image. The results: poor user experience, and we’ve taken the personal out of PC. So VDI at scale sucks, right?

The good news is that today’s storage technologies allow us to build cost-effective, working VDI solutions using persistent VDI desktops that are easily managed – just as you manage your physical PCs today. And looking at it through users’ eyes, well, now I’ve got mobility with a user experience that’s not detuned or inferior. Hey! I’m a happy user and I love my IT department. 🙂

XtremIO by EMC

So what is optimised storage? Basically, it’s about getting more for less. And we all like that. How can I deliver on the promise of VDI with optimised storage? Let me explain through example…

I’ve been lucky enough to have seen the light. For a few months now I’ve been working with XtremIO.

Now, VDI at scale has 2 big problems: required storage capacity and required I/O performance. Developments in storage technologies using Enterprise Flash SSD drives with intelligent storage arrays solve these problems. XtremIO is EMC’s Enterprise-class, intelligent all-flash array, capable of providing storage capacity through intelligent data deduplication with consistent I/O performance at scale. Many thousands of VDI desktops can now be hosted within a small and easy to manage footprint. I kid you not, when I set up XtremIO it took only a couple of hours, including coffee breaks.

I’m now convinced that to satisfy the storage needs of VDI at scale, two key storage capabilities are required. In fact, I’ll go as far as to say that no Enterprise VDI solution should be without the following:

  • Inline deduplication: Breaks hosted VDI desktops into small data chunks. As data enters the array, only unique data chunks are stored and shared on demand. Of course, hosting VDI with many Windows 7 desktops made up of the same operating system and application files provides excellent deduplication. For example, with VDI XtremIO can reduce storage requirements by as much as 90%. That’s a big saving in complexity, data center space and power. Simply said, “Don’t store it, dedupe it.” (See the sketch after this list.)
  • Flash performance: Many thousands of VDI desktops equate to hundreds of thousands of storage input/output requests, or IOPS (input/output operations per second). To meet this need, traditional storage spreads the load over many hundreds of spinning disk drives fronted by a large memory cache. Using all enterprise-class eMLC solid state drives (SSDs), XtremIO removes the need for many spinning disk drives, memory cache and storage data tiers, eliminating complexity. Simply said, “No data tiers and complex policies, it’s all fast. And fast is always on.”
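To make the deduplication idea concrete, here’s a toy Python sketch of fixed-size-chunk, fingerprint-based dedupe. It’s purely illustrative: XtremIO’s actual chunking, fingerprinting and metadata structures are its own.

```python
# Toy inline deduplication: store each unique chunk once, keyed by its
# content fingerprint; duplicate chunks become cheap references.
import hashlib
import os

CHUNK_SIZE = 4096  # bytes; real arrays pick their own granularity

class DedupeStore:
    def __init__(self):
        self.chunks = {}        # fingerprint -> unique chunk bytes
        self.logical_bytes = 0  # what clients think they wrote

    def write(self, data: bytes) -> list:
        """Store `data`, keeping only unique chunks; return fingerprints."""
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # store the first copy only
            refs.append(fp)
        self.logical_bytes += len(data)
        return refs

    def ratio(self) -> float:
        physical = sum(len(c) for c in self.chunks.values())
        return self.logical_bytes / physical

# Two "desktops" sharing the same OS/application blocks dedupe heavily:
store = DedupeStore()
os_image = os.urandom(CHUNK_SIZE * 100)  # stand-in for shared OS blocks
store.write(os_image + b"user A data")   # desktop 1
store.write(os_image + b"user B data")   # desktop 2
print(f"Reduction ratio: {store.ratio():.1f}:1")  # ~2:1 for two desktops
```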

By meeting VDI storage needs with XtremIO, VDI desktops are hosted within a minimal data center footprint. And because the whole solution is simplified, VDI desktop deployment can typically be completed in a much shorter time. Taking advantage of deduplication and SSD performance enables VDI without the need to detune desktop provisioning. So there’s no need to compromise the end-user experience.

Table 1 below summarizes how XtremIO meets VDI requirements.

VDI requirement | XtremIO feature
Very high level of random small IOPS supplying many desktops at once. Consistent end-user experience | High IOPS performance using an all-flash SSD array. Consistent <1ms response times
Users have non-persistent and persistent VDI images. Persistent desktops are easier to manage and provide best end-user flexibility, but require large amounts of storage capacity | Inline deduplication up to 90% enables both non-persistent and full persistent VDI desktops, with 1:1 user to personal VDI desktop mapping or shared pooled VDI desktops
Ease of management by desktop teams. Use of specialists for complex storage solutions increases operating costs; storage must be simple | Easy to use graphical management interface. No complex storage configurations across multiple tiers. Only 3 steps to install and set up. Low OPEX
Simple and rapid VDI desktop provisioning | Full VMware VAAI integration. Offloads VMware copy and clone tasks to XtremIO. Enables rollout and updates of hundreds of desktops in minutes
Grow VDI desktop numbers within the same storage system | Modular solution from 1 X-Brick. As scaled out, linear increase of capacity and IOPS as load is spread across all X-Bricks in the cluster
Low data center hosting costs | Small data center footprint; each X-Brick is only 6 rack units and can support over 2,500 persistent VDI desktops or 3,500 non-persistent. Or many thousands more if using RDS or XenApp
Low cost per VDI desktop | Small footprint equals reduced cost

As a VDI guy I’m excited. It’s not a JBOD (just a bunch of disks); XtremIO is a fully featured, intelligent block storage array that has been designed and developed from the ground up to take full advantage of flash solid state disks (SSDs). For the Enterprise, XtremIO has been designed to deliver the high performance, reliability and scalability expected when providing primary storage for many business end-user desktops and applications.

XtremIO is most suitable for VDI environments with many common system and data files, for example Windows 7 desktops, where it returns a large amount of usable storage capacity following inline data deduplication. With an average 5:1 reduction ratio, logical usable capacity starts at 37.5TB for a single X-Brick, scaling to 150TB when using a cluster with four X-Bricks. And performance scales from 250,000 to 1,000,000 IOPS, depending on the read/write mix and the amount of duplicated data.
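The linear scale-out arithmetic is simple enough to sketch; the per-X-Brick figure below is the one quoted in this post:

```python
# Logical capacity per X-Brick count, at the average 5:1 reduction ratio.
LOGICAL_TB_PER_XBRICK = 37.5

for xbricks in (1, 2, 4):
    print(f"{xbricks} X-Brick(s): "
          f"{LOGICAL_TB_PER_XBRICK * xbricks:.1f} TB logical")
# 1 X-Brick: 37.5 TB, 2: 75.0 TB, 4: 150.0 TB
```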

[Image: XtremIO cluster with 4 X-Bricks]

XtremIO is made up of X-Bricks. Each X-Brick is a self-contained unit configured with no single point of failure: highly available with dual active controllers, multiple connectivity paths and battery backup. In the event of power or component failure this configuration enables VDI services to continue without disruption, allowing for scheduled maintenance and component replacement.

[Image: Small, but powerful. A single X-Brick]

Often in life, new problems require even newer solutions. I love it. XtremIO solves the 2 storage problems associated with VDI at scale: capacity and performance. By being an intelligent all-flash array and not a fast JBOD, XtremIO simplifies VDI, enabling rapid rollout of thousands of VDI desktops with easier ongoing support and management.

EMC sizing and validated testing has proven a single 6U X-Brick can support over 2,500 persistent VDI desktops or up to 3,500 non-persistent. And if taking a blended approach to VDI with a mix of RDS or XenApp session-based, non-persistent and persistent Windows 7 desktops, XtremIO can host many thousands of desktops within an X-Brick cluster. And I’m not joking, I’m currently working on a solution that’ll host 4,000 XenApp sessions and 1,000 persistent VDI desktops all from a single X-Brick. This will be tested using the LoginVSI workload generator. Maybe I’ll get more, I wouldn’t be surprised.

VDI is now part of most business desktop strategies, with the promise of consolidating desktop computing in the data center for improved manageability, efficiency and security. VDI is also seen as the enabling technology supporting flexible and increasingly mobile working styles. Storage has long been a complex problem when delivering VDI at scale, but XtremIO simplifies and saves costs with a much smaller and easier to manage footprint. Now, the VDI desktop can be personal again. And simply managed with your existing desktop management tools such as SCCM.

The results of using XtremIO for VDI at scale are:

  • Low cost per desktop: Taking advantage of data reduction through deduplication and high-performance flash SSD drives, the storage footprint is minimal. VDI is an ideal use case, as very high data reduction through deduplication is possible by leveraging the commonality of hosted Windows 7 operating system files and application data.
  • No compromised user experience: As an intelligent all flash array data is load balanced across all X-Bricks and all SSD drives across the XtremIO cluster. The approach maintains consistently <1ms response times. The result is a VDI solution that can service all worker types including demanding knowledge workers.
  • Simplicity: XtremIO has a single console that manages the cluster as a single entity. Because Enterprise features are automatic and on by default, and because there is no data tiering using multiple RAID groups, levels and volumes, setup and ongoing management tasks are minimal.
  • Linear scale: Adding X-Bricks to the cluster is true linear scale-out for capacity and performance. All data and workload are serviced by all SSDs and controllers within the cluster.
  • High Availability: No single point of failure with built in redundancy and efficient SSD disk protection.
  • Reduced costs: Smaller data center footprint saving power and hosting costs. Reduced ongoing costs as a result of simplicity and ease of management.

So my message is clear – if doing VDI at scale you need inline data deduplication to solve the capacity problem. And to solve the performance problem you need consistently high IOPS with very low latency. Only eMLC SSD Flash drives coupled with intelligent data services will give you this. Basically, if you’re designing your VDI solution based on detuning the desktops and lots of spinning disk, think again. That was yesterday, this is now.

My advice: check out http://xtremio.com/vdi

Accelerate VMware ESX with SSD Flash

I’m having a bit of a “geek out” this weekend. But before I test my speedy new SSDs attached to VMs hosted within my ESX hosts, I thought I’d share just how easy it is to install and configure Samsung 840 Pro SSD Flash disks. My servers are HP ML110s and I’m using VMware vSphere 5.1.

1. Install the SSD within the caddy. Then install within the server disk drive bay.


2. Connect the SSD to the 6Gb/s SATA bus, and connect power cables.

Note: Samsung 840 Pro is also compatible with 3Gb/s.


3. Start up your ESX server, then connect and log in using the vSphere Client.

4. In the vSphere Client, select the ESX host’s Configuration tab and then select Storage Adapters. Click Rescan All.


5. Click OK to scan all attached devices.

6. Select Storage, then click Add Storage.


7. Select Disk/LUN storage type and then click Next.

8. You should now see the SSD has been added. Select your SSD, then click Next.


9. Select VMFS-5 file system. Click Next.

10. Give the Datastore to be created a name, e.g. 01flashSSD01.


11. Click Next to create the Datastore using all available disk space.

12. Review your configuration. Click Finish.


And that’s it. I did say easy. You now have high-speed SSD storage installed, configured and presented as a VMFS-5 Datastore. In my next blog I’ll run a 64-bit Windows 2008 server on SSD and use Iometer to compare the IOPS achieved against traditional spinning disk.

Volumetric Efficiency with Turbocharged EMC FAST VP

When I left school, working in motor vehicle engineering, I learnt how adding a turbocharger to an engine increases its volumetric efficiency. For example, fit a turbocharger to a 1.6ltr engine and it performs like a 2.0ltr engine – we’ve increased the volumetric efficiency. But the real advantage comes when 2.0ltr performance is not required, because now we get the fuel efficiency of a smaller lump. So what we’ve got here is performance on demand, and efficiency from having a smaller engine in the first place. Why am I talking about this? Because when you think about it, intelligent storage with a bit of Flash follows the same concept – Flash turbocharges storage by increasing performance on demand.

Using optimized software with automated tiering technologies that leverage Flash/SSDs, top storage vendors such as EMC are giving us volumetric efficiency by:

  • Providing a caching layer that keeps data in cache (memory) serving the hottest data.
  • Adding a performance SSD tier in a multi-tiered array (turbocharger) that’s used when needed.

I call it volumetric efficiency; EMC calls it FAST VP (Fully Automated Storage Tiering for Virtual Pools). With FAST VP, storage is managed in pools consisting of different drive types: Flash SSD, together with high-performance SAS disks and high-capacity nearline SAS (NL-SAS). Cache will deal with high I/O bursts. But during normal running, FAST VP turbocharges data blocks by intelligently moving them between tiers on demand. For example, highly active data will be given a boost by being moved from SAS disks to the SSD tier, and back again when activity decreases.

Just like my turbocharged engines, FAST VP results in a smaller, efficient storage footprint that requires less data center space, reduced power and less cooling. Of course, turbochargers are now fitted to most cars, trucks, trains and even ship engines. In motor vehicle technology terms it’s why we get more for less. It’s why my car can go from 0-60mph in less than 9 seconds. And it’s why I get 45mpg when not heavy footed. Yes, it’s a diesel, but it performs like a petrol sports car.

So, when refreshing your Enterprise storage, think volumetric efficiency and think how your car works. As EMC says, “A little bit of Flash goes a long way.” The picture below illustrates my point, where 3 racks of traditional 15k storage have been replaced by a single turbocharged stack using FAST VP with an SSD Flash tier.

[Image: FAST VP gives storage volumetric efficiency, turbo boost]
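To illustrate the tiering concept (and only the concept; FAST VP’s real policy engine is far more sophisticated), here’s a toy Python sketch that promotes the hottest blocks to a small SSD tier and leaves the rest on spinning disk:

```python
# Toy automated tiering: rank blocks by recent I/O and keep the hottest
# few on the SSD "turbo" tier; everything else stays on SAS/NL-SAS.
from collections import Counter

SSD_TIER_SLOTS = 2  # tiny SSD tier: only the hottest blocks fit

def retier(io_counts: Counter):
    """Split blocks into (ssd, hdd) tiers by recent I/O activity."""
    ranked = [block for block, _ in io_counts.most_common()]
    return ranked[:SSD_TIER_SLOTS], ranked[SSD_TIER_SLOTS:]

io = Counter({"blockA": 900, "blockB": 850, "blockC": 40, "blockD": 5})
ssd_tier, hdd_tier = retier(io)
print(f"SSD tier (turbo boost): {ssd_tier}")  # ['blockA', 'blockB']
print(f"HDD tier:               {hdd_tier}")  # ['blockC', 'blockD']
```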

Samsung SSD 840 Pro vs 15K RPM Disks

It’s February 2000. And Seagate have announced the world’s first hard drive to operate at 15,000 RPM, delivering 3.9 ms average seek times – the fastest hard drive in the world. But that’s where it stopped. Hard drives got smaller, but not faster. The laws of physics have prevented spinning disks from going any faster. So we aggregate many disks using RAID in order to satisfy IOPS-hungry OLTP databases and now VDI. Storage hasn’t kept pace with Moore’s law, not like CPUs. Not until now…

[Image: HP DL360 servers with ULTRA320 15k HDDs]

In my garage lab, I’ve recently retired my 2 trusty old HP DL380 servers. Equipped with 15K disks, these servers were the best I could get, and my workhorse for demonstrating VDI. So just what did I get? Let’s do the math.

[Image: ULTRA320 SCSI HDD 15k, 167 IOPS with 64K blocks]

As we can see, each disk gives me up to 167 IOPS. With 6 disks per server, at max I get 6 x 167 = 1,002 IOPS. Taking into account 70% utilization, we get 1,002 x 0.7 = 701 IOPS available per server.
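That 167 IOPS figure is close to what a standard rule of thumb predicts for a 15K drive: roughly 1 / (average seek time + half a rotation), with the small difference down to the 64K block transfer time. A quick Python sketch:

```python
# Rough spinning-disk random IOPS estimate: seek time plus half a rotation.
def disk_iops(avg_seek_ms: float, rpm: int) -> float:
    half_rotation_ms = 60_000 / rpm / 2  # time for half a revolution
    return 1000 / (avg_seek_ms + half_rotation_ms)

per_disk = disk_iops(avg_seek_ms=3.9, rpm=15_000)  # ~169 IOPS
server = 6 * per_disk                              # 6 disks per server
print(f"Per disk:      {per_disk:.0f} IOPS")
print(f"6-disk server: {server:.0f} IOPS ({server * 0.7:.0f} at 70% util)")
```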

Now enter the Samsung SSD 840 Pro. I’ve switched off my DL380s and installed 256GB SSD drives in each of my HP ML110 VMware ESX hosts. First benefit – lower power bills!!!

[Image: Samsung 840 Pro, not much bigger than a credit card]

Samsung make some really wild performance claims. But even if they’re half right, it’s light years ahead of my old 15K disks.

  • Capacity: 256GB
  • Random IOPS: Reads up to 100,000
  • Random IOPS: Writes up to 90,000

Now that’s Samsung’s spec, so next it’s testing time to see for myself just what “hyper drive” feels like. Seriously, I must admit first impressions are fantastic. Flash is certainly the way to go. It’s funny, what’s been done here in micro is what smart businesses are doing at scale. Spinning disk arrays are being replaced or supplemented with Flash disks. So instead of 150 spinning disks to give 15,000 IOPS, we can add a Flash tier to hybrid arrays such as EMC’s VNX. And we can connect all-flash arrays where extreme IOPS are required. Whatever way you look at it, we get more for less. Hyper drive, stellar performance in a small footprint.