Scott Drummonds on Virtualization

Server Flash + Array Flash


With the recent general availability of VFCache, EMC has been buzzing with ideas about what to do with server-based solid state storage.  Server-based solid state has been around for years; I remember when Fusion-io visited us at VMware in 2009.  I spent a lot of time then thinking about use cases, value, costs, and features. Now at EMC I am asking myself even bigger questions: how far can we go with this technology?  How much can we federate it, migrate data among nodes and within shared storage, protect it, and replicate it?  There are a lot of smart people at EMC who are way ahead of me on this.

But for the time being, the world is using server cache to speed up applications while living with mobility limitations.  Because of my performance background I am still a speed junkie.  My long time in that field also makes me a bit of a cynic.  When I saw a version of the following chart used in an internal EMC presentation I was skeptical.  Take a look at it and ask yourself if you believe it.

Read the rest of this entry »

Flash Or SSD? (or: Why Interfaces Matter)


In my three-part series on flash I used the terms “flash” and “SSD” interchangeably.  In a recent article on this subject at IBM’s Storage Community, Stephen Foskett convinced me that I should stop using these terms interchangeably.  He then suggested that flash would persevere while SSD would not.  I disagree.

Read the rest of this entry »

The Flash Storage Revolution: Part III


In this final installment of the series, I will provide some detail behind flash storage sizing.  My previous entry contained an analytical and theoretical approach to sizing flash in today’s storage.  When I first studied the ideas I introduced in that post, I thought the flash sizing exercise was hopeless.  After all, how are customers to measure data cooling?  How could a storage admin quantify skew?

As it turns out, familiarity with these abstract concepts is not needed to size flash in your environment.  The same principles that Intel and AMD apply in sizing microprocessor cache can be applied to storage.  There are generalizations that will suit the majority of deployments.
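As a rough illustration of the kind of generalization this involves, here is a minimal sketch (my own, not from the post) of how I/O skew translates into a flash hit rate. It assumes a simple two-bucket model in which a fraction of all I/Os (the skew) lands on a small hot fraction of the capacity; the function name and the 80/20 defaults are illustrative assumptions.

```python
# Hypothetical sketch: estimate the fraction of I/Os served from flash,
# assuming `skew` of all I/Os hit the hottest `hot` fraction of capacity,
# with accesses uniform within each bucket.

def hit_rate(flash_fraction, hot=0.2, skew=0.8):
    """Approximate fraction of I/Os served from a flash tier that holds
    the hottest `flash_fraction` of total capacity."""
    if flash_fraction >= hot:
        # Flash covers the whole hot set plus part of the cold tail.
        cold_covered = (flash_fraction - hot) / (1 - hot)
        return skew + (1 - skew) * cold_covered
    # Flash covers only part of the hot set.
    return skew * (flash_fraction / hot)

for pct in (0.05, 0.10, 0.20):
    print(f"{pct:.0%} of capacity in flash -> ~{hit_rate(pct):.0%} of I/Os")
```

Under these assumed numbers, a flash tier holding just 20% of capacity absorbs roughly 80% of the I/O, which is the intuition behind sizing flash like a microprocessor cache.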

Read the rest of this entry »

The Flash Storage Revolution: Part II


In the previous entry in this ongoing series covering the flash storage revolution, I concluded that flash is now an essential part of enterprise storage. But its value proposition hinges on high utilization, and high utilization cannot be sustained without efficient auto-tiering or accurate sizing of flash-based cache.

This article will describe the theory behind optimal cache sizing.  Practical guidance will follow in part three, the last entry in this series. I will again lean heavily on Denis Vilfort’s presentation that I offer for download on my blog.

Read the rest of this entry »

The Flash Storage Revolution: Part I


Six weeks ago I finally upgraded my MacBook to solid state storage.  The change in performance is dramatic, to say the least.  I have been selling flash storage to EMC’s customers for over a year now and they love it.  But I did not really get how valuable flash is until I saw it on my own laptop.

After this revelation, I want to dedicate a few blog entries to the issue of solid state storage in the enterprise.  First I want to frame the problem that flash both solves and causes.  In the second entry I will introduce some of the theory behind flash sizing.  My last article will give you some very simple, practical advice on how to use flash in your enterprise.

Read the rest of this entry »

MLC Flash Versus SLC Flash


EMC’s recent announcement at EMC World of Project Lightning documents a program to increase the use of flash devices in enterprise storage. The project includes increased use of flash storage in EMC arrays, all-flash storage configurations, and support for Multi-Level Cell (MLC) flash. This last subject–MLC flash and its difference from Single-Level Cell (SLC) flash–piqued my curiosity.

Many years ago I studied electrical engineering. I was awful at it. Analog was never my thing; I much prefer ones and zeroes. But I challenge myself to think about electronics once in a blue moon, so I decided to delve into SLC and MLC flash technologies to understand how they differ and why we should care.  The content below summarizes my online research and the little bit I remember from school. If you can add to, correct, or update this article I would be happy to have your comments.

Read the rest of this entry »

Justifying SSDs


Ever since I saw the results of VMware’s first performance work on EMC’s Enterprise Flash Drives (EFDs), I have known the storage world was about to change.  Even though I love the idea of SSDs, I still struggle to justify their purchase: I have had trouble quantifying the value of an EFD and fearlessly committing customers’ money to it.  In this article I want to offer a few thoughts on these devices as I formulate my own ideas about when SSDs are needed and how we can all enjoy their benefits.

Read the rest of this entry »

Databases, Storage, and Solid State Disks


A colleague of mine dropped by my desk on Friday to talk about storage best practices for virtualized databases (SQL Server in this case).  He observed a VMware deployment where the data and log files for a SQL Server virtual machine were consolidated on a single VMFS volume backed by a RAID 5 LUN.  “Is this a VMware best practice?” he asked.  “Should you not put the redo logs on a RAID 10 LUN?”  The answers are “no” and “yes”, respectively.  And with EMC’s solid state disk (SSD) auto-tiering (FAST), the second answer is an emphatic “YES!”
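The arithmetic behind that “yes” can be sketched with the classic small-write penalty: a small write on RAID 5 typically costs four backend I/Os (read data, read parity, write data, write parity) versus two on RAID 10 (write both mirror copies). This is a back-of-the-envelope sketch of my own; the workload numbers are illustrative, not from the post.

```python
# Hypothetical sketch: backend disk I/Os needed to absorb a front-end
# workload, using the textbook small-write penalties of 4 for RAID 5
# and 2 for RAID 10.

RAID_WRITE_PENALTY = {"raid5": 4, "raid10": 2}

def backend_iops(read_iops, write_iops, raid):
    """Backend I/Os per second: reads pass through, writes are amplified."""
    return read_iops + write_iops * RAID_WRITE_PENALTY[raid]

# A log volume is write-heavy, so the write penalty dominates.
log_reads, log_writes = 100, 900
for raid in ("raid5", "raid10"):
    print(raid, backend_iops(log_reads, log_writes, raid))
```

For this write-heavy example the RAID 5 LUN must sustain nearly twice the backend I/O of the RAID 10 LUN, which is why write-intensive redo logs belong on RAID 10.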

Read the rest of this entry »

Optimizing Memory Utilization


My recent series of blog articles has discussed ESX memory management and the performance specter of host swapping. My last article attempted to correct the misconception that VMware recommends against memory over-commitment.  In that article I suggested that memory over-commit is a requirement for optimizing memory utilization. Today I want to provide a specific example to show why this is true.  I have also included tips for identifying host swapping in your environments.

Read the rest of this entry »

Solid State Disks and Host Swapping


Recently I have been thinking, talking, and writing about ESX host memory swapping a lot.  ESX swaps memory under the same conditions that traditional operating systems do: the applications are using more memory than is available on the physical hardware.  Host swapping is an unavoidable consequence of this condition, whether virtualization is present or not.

Read the rest of this entry »