In the years since its launch, EMC’s active/active, fully available VPLEX has been incredibly popular. Customers love what it does, and technologists love talking about it. Similarly, Site Recovery Manager (SRM) has become one of VMware’s most popular products for its elegance, simplicity, testability, and auditability. Until recently it was not possible to deploy both in one environment.
At EMC World 2012 in Las Vegas, EMC announced GeoSynchrony 5.1, the new operating environment for VPLEX. With this release we can now offer RecoverPoint replication, which enables a complete disaster recovery (DR) and disaster avoidance (DA) solution. We can finally deliver SRM and VPLEX capabilities to the same applications in a single deployment.
The Singapore Solutions Center builds live demonstrations for our customers. We use these demonstrations for training, experimentation, and sometimes to create videos that show cool technologies in action. This week Osmund Amoroso of the SSC released a canned demo (a video) showing the team’s work on VFCache.
In this demo Osmund shows an Oracle database running on vSphere, driven by an OLTP workload, with results captured before and after enabling VFCache. Note the 43% increase in transaction rate, and remember that applications spending less time waiting for data will use more of the CPU, as the video shows.
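The link between lower storage wait and higher CPU use can be sketched with a toy closed-loop model. The millisecond figures below are hypothetical, chosen only to illustrate the principle; they are not taken from Osmund’s demo:

```python
# Toy closed-loop transaction model: each transaction burns some CPU time,
# then waits on storage. Caching shortens the wait, so the same CPU work
# completes more transactions per second -- and CPU utilization rises.

def closed_loop(cpu_ms, io_wait_ms):
    """Return (transactions/sec, CPU utilization) for one serial stream."""
    cycle_ms = cpu_ms + io_wait_ms
    tps = 1000.0 / cycle_ms
    cpu_util = cpu_ms / cycle_ms
    return tps, cpu_util

# Hypothetical numbers: 5 ms of CPU per transaction, 9 ms of storage wait.
before_tps, before_util = closed_loop(5.0, 9.0)
# Cut the storage wait roughly in half with a server-side cache.
after_tps, after_util = closed_loop(5.0, 4.8)

print(f"before: {before_tps:.1f} tx/s at {before_util:.0%} CPU")
print(f"after:  {after_tps:.1f} tx/s at {after_util:.0%} CPU")
```

In this sketch, trimming the wait raises both throughput and CPU utilization at once, which is why a busier CPU after enabling a cache is a good sign, not a bad one.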
The Storage Networking Industry Association (SNIA) asked me to present last week at their Implementing Information Infrastructure Symposium (IIIS). IIIS ran in Singapore and Kuala Lumpur. My talk was on the future of virtualization.
The format was ostensibly a debate, with an opposing position offered by HP in KL and by IBM in Singapore. But the abstract for the talks gave the presenters wide latitude: we could talk about storage virtualization, virtualization futures, and technology trends. I touched on all three, while the HP and IBM representatives focused more on storage virtualization technologies. As a result the debate lacked confrontation; what disagreement there was surfaced mainly in the Q&A.
In the past few weeks I had the pleasure of digging deep into a customer’s performance problem. Ultimately we identified some interesting issues in the environment that we traced back to an overloaded array. Like most performance problems, the complaints started at the application layer and then shifted to vSphere. As in many configurations, it was difficult to pinpoint why the storage was slow. But EMC account teams pride themselves on customer responsiveness, so we assembled a small team to help out. I was amazed and grateful that midtier specialists from Australia, Malaysia, and India all pitched in on the analysis!
If you are a VMware administrator you may choose to leave the nuts and bolts of storage management to your storage teams. While this article talks about those nuts and bolts, I ask you to read on. A little knowledge about how your array works will make you an awesome VMware administrator. It will help you work with your storage administrators to get the most out of your array. When your array is at its best, so are your virtual machines.
The analysis you see below is the product of tools EMC can run against your EMC storage in a very short time. The data collection took 24 hours in this case, but the figures I will show were auto-assembled in minutes. This is one of the many cool things an EMC technical consultant or one of our partners can do for you.
On April 12 EMC announced a new effort to deliver infrastructure proven solutions through our partners. The brand name for these solutions is VSPEX. The VSPEX team has already published all kinds of great material on EMC’s VSPEX community.
The EMC Channel team here in Singapore is bringing the VSPEX word to all of our partners throughout Asia Pacific and Japan. Our Cisco channel manager asked me to create a video she could use to tell Cisco more about the project. She told me to keep it brief (under 30 seconds) and to have some fun with it.
A couple of weeks ago I joined a discussion between engineers and customer-facing technologists within EMC and VMware. There was some confusion around a claim EMC made with respect to Transparent Page Sharing (TPS). An EMC paper exists that hints at disabling TPS. The astute Michael Webster thought this contradicted best practices I had provided when leading VMware’s performance technical marketing team. Michael was correct, so I decided to jump in and see what I could learn.
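For readers wondering what “disabling TPS” actually looks like, these are the two knobs commonly cited for it; this is a sketch of host and VM configuration, not a recommendation, and you should check current VMware documentation before changing either:

```shell
# Host-wide: stop TPS page scanning by setting the scan rate to zero
# (ESXi 5.x esxcli syntax; run on the ESXi host).
esxcli system settings advanced set -o /Mem/ShareScanGHz -i 0

# Per-VM alternative: opt a single VM out of page sharing by adding
# this line to its .vmx file while the VM is powered off:
#   sched.mem.pshare.enable = "FALSE"
```

The host-wide option affects every VM on the host, while the per-VM setting lets you exempt only the workloads a vendor paper singles out.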
A week ago I attended a customer briefing here at the Singapore Executive Briefing Center (EBC). One of my colleagues gave an overview of business continuity and disaster recovery (BCDR). His presentation included a number that I posted on Twitter. David Manconi asked for supporting evidence of my claim, so I thought I would post it here.
With the recent general availability of VFCache, EMC has buzzed with ideas about what to do with server-based solid state storage. Server-based solid state has been around for years; I remember when Fusion-io visited us at VMware in 2009. I spent a lot of time thinking about use cases, value, costs, and features. Now at EMC I am asking myself even bigger questions: how far can we go with this technology? How much can we federate it, migrate data among nodes and within shared storage, protect it, and replicate it? There are a lot of smart people at EMC who are way ahead of me on this.
But for the time being, the world is using server cache to speed up applications while living with mobility limitations. Because of my performance background I am still a speed junkie. My long time in that field makes me a bit of a cynic, too. When I saw a version of the following chart used in an internal EMC presentation, I was skeptical. Take a look at it and ask yourself if you believe it.
Customers have asked me to recommend a protocol for their vSphere environments more times than I can remember. The best answer to this question is “stick with what you know”. Staying with your existing infrastructure is by far the best solution: it leverages your existing skills, minimizes risk, and keeps costs down. And no protocol can, on its own, claim to be the undisputed best choice.
But choosing between protocols does imply some design differences, limitations, and benefits. In this article I want to collect some of these items for your consideration. As I asked friends and colleagues about the subject, I realized that no one person could completely enumerate the implications of protocol choice. So add your comments at the bottom, and we will continue to update this article as a living document.