Where does ONTAP Cloud from NetApp fit into your plans?
The other day, I had an interesting conversation about migrating applications to the cloud. We started discussing the merits of ONTAP Cloud from NetApp and whether it was more of a marketing opportunity than a real problem solver. The point being made was that the product doesn't seem to have a clear direction. That got me thinking, and I'd like to share some of my thoughts on this interesting subject. Where does a storage OS fit into a cloud model?
Warning: Much of this article will be controversial; it is intended to make you think about your own cloud strategy.
As we think about the AWS consumption model, the first big mindset shift when going to the cloud comes when we remember that servers are cattle, not pets. This distinction is very important and is often misunderstood or forgotten when designing a robust cloud solution. Take the ever-popular Lambda or Elastic Beanstalk as examples. The two services are designed for very different purposes, but they enable the same outcome: rapid deployment or re-deployment of applications and data. How many times have you heard that in AWS you can scale up or down and it just works? Well, it does, but only if your application can sustain that model. The design of your application and its data dictates how successful the application can be in this auto-scale world.

If we consider the traditional 3-tier web application, moving to the cloud makes a ton of sense. I can leverage RDS for my database, Elastic Beanstalk with Auto Scaling for my application tier, and round it all out with another Beanstalk environment for the front end (or I could live right on the edge and host my static web assets directly from an S3 bucket). I see this type of application all the time in AWS environments. In this scenario, application data is stored in the database or in some external repository such as an S3 bucket. The model focuses on the ability to horizontally scale out applications because most of the data resides outside of the hosts.
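As a minimal sketch of that "edge" option, here is roughly what hosting the static front end straight from S3 looks like with the AWS CLI. The bucket name and local directory are illustrative, and you need a configured AWS account for any of this to run:

```shell
# Create a bucket and turn on static website hosting (bucket name is hypothetical)
aws s3 mb s3://my-static-frontend
aws s3 website s3://my-static-frontend \
  --index-document index.html --error-document error.html

# Push the static assets; the application and data tiers live in
# Elastic Beanstalk and RDS respectively, as described above
aws s3 sync ./dist s3://my-static-frontend
```

The point of the pattern is that nothing stateful lives on the web hosts themselves, which is exactly what lets Auto Scaling add and remove instances freely.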
So, what happens when the data must live on a local machine? This is where the world gets a bit trickier, and the answer really depends on the nature of the application. Most tutorials or guides explain that you must store your source code in a repository (Git, CodeCommit, an S3 bucket, etc.) or create custom AMIs. That model works great when the application data is fairly static and doesn't need much in the way of updates. For an application that is constantly evolving, you run into some interesting challenges around propagating data, even between running systems. Consider a traditional application that requires its data on local storage and doesn't benefit from object storage. These systems become extremely difficult to run in the cloud and require that you either change their design or simply never migrate them. These types of applications could benefit from some kind of shared storage solution or an elaborate replication strategy. Remember, the current model in AWS is shared-nothing as far as Elastic Block Store (EBS), a.k.a. local storage, is concerned.
Why use ONTAP Cloud? These legacy applications are the first and most obvious use case. You probably wouldn't put a highly available Microsoft SQL Server system on it; you would more likely use native SQL Server HA with replication between Availability Zones, or simply use the AWS Relational Database Service (RDS). Where ONTAP Cloud really makes sense is in Test/Dev, where data and copies of data really matter. Here, NetApp FlexClone blows away the competition and instantly solves our shared-data problem. In a matter of seconds, my application data is up and running for my scaled-out solutions. FlexClone technology can generate new copies of the data instantly and without increasing my bill. Consider the impact on horizontal scaling: the solution gives me full copies of my data from exactly that moment in time, with zero warm-up time. In an auto-scale world, that is huge when you are bringing up multiple systems to fulfill an immediate need. What would happen if you needed to copy 200 GB of data to a new server… or to 10 of them?
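To make the FlexClone idea concrete, here is a hedged sketch of what cloning a volume looks like from the ONTAP CLI on the ONTAP Cloud system. The SVM, volume, and snapshot names are all illustrative, and the commands assume you are logged in to the cluster shell:

```shell
# Take a point-in-time snapshot of the parent volume
# (svm1, app_data, and pre_test are hypothetical names)
volume snapshot create -vserver svm1 -volume app_data -snapshot pre_test

# Create a writable FlexClone from that snapshot; this completes in
# seconds and consumes no extra capacity until blocks diverge
volume clone create -vserver svm1 -flexclone app_data_test \
  -parent-volume app_data -parent-snapshot pre_test
```

Each scaled-out instance can mount its own clone, so every new server sees a full copy of the data from that moment in time without a bulk copy ever happening.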
NetApp has another interesting component that brings a ton of value to this new scale-out world. The current trend of retrofitting applications for use with Docker has brought its own challenges and successes. You can now deploy new applications in mere seconds for almost instantaneous compute power. That is a huge game changer. The challenge, as always, revolves around that pesky data-copy process. NetApp introduced a very interesting answer to this problem with the NetApp Docker Volume Plug-in (nDVP). This thing is just cool. Using the nDVP, you can provision new Docker volumes for your containers in seconds and have them live on your NetApp ONTAP Cloud system. The best part comes when you spin up new containers backed by FlexClones of existing data. Now scaling your application is as simple as adding another container to the solution, and all of your data is there… even if the application holds 5 TB of data. The clone happens in seconds. Welcome to the new Blue-Green deployment model.