In a recent podcast, Clayton Weise, Director of Cloud Services, sat down with Principal Analyst Dana Gardner to discuss how KeyInfo crafted a storage capability that has broad extensibility into hybrid cloud and multi-cloud support. Below is an excerpt from that discussion. To download the full podcast, CLICK HERE.
Gardner: What prompted you to improve on the way that object storage is being offered as a service? How might this become a new business opportunity for you?
Weise: About this time last year, at Hewlett Packard Enterprise (HPE) Discover, I was wandering the event floor. We had just gotten out of a meeting with SwitchNAP, a major data center operator in Las Vegas, where we had been talking about preferred storage concepts and deployment models for their clients.
That discussion evolved into realizing that a number of clients inside Switch's ecosystem could make use of storage that was more locally based, closer at hand. There were cost savings to be gained from a connection within the same data center, or within the same fiber network.
Under this model, pulling data in and out of a cloud would be significantly less expensive, because you wouldn't pay the transfer fees you normally would. There would also be advantages in privacy and in cutting latency, because everything runs over a private network operated by Switch across its own fiber. So we looked at this and thought it might be interesting.
In discussions with a number of groups within HPE while wandering the floor at Discover, we found some pretty interesting ways we could play games with the network so that clients would not have to uproot the way they do things, or be forced to do things, for lack of a better term, "our way."
If you go to Amazon Web Services or Microsoft Azure, you do it the Amazon way or the Microsoft way. You don't really have a choice; you have to follow their guidelines.
Where we saw value is in the midmarket space, with clients ranging from a couple of hundred million dollars up to maybe a couple of billion dollars in annual revenue. They generally use object storage as an inexpensive way to store archival or less frequently accessed data. So [the cloud storage] became an alternative to tape for long-term storage.
We’ve had this massive explosion of unstructured data, files, and all sorts of things. We have a number of clients in medical and finance, and they have just seen this huge spike in data.
The challenge is that deploying your own object storage is a fairly complex operation, and it requires petabytes of capacity just to get started. Midmarket companies are not typically measuring their storage at the petabyte level.
These customers are more typically in the tens to hundreds of terabytes range, and so they need an inexpensive way to offload that data and put it somewhere where it makes sense. In the medical industry particularly, there’s a lot of concern about putting any kind of patient data up in a public cloud environment — even with encryption.
We thought that if we are in the same data center, and it is a completely private operation that exists within these facilities, that would fulfill the need entirely, and we can encrypt the data as well.
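To make the encryption point concrete, here is a minimal sketch in Python of encrypting data on the client side before it ever reaches the object store. The file names and key handling are hypothetical; this is an illustration, not KeyInfo's actual tooling.

```python
# Minimal sketch: client-side encryption before upload.
# Assumes the 'cryptography' package; names and key handling are illustrative.
from cryptography.fernet import Fernet

# In production the key would come from a key-management system,
# not be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("patient-scan-0001.dcm", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("patient-scan-0001.dcm.enc", "wb") as f:
    f.write(ciphertext)

# Only the encrypted blob is uploaded, so the storage provider never
# holds readable patient data.
```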
But we needed a way to support such private-cloud object storage that would be multitenant. Also, we have simply had better luck working with open standards. The challenge with proprietary systems is that you end up locked into a standard, and if you pick wrong, you find yourself having to reinvent everything later on.
I come from a networking background; I was an Internet plumber for many years. We saw this transition on our side when routing protocols were first introduced: there were proprietary routing protocols and there were open standards, and the open standards are what we still use today.
So we took a similar approach with object storage as a private-cloud service. We went down the open-source path for provisioning, and we needed a system that integrated well with that and understood multitenancy; OpenStack provides that. We also found a solution from HPE called Distributed Cloud Networking (DCN) that allows us to carve up the network in all sorts of interesting ways, so we don't have to dictate to the client how to run it.
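As a hedged illustration of what that open-standards approach can look like from the client side, the sketch below uses the OpenStack Swift Python client to write into a per-tenant project. The auth endpoint, project, and credentials are placeholders, not KeyInfo's actual provisioning flow.

```python
# Sketch: one tenant writing to an OpenStack Swift object store.
# The endpoint and credentials below are hypothetical placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl="https://keystone.example.com:5000/v3",
    user="storage-admin",
    key="s3cr3t",
    auth_version="3",
    os_options={
        "project_name": "tenant-a",   # multitenancy: each client maps to a project
        "project_domain_name": "Default",
        "user_domain_name": "Default",
    },
)

conn.put_container("archives")        # create a bucket-like container
with open("backup-2017-06.tar.gz", "rb") as f:
    conn.put_object("archives", "backups/2017-06.tar.gz", contents=f)
```

Because the tenancy lives in the platform, two clients can each have an "archives" container without ever seeing one another's data.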
Many clients are still running traditional networks. Adoption of Virtual Extensible LAN (VXLAN) and other software-defined data center (SDDC) technologies within the network is still pretty low, especially in the midmarket space. So going to a client and dictating that they change how they run their network is not going to work.
And we wanted it to be as simple as possible; we wanted to treat this, as much as we could, as a flat network. By using a combination of DCN, Altoline switches from HPE, and some other software, we were able to give clients a complete network carrying regular Virtual Local Area Networks (VLANs) across it. We could then tie this together in a hybrid fashion, whereby customers can treat our cloud environment as a natural extension of their existing networks and data centers.
Gardner: You are calling this hybrid storage as a service. It’s focused on object storage at this point, and you can take this into different data center environments. What are some of the sweet spots in the market?
Weise: The areas where we are seeing the most interest have been backup and archive. It’s an alternative to tape. The object service becomes a very inexpensive way to store large amounts of data, and unlike tape — where it’s inconvenient to access the data — with object as a service everything is accessible very, very easily.
For customers whose backup software cannot integrate directly with that object service, we can use object gateways to provide a more traditional access method. The gateway looks like a file share, and anything written to that share is written through to the object storage behind it, so it acts as a go-between. For backup and archive, it makes a really great solution.
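For backup software that can speak an object API directly, the write path can be as simple as the sketch below. It assumes the service exposes an S3-compatible endpoint, which is a common open interface for object stores; the endpoint, bucket, and credentials are placeholders.

```python
# Sketch: shipping a nightly backup archive to an S3-compatible object endpoint.
# Endpoint URL, bucket name, and keys are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-dc.net",  # private, in-facility endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.upload_file(
    "/backups/nightly-2017-06-01.tar.gz",  # archive produced by the backup job
    "backups",                             # target bucket
    "nightly/2017-06-01.tar.gz",           # object key
)
```

When the backup software cannot talk to an object API at all, the gateway presents a file share and performs the equivalent of this upload behind the scenes.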
The other two areas where we've seen the most interest have been, first, the medical space, specifically large medical image files and archival. We are now building that type of solution with HIPAA compliance; we have gone through the audits and compliance verification.
The second has been the media and entertainment industry. In fact, an entertainment industry client in Burbank, California, was the very first to consume this new system, putting in hundreds of terabytes' worth of storage. A lot of these companies are just shuffling data around on external drives.
For them it's often external arrays, and there are a lot more Mac OS users. They needed something better, and hybrid object storage as a service has created a great opportunity for them and allows them to collaborate.
They have a location in Burbank, they brought up another office in the UK, and yet another office is coming up in Europe. The object storage approach gives them a kind of central repository, an inexpensive place to put the data, but it also allows them to be more collaborative.
Carving up the network
Gardner: We have had a weak link in cloud computing storage, which has been the network, and you solved some of those issues. You found a prime use case with backup and archival, but it seems to me that, given the storage capabilities we have seen, this has extensibility. So where might it go next in terms of a storage-as-a-service (SaaS) offering that hybrid cloud providers would use? Where can this go?
Weise: It's an interesting question, because one of the challenges we have all faced in the world of cloud is that we have virtualized servers and virtualized storage, meaning there is disaggregation: a separation between the workload that's running and the actual hardware it's running on.
In many cases, and for almost all clients in the midmarket, that level of virtualization has not occurred at the network level. We are still nailed down to physical things: tied to the cable, to the switch port, and to the human who can figure those things out. The network is not as flexible or as extensible as some of the other solutions out there.
In our case, when we built this out, the real magic was in the network. The improved connection can be a cost savings for a client, especially from a bandwidth standpoint. But once a client has a private cross-connect into that environment to make use of, in this case, storage as a service, we can carve that connection up in a number of different ways and allow the client to use it for other things.
To download the full podcast, CLICK HERE.