What Is Software Defined Storage?

One of the major trends in IT storage today is the accelerating growth of software defined storage (SDS). According to market research firm MarketsandMarkets, the market for SDS products will grow from $4.72 billion in 2016 to $22.56 billion by 2021. That’s an outstanding compound annual growth rate (CAGR) of 36.7%.

But many IT leaders, even storage professionals, remain unsure of what the term SDS really signifies.

Actually, that confusion is not surprising. As with many new technologies that begin to expand their market share, some storage vendors have taken the opportunity to break out the software component of existing products and call it SDS. And, of course, their definition of SDS just happens to precisely match the feature set of the product they are trying to sell.

Contrary to what some skeptics claim, however, SDS is much more than just the latest marketing buzzword. In fact, many proponents see it as the vanguard of a revolutionary advance in how enterprise storage is managed and delivered.

SDS Defined

The Storage Networking Industry Association (SNIA) defines SDS as “virtualized storage with a service management interface.”

Although storage virtualization has been in use for some time, SDS takes it to a new level. The distinctive feature of SDS is the decoupling of the intelligence of the storage system from the underlying hardware. This means SDS is storage-agnostic – it isn't tied to any particular type of hardware or media. Instead, it treats all the devices it controls, whether spinning disks, flash memory arrays, or even entire SAN or NAS subsystems, as part of a single storage pool. Users and applications (via standard APIs) can access storage through a consistent software interface without needing to know what hardware is actually storing the data.

One of the major benefits of the storage heterogeneity SDS allows is that costly purpose-built storage appliances are not required (though, of course, they can be used if desired). Instead, inexpensive commodity hard drives attached to x86 hosts can be used, mixed with higher-performance technologies such as flash memory arrays as necessary. The SDS software has the intelligence to use tiering and caching functions to dynamically assign particular sets of data to the appropriate storage devices based on the performance demands of the workload being run.
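To make the tiering idea concrete, here is a minimal sketch in Python of how an SDS layer might pool heterogeneous devices and place data by performance demand. All class names, the tier numbering, and the placement policy are illustrative assumptions, not the design of any particular SDS product:

```python
# Hypothetical sketch of SDS-style pooling and tiering. The policy here
# (hot data to the fastest tier, cold data to the slowest) is deliberately
# simplified; real SDS products also migrate data dynamically.

class StorageDevice:
    def __init__(self, name, tier, capacity_gb):
        self.name = name
        self.tier = tier          # lower number = faster (0 = flash, 1 = disk)
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def has_room(self, size_gb):
        return self.used_gb + size_gb <= self.capacity_gb


class StoragePool:
    """Presents heterogeneous devices as one pool; callers never name hardware."""

    def __init__(self, devices):
        self.devices = sorted(devices, key=lambda d: d.tier)
        self.placement = {}       # volume name -> device

    def provision(self, volume, size_gb, hot=False):
        # Hot (performance-sensitive) data searches from the fastest tier;
        # cold data searches from the slowest, keeping flash free for hot work.
        candidates = self.devices if hot else list(reversed(self.devices))
        for dev in candidates:
            if dev.has_room(size_gb):
                dev.used_gb += size_gb
                self.placement[volume] = dev
                return dev.tier
        raise RuntimeError("pool exhausted")


pool = StoragePool([
    StorageDevice("flash-array-1", tier=0, capacity_gb=500),
    StorageDevice("sata-jbod-1", tier=1, capacity_gb=4000),
])
print(pool.provision("oltp-db", 200, hot=True))   # lands on flash: tier 0
print(pool.provision("backups", 1000))            # lands on disk: tier 1
```

The point of the abstraction is the last two lines: the caller asks for capacity and states a performance need, and the pool decides which hardware actually holds the data.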

The result of hiding all the storage hardware behind the SDS software interface is that flexibility, scalability, and control are maximized, while costs for hardware, maintenance, and storage management are minimized.

How IBM Power Systems Are Challenging x86 Servers in the Corporate Data Center

Intel’s x86 architecture has long been dominant in the corporate server marketplace for good reason. Chips based on the x86 framework have been at the heart of personal computers and other devices for more than three decades, and a standardized, widely adopted infrastructure for development and support of x86-based products is in place. The head start x86 enjoys over any potential challengers is immense.

But that hasn’t stopped IBM from joining the fray.

In 2014, IBM sold its x86 server business to Lenovo and pinned its hopes for increasing its penetration of the enterprise and cloud server markets on its upgraded Power Systems line. At that time the consensus in the media and among competitors was that IBM’s efforts would be too little, too late. Intel’s x86 standard was simply too well entrenched to be displaced.

But that assessment is beginning to change. Servers based on the company’s Power8 RISC processor seem to be gathering momentum in the marketplace. In 2015, IBM’s financial results revealed that it had enjoyed revenue growth in its Power Systems line for the first time in four years.

Key to that growth, say analysts, was IBM’s decision to add Linux as an alternative to its proprietary AIX operating system. There are now thousands of ISVs (independent software vendors) developing new Power8 Linux applications or working to port existing x86-based Linux applications to the Power environment. And, IBM claims, it has demonstrated some very good reasons for its customers to do exactly that.

In a June 2015 conference presentation, the company revealed certified benchmark test results showing that Power8 servers significantly outperformed x86 servers in running financial workloads. In fact, an IBM Power System 824 server more than doubled the performance of a best-in-class x86 machine.

Says Terry Keene, CEO of Integration Systems, LLC:

“Comparing the x86 and Power processors on a micro-benchmark level will show little raw performance advantages for either. Comparing the two using enterprise workloads will demonstrate a significant advantage for Power in data workloads such as databases, data warehouses, data transaction processing, data encryption/compression, and certainly in high-performance computing.”

IBM is aggressively pursuing its objective of gaining a double-digit share of the server market by 2020. And with its even more powerful Power9 chips due out in 2017, the company seems well positioned to reach that goal.

Why x86 Servers Continue to Dominate the Data Center

Not long ago, there was a widespread expectation that x86-based servers would soon be displaced in corporate data centers, and in the cloud, by servers that use ARM processors. But so far, things haven’t turned out that way. Servers using x86 chips still maintain a more than 90 percent market share. As Intel spokesman William Moss notes, “There has been a lot of hype about ARM in the datacenter, but very few deployments.”

ARM chips, which are RISC (Reduced Instruction Set Computer) processors, already dominate the mobile device market. They are widely used in such products as smartphones, laptops, and tablet computers. But their penetration of the server market has so far been minimal. And Linus Torvalds, the creator of the Linux kernel, thinks he knows why.

“What matters is all the infrastructure around the instruction set, and x86 has all that infrastructure,” Torvalds says. “Being compatible just wasn’t as big of a deal for the ARM ecosystem as it has been traditionally for the x86 ecosystem.”

In the world of Android-based mobile devices, the environment in which ARM chips have flourished, there is little standardization among manufacturers. Because the chipsets and hardware configurations of the various smartphones and tablets are unique to single products or product lines, the support infrastructure for ARM implementations is very fragmented. For example, it’s not possible to create a single Android update build that can be deployed across the devices of multiple manufacturers.

On the other hand, the x86 ecosystem has more than 30 years of development behind it, and standards are well understood and widely adhered to. That means it simply takes a lot less time and expense to develop and support x86 server environments than if ARM processors were used.

That’s not to say that the dominance of x86 in the data center is unassailable. IBM, for example, is making a determined effort to grab an increasing share of the server market with its own line of RISC processors, the Power8 and upcoming Power9 products.

But for the moment, x86 remains king of the data center realm.
