Study Shows IBM i Has Big Cost Advantage Over Alternatives

According to an August 2017 study conducted by Quark + Lepton, an independent research and management consulting firm, IBM i on Power Systems servers provides a substantial TCO (total cost of ownership) advantage over equivalent Windows or Linux platforms.

For the study, which was funded by IBM, Quark + Lepton compared three different server/database configurations: an IBM Power Systems server running IBM i 7.3 with DB2, an x86 server running Windows Server 2016 with SQL Server 2016, and an x86 server running Linux with Oracle Database 12c. TCO estimates were based on the costs of hardware acquisition and maintenance, OS and database licenses and support, system and database administrator salaries, and facilities expenses. Several different use cases were analyzed.

A Big TCO Advantage

The results of the study showed the projected three-year TCO for the three setups to be as follows:

  • Power Systems/IBM i/DB2 – $430,815
  • x86/Windows/SQL Server – $1.18 million
  • x86/Linux/Oracle – $1.27 million

The study concludes that “costs for use of IBM i on Power Systems are lower across the board”. For example, initial hardware and software acquisition costs for the IBM i systems averaged 8% less than for the Windows systems, and fully 24% less than for the Linux systems.
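To see how large the overall gap is in relative terms, the three-year totals listed above can be compared directly. The short Python sketch below is purely a back-of-the-envelope check using only the figures reported in the study summary (note that the two x86 totals are rounded in the article), not anything taken from the study's own methodology.

```python
# Three-year TCO figures as listed above (x86 totals are rounded to the nearest $10K).
tco = {
    "Power Systems / IBM i / DB2": 430_815,
    "x86 / Windows / SQL Server": 1_180_000,
    "x86 / Linux / Oracle": 1_270_000,
}

baseline = tco["Power Systems / IBM i / DB2"]
for platform, cost in tco.items():
    if platform.startswith("Power"):
        continue
    savings = cost - baseline
    # Express the IBM i advantage both in dollars and as a share of the competitor's TCO.
    print(f"IBM i vs {platform}: saves ${savings:,.0f} "
          f"({savings / cost:.0%} lower three-year TCO)")
```

Run as-is, this works out to roughly 63% lower TCO than the Windows configuration and about 66% lower than the Linux/Oracle configuration.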

Perhaps the most surprising contributor to the stark differential between the IBM i solution and the others was the cost of required support staff. Based on a 300-user scenario, IBM i required 0.3 FTE (full-time equivalent) support personnel, compared to 0.5 FTE for the Windows setup and 0.55 FTE for Linux.

But the biggest differential in staff costs arose from the fact that IBM i admins could handle both the OS and the database. Those double-duty IBM i personnel commanded salaries of about $86,000, while Windows and Linux sysadmins were paid $71,564 and $86,843 respectively. However, the Windows and Linux setups also required the support of separate database admins, adding $100,699 (SQL Server) and $103,283 (Oracle) to the personnel costs for those solutions.

Simplicity

In its conclusion the report notes that while the industry is trending toward ever-greater complexity, the simplicity of IBM i makes it by far the most cost-effective platform on which to base an organization’s IT infrastructure.

Demand for Cloud Technologies

Those who have skills and experience with cloud technologies are going to be much in demand in the next few years. According to Tech.Co, the use of cloud computing technologies is expected to quadruple in the near future. Estimates are that cloud data centers will manage a whopping 92 percent of all workloads.

So who are the biggest contributors to this massive migration to the cloud? The biggest drivers are the IoT (internet of things) and big data. Most of the growth will occur in public cloud data centers, while use of private clouds begins to decline. Interestingly, infrastructure as a service (IaaS) is predicted to decline somewhat, as many organizations focus on improving their own corporate infrastructures, including both data storage for sensitive information and acquisition of their own high-speed connections.

In addition, a recent study, the “2017 Cloud Computing and Business Intelligence Market Study” conducted by Dresner Advisory Services, notes that as organizations turn to public clouds, they are also looking for cloud-based business intelligence (BI) tools such as dashboards, advanced visualization, ad hoc queries, data integration and data quality, end-user self-service, and reporting features. The study goes on to note that the growing demand for cloud-based BI services is largely driven by smaller organizations. Social media and streaming analytics, however, did not make the across-the-board “must have” list for BI services, although they remain important in certain industries.

Trust in the cloud is not just increasing for businesses. Consumers are also expected to demand more from the cloud. Estimates are that personal cloud storage will increase from 47 to 59 percent. That may not sound like a huge percentage increase, but globally the increase represents about a billion more users.

The future looks bright in the cloud, supported by both business and consumer demand. Anyone interested in applying their technology skills to this trend will most likely have a bright future as well.

Hyperconverged Infrastructure (HCI) Now Runs on IBM Power Systems

Today’s corporate data centers are using more and more compute and storage resources to meet rapidly increasing operational requirements. Because traditional data center architectures are struggling to meet these new demands, an alternative technology is swiftly gaining acceptance. Hyperconverged Infrastructure, or HCI, is on what Stefanie Chiras, IBM’s VP of Power Systems, calls “a rapid growth trajectory.” And now, for the first time, this new technology that is so swiftly penetrating enterprise data centers is available to run on IBM’s Power Systems platforms.

But what, exactly, is hyperconverged infrastructure?

HCI takes the fundamental elements of the data center (servers, data storage, and networking) and packages them together in a single unified appliance. The entire unit, as well as its component parts, is controlled entirely by sophisticated software under the direction of detailed policies established by IT administrators. Both the compute engine and the storage controller run on the same server platform, and each appliance functions as a node in a cluster.

The constituent parts of the HCI appliance are hidden behind a unified “single pane of glass” software interface. So, there is no need for users or applications to deal directly with the hardware or its particular characteristics. The software can automatically and transparently carry out tasks such as performing data backups, scaling out (simply by adding nodes) to provision additional storage as needed, or swapping out nodes that fail. This approach greatly simplifies the IT management task.
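As a purely illustrative sketch of the node-based model described above (a toy abstraction, not any vendor’s actual management API), an HCI cluster can be pictured as a pool in which capacity grows simply by adding appliances and failed nodes are swapped out by policy:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One hyperconverged appliance: compute and storage packaged in a single unit."""
    name: str
    cpu_cores: int
    storage_tb: int
    healthy: bool = True

class HciCluster:
    """Toy model of an HCI cluster managed through one 'single pane of glass'."""

    def __init__(self):
        self.nodes: list[Node] = []

    def scale_out(self, node: Node) -> None:
        # Adding a node transparently adds both compute and storage capacity.
        self.nodes.append(node)

    def retire_failed(self) -> None:
        # Policy-driven management: failed nodes are simply removed from service.
        self.nodes = [n for n in self.nodes if n.healthy]

    @property
    def total_storage_tb(self) -> int:
        return sum(n.storage_tb for n in self.nodes if n.healthy)

cluster = HciCluster()
cluster.scale_out(Node("node-1", cpu_cores=20, storage_tb=96))
cluster.scale_out(Node("node-2", cpu_cores=20, storage_tb=96))
print(cluster.total_storage_tb)  # 192 TB available behind the unified interface
```

The point of the sketch is simply that users and applications interact with the cluster object, never with the individual hardware underneath it.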

Part of the appeal of HCI is that it was designed to run on inexpensive industry-standard x86-compatible servers and storage devices. But that meant IBM’s RISC-based Power Systems line was shut out of this fast-growing market.

HCI and Power Systems

Now, however, IBM has announced that it is partnering with Nutanix, which 451 Research has named the leading HCI provider, to market appliances based on the Power Systems line rather than x86 servers. Because of the superior compute and data-handling capabilities of the Power architecture, IBM believes this new platform will allow enterprise customers to “run any mission critical workload, at any scale, with world-class virtualization and automation capabilities.” The platform is particularly suited to running high-performance database, analytics, machine learning, and artificial intelligence applications.

For IBM, HCI is “a fundamentally different approach to enterprise application needs.” It also represents an important emerging market that IBM didn’t want to be left out of.


Looking for Power Systems education? Take a look at COMMON’s online offerings.

Layers of a Scalable Cloud Architecture

The cloud computing ecosystem is huge and consists of several technologies. Many companies rely on these varied cloud infrastructures to deliver their products and services efficiently. This raises the question: how scalable is your cloud architecture?

Using the right architecture is crucial for your entire cloud operation. Organizations need to understand the specific requirements of their servers and, if they are already using a cloud platform, decide on the type of cloud architecture that best fits their business logic.

Before choosing a cloud computing architecture, the first requirement is a scalable structure. A cloud computing system is scalable when all of its components are independent of each other. This independence, which is usually established at the design stage, is what allows the system to scale to exceptional levels.

Features of a Scalable Cloud Architecture

Typically, cloud computing systems involve different cloud components communicating with each other over something that functions like a messaging queue. How these components interact is what determines how scalable your infrastructure is. Two layers make up a scalable cloud architecture:

1. The Client / Front-end

The client structure is where all users interface with the target platform. This is usually the mobile or web application that manages users, sessions and pages. The client usually makes API calls to the server.

The front-end comprises a single user or a network of users. Note that some front-ends will not look like the regular applications we see every day. The main thing to remember during the design stage is that this is the layer that communicates with the back-end: whatever visual design you build into your cloud’s front-end, making RESTful API calls to the server is its main focus.
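As a minimal illustration of that front-end role, the sketch below shows a client layer that does nothing except manage a user session and forward RESTful calls to the back-end. The endpoint https://api.example.com and the resource paths are hypothetical placeholders, and the third-party requests library is used only for convenience; none of this is prescribed by the architecture described above.

```python
import requests  # third-party HTTP client: `pip install requests`

API_BASE = "https://api.example.com"  # hypothetical back-end URL, for illustration only

def create_session(username: str, password: str) -> str:
    """Front-end responsibility: manage user sessions through the back-end API."""
    resp = requests.post(f"{API_BASE}/sessions",
                         json={"username": username, "password": password},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]

def fetch_page(token: str, page_id: str) -> dict:
    """Every piece of data the UI renders arrives through a REST call like this one."""
    resp = requests.get(f"{API_BASE}/pages/{page_id}",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```

Keeping the client this thin is what lets the two layers scale independently: the front-end never touches data directly, it only calls the API.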

2. Server / Back-end

Your server layer comprises data, caching services, and all services that interact directly with your server applications. This interaction is necessary for data delivery.

Your server applications drive your business functions and can include apps such as CRM, inventory, accounting, reservation systems, and much more. Adding new applications is part of scaling, so as you add them you must anticipate the higher traffic and computing loads they bring. Your front-end will not automatically scale unless you ensure that your back-end accommodates the new load and traffic.

As a best practice for maintaining and protecting client data, a cloud computing structure requires a higher level of redundancy than is necessary for a locally hosted system. The backup copies created by this redundancy mean that the back-end server can step in and access backup images for quick restoration of data.

In a highly scalable cloud computing architecture, applications are managed, controlled, and served by the back-end. The strength of the back-end lies in how it manages security protocols, traffic, and system files. If the applications on your server are broken down into sub-components of the main server, your cloud infrastructure becomes far more flexible and efficient, making scalability much easier.
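A rough sketch of that decomposition idea follows. The service names (CRM, inventory, accounting) echo the examples above, but the routing table and function names are invented purely for illustration; the point is that each business function sits behind its own small component that can be scaled, secured, and backed up independently.

```python
from typing import Callable, Dict

# Hypothetical sub-components of the main server; each can scale on its own.
def crm_service(request: dict) -> dict:
    return {"component": "crm", "handled": request}

def inventory_service(request: dict) -> dict:
    return {"component": "inventory", "handled": request}

def accounting_service(request: dict) -> dict:
    return {"component": "accounting", "handled": request}

# The back-end acts as the single point of control, dispatching traffic
# to sub-components instead of bundling everything into one application.
ROUTES: Dict[str, Callable[[dict], dict]] = {
    "/crm": crm_service,
    "/inventory": inventory_service,
    "/accounting": accounting_service,
}

def handle(path: str, request: dict) -> dict:
    try:
        return ROUTES[path](request)
    except KeyError:
        return {"error": f"no component registered for {path}"}

print(handle("/inventory", {"sku": "A-100", "qty": 3}))
```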

IBM Power Systems Benefits

IBM Power Systems provides one of the leading IT management systems on the market. In the past, the platform focused primarily on running a smooth operating system and solving key problems, but today it has expanded to new applications. In particular, IBM’s Linux-based Power Systems servers, the OpenPOWER LC line, are a hardware solution that many managers will find intriguing.

Speed and Storage

OpenPOWER LC servers have two key benefits. The first is simply the speed and storage capabilities. The key specs are:

  • Up to 20 cores (2.9–3.3 GHz)
  • 2 sockets
  • 512 GB memory (16 DIMMs)
  • 115 GB/sec maximum sustained memory bandwidth
  • 12 3.5” SATA drives, up to 96 TB of storage
  • 5 PCIe slots, 2 CAPI-enabled
  • Support for 2 NVIDIA K80 GPUs

That means its analytical and big data capabilities are off the charts. In fact, MongoDB runs twice as fast and EDB Postgres runs 1.8 times as fast on the system.

Companies are dealing with more complex problems that require more power than ever before. Firms face analytics challenges such as supply chain optimization, agile asset management, fraud prevention, and enterprise data management that can only be handled by a powerful big data server like the OpenPOWER LC.

Integration

The second benefit is that the server integrates nicely with existing data systems and other servers. The product can be plugged right into a server farm with no need for extensive customization or back-end fixes. Instead, IT managers can add it to their existing stock as a powerful new tool.

COMMON is a leading organization helping Power Systems professionals through educational events, certification, and ongoing training. For more information, please visit our website.

How IBM Power Systems Are Challenging x86 Servers in the Corporate Data Center

Intel’s x86 architecture has long been dominant in the corporate server marketplace for good reason. Chips based on the x86 framework have been at the heart of personal computers and other devices for more than three decades, and a standardized, widely adopted infrastructure for development and support of x86-based products is in place. The head start x86 enjoys over any potential challengers is immense.

But that hasn’t stopped IBM from joining the fray.

In 2014, IBM sold its x86 server business to Lenovo and pinned its hopes for increasing its penetration of the enterprise and cloud server markets on its upgraded Power Systems line. At that time the consensus in the media and among competitors was that IBM’s efforts would be too little, too late. Intel’s x86 standard was simply too well entrenched to be displaced.

But that assessment is beginning to change. Servers based on the company’s Power8 RISC processor seem to be gathering momentum in the marketplace. In 2015, IBM’s financial results revealed that it had enjoyed revenue growth in its Power Systems line for the first time in four years.

Key to that growth, say analysts, was IBM’s decision to add Linux as an alternative to its proprietary AIX operating system. There are now thousands of ISVs (independent software vendors) developing new Power8 Linux applications or working to port existing x86-based Linux applications to the Power environment. And, IBM claims, it has demonstrated some very good reasons for its customers to do exactly that.

In a June 2015 conference presentation, the company revealed certified benchmark test results showing that Power8 servers significantly outperformed x86 servers in running financial workloads. In fact, an IBM Power System 824 server more than doubled the performance of a best-in-class x86 machine.

Says Terry Keene, CEO of Integration Systems, LLC,

“Comparing the x86 and Power processors on a micro-benchmark level will show little raw performance advantages for either. Comparing the two using enterprise workloads will demonstrate a significant advantage for Power in data workloads such as databases, data warehouses, data transaction processing, data encryption/compression, and certainly in high-performance computing.”

IBM is aggressively pursuing its objective of gaining a double-digit share of the server market by 2020. And with its even more powerful Power9 chips due out in 2017, the company seems well positioned to reach that goal.

Why x86 Servers Continue to Dominate the Data Center

It wasn’t that long ago that there was a widespread expectation that x86-based servers would soon be displaced in corporate data centers, and in the cloud, by servers that use ARM processors. But so far, things haven’t turned out that way. Servers using x86 chips still maintain a more than 90 percent market share. As Intel spokesman William Moss notes, “There has been a lot of hype about ARM in the datacenter, but very few deployments.”

ARM chips, which are RISC (Reduced Instruction Set Computer) processors, already dominate the mobile device market. They are widely used in such products as smartphones, laptops, and tablet computers. But their penetration of the server market has so far been minimal. And Linus Torvalds, the creator of the Linux kernel, thinks he knows why.

“What matters is all the infrastructure around the instruction set, and x86 has all that infrastructure,” Torvalds says. “Being compatible just wasn’t as big of a deal for the ARM ecosystem as it has been traditionally for the x86 ecosystem.”

In the world of Android-based mobile devices, the environment in which ARM chips have flourished, there is little standardization between manufacturers. Because the chipsets and hardware configurations of the various smartphones and tablets are unique to single products or product lines, the support infrastructure for ARM implementations is very fragmented. For example, it’s not possible to create a single Android update build that can be deployed across the devices of multiple manufacturers.

On the other hand, the x86 ecosystem has more than 30 years of development behind it, and standards are well understood and widely adhered to. That means it simply takes a lot less time and expense to develop and support x86 server environments than if ARM processors were used.

That’s not to say that the dominance of x86 in the data center is unassailable. IBM, for example, is making a determined effort to grab an increasing share of the server market with its own line of RISC processors, the Power8 and upcoming Power9 products.

But for the moment, x86 remains king of the data center realm.

Come back Thursday to read “How IBM Power Systems Are Challenging x86 Servers in the Corporate Data Center”.