Integrating PowerVC in an IBM i Shop

By Dana Boehler

The speed of business has never been faster. Product release cycles have shrunk to timelines inconceivable in the past. Some fashion retailers are now releasing new product every two weeks, a cycle that historically only happened 4-8 times a year, and certain retailers even have product available immediately after it is displayed on the runway.

The demand for immediate insight into the state of sales numbers, ad campaigns, and other business functions has made the continuous aggregation of data commonplace. And if those factors weren’t pressure enough, the threat of ever-evolving security hazards is generating mountains of updates, code changes, and configuration adjustments — all of which need to be properly vetted before entering a production environment.

All of this activity needs to run on infrastructure that administrators like ourselves must manage, often with fewer coworkers to assist. Thankfully, for those of us running IBM i on IBM Power Systems, IBM has provided a robust cloud management tool that allows us to quickly spin up and spin down systems: PowerVC.

PowerVC allows users to manage existing IBM Power System partitions, create images from those partitions, and deploy new partitions based on those images. More recent versions of PowerVC support IBM i management and deployment (earlier versions did not).

Over the past year, I have been using PowerVC to greatly reduce the amount of time it takes to bring a system into the environment. Typically, creating a new system would take several hours of hands-on keyboard work over the course of a few days of hurry-up-and-wait time. The first time I deployed a partition from PowerVC, however, I was able to reduce that to about an hour, and after more refinements in my deployment, images, and process, I am now down to under 25 minutes. That’s 25 minutes to have a fully deployed, PTF’d system up and running.

The full implications of this may not be readily apparent. Obviously, net new systems can be deployed much more quickly. But more importantly, new modes of development can more easily be supported. PowerVC supports self-service system provisioning, which enables teams to create their own systems for development, test, and QA purposes, and then tear them down when no longer needed. Since the systems are focused on the task at hand, they do not need the resources a fully utilized environment would need.

There’s more: Templates can be created in PowerVC to give the self-service users different CPU and memory configurations, and additional disk volumes can be requested as well. Post-provisioning scripts are supported for making configuration changes after a deployed system is created. In our environment, we are taking this a step further by integrating PowerVC with Red Hat’s Ansible automation software, which has given us greater flexibility in pre- and post-provisioning task automation.
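
As an illustration of what that integration can look like, here is a minimal Ansible sketch that provisions a partition through PowerVC's OpenStack-compatible API. It assumes the openstack.cloud collection is installed, and every name in it (the endpoint, project, image, compute template, and network) is a placeholder rather than anything from our environment:

    ---
    - name: Deploy an IBM i partition through PowerVC (illustrative sketch)
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Create the LPAR from an existing image and compute template
          openstack.cloud.server:
            auth:
              auth_url: https://powervc.example.com:5000/v3   # placeholder PowerVC endpoint
              username: deployer                              # placeholder user
              password: "{{ powervc_password }}"
              project_name: ibm-default
            name: devlpar01                                   # placeholder partition name
            image: ibmi74-golden-image                        # placeholder deployable image
            flavor: small-ibmi                                # placeholder compute template
            network: dev-vlan                                 # placeholder network
            state: present
            wait: true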

In practice, using PowerVC removes many of the barriers to efficient development inherent in traditional system deployment models and permits continuous deployment strategies. Using PowerVC, a developer tasked with fixing a piece of code can spin up a clean test partition with the application and datasets already installed, create the new code fix, spin up a QA environment that has all the scripted tests available for testing the code, and then promote the code to production and delete the partitions that were used for development and testing.

You do have to make some changes to the environment in order to support this model. Code needs to be stored in a repository, so it can be kept in sync between all systems involved. The use of VIOS is also required. Additionally, note that when using this type of environment, the administrator’s role becomes more centered around image/snapshot maintenance (used for deployment templates) and automation scripting rather than the provisioning and maintenance of systems.

For full information on the product and its installation, I recommend visiting IBM’s knowledge center.

Guest Blogger

Dana Boehler is a Systems Engineer and Security Analyst at Rocket Software, specializing in IBM i.

Introducing the POWER9 Server Family

POWER9 is here. As many in our community will be looking to upgrade, we want to provide information on what these new servers offer you and your business.

According to IBM, POWER9-based servers are built for data intensive workloads, are enabled for cloud, and offer industry leading performance.

As you have experienced, Power Systems have a reputation for reliability, and the POWER9-based servers are no exception. POWER9 gives you the reliability you’ve come to trust from IBM Power Systems, the security you need in today’s high-risk environment, and the innovation to propel your business into the future. They truly provide an infrastructure you can bet your business on. From a Total Cost of Ownership (TCO) standpoint, IBM calculates that moving to POWER9 can yield savings of 50% over three to five years.

When compared to other systems, POWER9 outperforms the competition. IBM reports:

  • 2x performance per core on POWER9 vs. X86
  • Up to 4.6x better performance per core on POWER9 vs. previous generations

Learn more about POWER9 by visiting the new landing page. For more detailed data regarding POWER9 performance, be sure to click on the Meet the POWER9 Family link.

Attending the COMMON Fall Conference & Expo? Be sure to attend the POWER Panel session on POWER9. This will be your opportunity to learn more about the servers from experts.

3 Steps You Can Take to Improve Your IBM i’s Security and Ease of Administration

By Dana Boehler

Securing an expansive platform like an IBM i system can be an intimidating task, one that often falls into the hands of a systems administrator when more specialized help is not available in-house. Deciding which tasks and projects will add value while reducing administrative overhead is also difficult. In this article I have chosen three things you can do in your environment to get started, listed in ascending order of time and effort.

1. Run the ANZDFTPWD Command

This command checks the profiles on your system for passwords that are the same as the user profile name and outputs the list to a spooled file. Even on systems with well-controlled *SECADM privileges (the special authority that allows a user to create and administer user profiles), you will find user profiles that have either been created with or reset to a password that is the same as the user profile name, which could give an unauthorized user a way to gain access to system resources. Additionally, the command has options to disable or expire any user profiles found to have default passwords, if desired.
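
For example, you might run it in report-only mode first and then, once you have reviewed the spooled file, let it act on the offending profiles:

    /* Report only: the list of profiles goes to a spooled file */
    ANZDFTPWD ACTION(*NONE)

    /* Or disable the offending profiles and expire their passwords */
    ANZDFTPWD ACTION(*DISABLE *PWDEXP)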

2. Use SQL to Query Security Information from Library QSYS2

In recent updates to the supported IBM i OS versions, IBM made a very powerful set of tools available for querying live system and security data using SQL statements. This allows users with the appropriate authority to create very specific reports on user profiles, group profiles, system values, audit journal data, authorization lists, PTF information and many other useful data points. These objects in QSYS2 are views that access the underlying information directly, so the data is current every time a statement is run. One of the best things about creating output this way is that there is no need to create an outfile to query from, or to refresh it by re-querying. A detailed list of the information available and the necessary PTF and OS levels required to use these tools can be found in IBM’s documentation.
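
As one small example, a query like this one (which assumes the QSYS2.USER_INFO service is available at your PTF level) lists enabled profiles that carry *ALLOBJ special authority:

    SELECT AUTHORIZATION_NAME, STATUS, SPECIAL_AUTHORITIES
      FROM QSYS2.USER_INFO
     WHERE SPECIAL_AUTHORITIES LIKE '%*ALLOBJ%'
       AND STATUS = '*ENABLED'
     ORDER BY AUTHORIZATION_NAME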

3. Implement a Role-based Security Scheme

The saying used to be that the IBM i OS “is very secure”, but that statement has changed to the more accurate “is very securable”. This change in language reflects the reality that, as shipped, these systems are quite open to the world, but they can be among the most secure systems when deployed with security in mind. For those who are not aware of role-based authority on IBM i, it is basically a way of restricting access to system resources using authorities derived from group profiles. Group profiles are created for functions within the organization, and authorities are then assigned to those group profiles. When a user profile is created, it is configured with no direct access to objects on the system; instead, group profiles are added to allow access according to job function.

Although implementing role-based security may seem like a daunting task, it pays huge dividends in ease of administration once the project is in place. For one thing, having role-based security in place allows the administrator to quickly change security settings for whole groups of users at once when needed, instead of touching each user’s profile. It also allows group profiles to own objects instead of individual user profiles, which makes it much easier to remove users who create large numbers of objects or objects that are constantly locked. And because role-based security relies on group profiles for authority, copying a similar user is far less likely to inadvertently grant someone too much or too little authority.
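
As a minimal sketch of the pattern (the profile, library and file names here are made up for illustration), the group gets the authority and the user simply joins the group:

    /* Group profile for the job function; it cannot sign on itself */
    CRTUSRPRF USRPRF(ARCLERK) PASSWORD(*NONE) TEXT('A/R clerk group profile')

    /* Authority is granted to the group, not to individuals */
    GRTOBJAUT OBJ(ARLIB/ARMASTER) OBJTYPE(*FILE) USER(ARCLERK) AUT(*CHANGE)

    /* New users get no direct or special authority; they inherit through the group */
    CRTUSRPRF USRPRF(JSMITH) GRPPRF(ARCLERK) OWNER(*GRPPRF) SPCAUT(*NONE) +
              TEXT('Jane Smith - A/R clerk')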

These are just a few of the things you can do to get started securing your IBM i. In future posts, I intend to delve into more depth, especially regarding role-based security.

Guest Blogger

Dana Boehler is a Senior Systems Engineer at Rocket Software.

Stop Limiting Yourself to 10 Character Field Names in RPG Files

At POWERUp18 this May, someone asked me “When will RPG’s native file access support long names from databases and display files?” The answer is: it already does and has for quite some time! Originally it required reading into data structures, but that’s not even true anymore. Today, RPG’s native file access fully supports long names!

Since having that conversation, I’ve been asking around. It seems that not many RPGers know about this very useful feature. So in today’s post, I’ll show you how it works.

What I Found Out When Asking Around

I didn’t do any sort of official survey, and these statistics weren’t achieved scientifically, but…while I was informally asking around, here’s what I discovered:

  • “Everyone” knows you can create column (field) names longer than 10 characters using SQL
  • “Everyone” knows you can read, insert, update and delete using long field names in SQL
  • About half of the people know that DDS also supports long field names
  • About 25% thought RPG could use long names if you use data structures for I/O
  • Only a few people (maybe 5%) knew RPG supports long names without data structures

We can do it with or without data structures! The data structure support has been around for a long time, and perhaps that’s why more people are familiar with it. (But, it can be awkward to code.)

The ability to use long names without data structures was added in 2014. If you have the latest PTFs for RPG on IBM i 7.1 or 7.2, then you have this support already. It was also included in the RPG compiler that was shipped with IBM i 7.3 in 2016. If you’re running 7.3, PTFs aren’t even needed.

Defining Tables with Long Names in SQL

Defining a field name longer than 10 characters is very natural in SQL. You simply define your table with a statement like this:
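
(The table and column definitions below are just an illustration; any column name longer than 10 characters works the same way.)

    CREATE TABLE CUSTMAS
      ( CUSTOMER_NUMBER   DECIMAL(7, 0)  NOT NULL,
        COMPANY_NAME      VARCHAR(40)    NOT NULL,
        LAST_PAID_DATE    DATE,
        PRIMARY KEY (CUSTOMER_NUMBER) )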

What may not be obvious, however, is that when a field name is longer than 10 characters, there are actually two names assigned to it: the regular “column name” and what SQL refers to as the “system name”, which is limited to 10 characters for compatibility with older tools such as CPYF, OPM languages and CL programs. In the above example, the system will generate names like CUSTO00001 for CUSTOMER_NUMBER and COMPA00001 for COMPANY_NAME.

Since there are still a lot of places where you might want to use the shorter system names, I recommend explicitly naming them. That way your short names can be meaningful. For example:
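
(Again, the names are illustrative; the FOR COLUMN clause is what assigns the 10-character system name.)

    CREATE TABLE CUSTMAS
      ( CUSTOMER_NUMBER  FOR COLUMN CUSTNO    DECIMAL(7, 0)  NOT NULL,
        COMPANY_NAME     FOR COLUMN COMPNAME  VARCHAR(40)    NOT NULL,
        LAST_PAID_DATE   FOR COLUMN LASTPAID  DATE,
        PRIMARY KEY (CUSTOMER_NUMBER) )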

When you are working on modernizing your databases, this dual-name approach is really handy because it lets you keep the old, short (and often ugly) names that you’ve always used, keeping the files compatible with older programs. New programs can use the longer names and be more readable. Eventually, you can rewrite all the programs to use the long names, so this is a great tool for modernizing!

Long Names in DDS (Including Displays and Printers!)

DDS also supports long names using the ALIAS keyword. The name “alias” makes sense if you think about it, since any time you have a field longer than 10 characters, there are two names and either can be used. In other words, they are aliases for each other.

The DDS equivalent would be coded like this:
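
(A sketch using the same illustrative names; DDS is column-position sensitive, so treat the alignment shown here as approximate.)

     A          R CUSTREC
     A            CUSTNO         7P 0       ALIAS(CUSTOMER_NUMBER)
     A            COMPNAME      40A         ALIAS(COMPANY_NAME)
     A            LASTPAID        L         ALIAS(LAST_PAID_DATE)
     A          K CUSTNO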

Mind you, I don’t recommend defining your new physical files using DDS, but for existing files, it may be easier to simply add the ALIAS keyword than to re-do the file using SQL.

Another reason it’s important to know about the ALIAS keyword in DDS is that it is also supported in externally described display, printer and even ICF files. Since all of these types of files support long names and have a great mechanism for backward compatibility with programs that must use short names, there’s no reason not to take advantage of this!

Long Names in RPG Programs

Like DDS, RPG uses the term “alias” when referring to long names. Starting with IBM i 7.1, RPG was able to use this long name support in qualified external data structures. This original support also required you to qualify the record format names (by using the QUALIFIED keyword on the F-spec) because without this keyword, RPG generates input specs. At the time 7.1 was released, I-specs did not support long names.

This support that shipped with 7.1 could be somewhat awkward. For example, the code to update the last paid date for customer 1000 would look like this:
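
Here is a rough sketch of that style (fixed-form specs with invented file, format and field names; column positions are approximate):

     Fcustfile  uf   e           k disk    qualified alias
     D custIn        e ds                  extname(custfile : *input)
     D                                     qualified alias
     D custOut       e ds                  extname(custfile : *output)
     D                                     qualified alias
      /free
           chain 1000 custfile.custrec custIn;
           if %found(custfile);
              eval-corr custOut = custIn;
              custOut.last_paid_date = %date();
              update custfile.custrec custOut;
           endif;
      /end-free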

I found this awkward because you had to use separate data structures for input and output (though a PTF was released later that relaxed that rule) and you had to qualify both the field and record format names. I don’t know if everyone agrees, but I found that cumbersome!

The 2014 update added full support for long names, without any need to use data structures at all! So, if you’re up-to-date on PTFs and are running IBM i 7.1 or newer (which you really should be), you can do this instead:
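
(A sketch of the same update with the same invented names; the ALIAS keyword on the F-spec is what lets the program use the long field names directly, with no data structures.)

     Fcustfile  uf   e           k disk    alias
      /free
           chain 1000 custrec;
           if %found(custfile);
              last_paid_date = %date();
              update custrec;
           endif;
      /end-free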

Naturally, the alias keyword works in free format code as well. Here’s an example:
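
(Same invented names as above, this time with free-form declarations.)

       dcl-f custfile usage(*update) keyed alias;

       chain 1000 custrec;
       if %found(custfile);
          last_paid_date = %date();
          update custrec;
       endif;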

Don’t forget that alias support extends to display files and printer files, too!  There’s no reason to use short cryptic field names anymore. Take advantage of the long ones. You’ll love it!

Completely Free ILEditor and IBM Technology Refresh Recap

Today I’ll look at a powerful open source (and completely free!) IDE for ILE programs (CL, C/C++, COBOL or RPG) named ILEditor, which is being actively developed by Liam Allan, one of the brightest minds in the industry. In fact, last week Allan added a new GUI interface to the editor that makes it feel much more professional, while keeping it easy to use. I’ll also give you a quick overview of the announcement IBM made last week about updates to IBM i 7.2 and 7.3.

The IBM Announcement

On February 13th, just in time for Valentine’s Day (because IBM wants to be my valentine!), IBM announced new Technology Refreshes. These include support for POWER9 processors, which look incredible – but, alas, I’m not a hardware guy. They also include updates to Integrated Web Services (IWS), Access Client Solutions (ACS), RPG and more.
Here are links to the official announcements:

IBM i 7.2 Technology Refresh 8 (TR8)

IBM i 7.3 Technology Refresh 4 (TR4)

You should also check out Steve Will’s blog post.

My Thoughts

The most exciting part of this announcement for me is the introduction of the new DATA-INTO opcode in RPG. Here’s the sample code that IBM provided in the announcement:

DATA-INTO myDs %DATA('myfile.json' : 'doc=file') %PARSER('MYLIB/MYJSONPARS');

It appears that this will work similarly to Open Access: the RPG compiler examines your data structure and other variables (whose details it already knows) and works together with a back-end handler that maps the document data into them. Open Access refers to the back-end program as a “handler”, whereas DATA-INTO seems to call it a “parser”, but the general idea is the same.

As someone who has written multiple open source tools to help RPG developers work with XML and JSON documents, this looks great! One of the biggest challenges I face with these open source projects is that they don’t know the details of the calling program’s variables, so they can’t ever be as easy to use as a tool like XML-INTO. For example, the YAJL tools that I provide to help people read JSON documents require much more code than the XML-INTO opcode, because XML-INTO can read the layout of a data structure and map data into it, whereas with YAJL you must map this data yourself. However, DATA-INTO looks like it will solve this problem, so that once I’ve had time to write a DATA-INTO parser, you’ll be able to use YAJL the same way as XML-INTO.

Unfortunately, as I write this, the PTFs are not yet available, so I haven’t been able to try it. I’m very excited, however, and plan to blog about it as soon as I’ve had a chance to try it out!

What is ILEditor?

ILEditor (pronounced “I-L-Editor”) came from the mind of Liam Allan, who is one of the best and the brightest of the 2018 IBM Champions. I have the privilege of working with Liam at Profound Logic Software, and I can tell you that his enthusiasm for computer technology and IBM i programming know no bounds. In fact, one day last week after work, Liam sent me a text message about his new changes to ILEditor, sounding very excited. When I factored in the time zone difference, I realized it was 1:00 a.m. where he lives!

For many years, one of the most common laments in the IBM i programming community has been about the cost and performance of RDi. Please don’t misunderstand me. I love RDi, and I use it every day. I believe RDi is the best IDE for IBM i development that’s available today. That said, sometimes we need something else for various reasons. Some shops can’t get approval for the cost of RDi. Others might want something that uses fewer resources or something they can install anywhere without needing additional RDi licenses. Whatever the reason, ILEditor is a very promising alternative! I wouldn’t be surprised if it eventually is able to compete with RDi.

Why Not Orion? Or SEU?

The concept of Orion is great. It’s web-based, meaning that you don’t have to install it and it’s available wherever you go. Unfortunately, it’s not really a full IDE – at least not yet! I hope IBM is working to improve it. It does not know how to compile native ILE programs or show compile errors. Its interface is designed around the Git version control software, which makes it tricky to use unless you happen to store your code in Git. And quite frankly, it’s also a little bit buggy. I hope to see improvements in these areas, but right now it’s not a real option.

The most popular alternative to RDi today is SEU. In fact, historically this was the primary way that code was written for IBM i. So, you may think it’s still a good choice. However, I don’t think it’s viable today for two reasons:

  1. The green-screen nature makes it cumbersome to use. This is no problem for a veteran programmer, because they’re used to it. But for IT departments to survive, they need to bring in younger talent. Younger talent is almost always put off by SEU. I even know students who gave up the platform entirely because they thought SEU seemed so antiquated, and they wanted no part of it.
  2. SEU hasn’t received any updates since January 2008. That means all features added to RPG in the past 10 years – which includes three major releases of the operating system – will show as syntax errors in SEU.

About ILEditor

ILEditor runs on Windows and was released as open source under the GNU GPL 3.0 license. That means it is free and can be used for both private and commercial purposes. If you like, you can even download the source code and make your own changes. It can read source from source members or IFS files. In addition to editing the source, it can compile programs, show you the errors in your programs, work with system objects and display spooled files. It even has an Outline View (like RDi does) that will show you the variables and routines in your program.

The main web site for ILEditor is: worksofbarry.com/ileditor/.

If you want to see the source code, you’ll find the Github project here.

You do not need to install any software on your IBM i to use ILEditor. Instead, the Windows program uses the standard FTP server that is provided with the IBM i operating system to get object and source information and to run compile commands. An FTPES (FTP over SSL) option is provided if a more secure connection is desired.

Connecting for the First Time

When you start ILEditor, it will present you with a box where you can select the host to connect to. Naturally, the first time you run it there will be no hosts defined, so the box will be empty. You can click “New Host” to define one.

Once you have a host defined, it will be visible as an icon, and double-clicking the icon will begin the connection.

When you set up a new system, there are five fields you must supply, as shown in the screenshot below:

Alias name = You can set this to whatever you wish. ILEditor will display this name when asking you which host to connect to, so pick something that is easy to remember.

Host name / IP address = the DNS name or IP address of the IBM i to connect to.

Username = Your IBM i user profile name.

Password = Your IBM i password – you can leave this blank if you want it to ask you every time you connect.

Use FTPES = This stands for FTP over Explicit SSL. Check this box if your IBM i FTP server has been configured to allow SSL and you’d like the additional security of using an encrypted connection.

The Main IDE Display

Once you’ve connected, you’ll be presented with a screen that shows the “Toolbox” on the left and a welcome screen containing getting started information and developer news, as shown in the screenshot below.

Any of the panels in ILEditor, including these two, can be dragged to different places on the display or closed by clicking the “X” button in the corner of the panel. There is also an icon of a pin that you can click to toggle whether a panel is always open or whether it is hidden when you’re not using it. If you look carefully on the right edge of the window, you’ll see a bar titled “Outline View”. This is an example of a hidden panel. If you click on the panel title, the panel will open. If you click the pin, it will stay open. You can adjust the size of any panel by dragging its border.

When you open source code, it will be placed in tabs in the center of the display (just as the welcome screen is initially). These can also be resized or moved with the mouse. This makes the UI very flexible and simple to rearrange to best fit your needs.

The Toolbox

Perhaps the best place to start is with the toolbox.  Here’s what that panel looks like:

Most of the options in this panel are self-explanatory. I will not explain them all but will point out a few interesting things that I discovered when using ILEditor:

  • The “Library List” is primarily used when compiling a program. This is the library list to find file definitions and other dependencies that your program will need.
  • The “Compile Settings” lets you customize your compile commands. Perhaps you have a custom command you use when compiling. Or perhaps you use the regular IBM commands but want to change some of the options used. In either case, you’ll want to look at the Compile Settings.
  • As you might expect, “Connection Settings” has the host name, whether to use FTPES and other settings that are needed to connect to the host. In addition to that, there are some other useful options hidden away in the connection settings:
    • On the IFS tab, you’ll find a place to configure where your IFS source code is stored and which library it should be compiled into.
    • On the Editor tab, there is a setting to enable the “Outline View”. You’ll want to make sure this is checked, otherwise you’ll be missing out on this feature.
    • On the ILEditor tab, there’s a setting called “Use Dark Mode”. This will change the colors when it displays your source code to use a black background (as opposed to the default white background), which many people, myself included, find easier on the eyes.
  • When you change something in the “Connection Settings” (including the options described above), you will need to disconnect from the server and reconnect so that the new settings take effect.

Opening Source Code from a Member List

ILEditor allows you to open source code from either an IFS file or a traditional source member. You can use the Member Browser or IFS Browser options in the toolbox to browse your IBM i to find the source you wish to open and open it.

The Member Browser opens as a blank panel with two text fields at the top. At first, I wasn’t sure exactly what these were for, as there wasn’t any explanation. I guessed that this was where you specified the library (on the left) and the source physical file (on the right) that you wanted to browse. It turned out that I was correct. If you type the library and file name and click the magnifying glass, it will show you all the members in that file.

I have a lot of source members that I keep in my personal library, and I often get impatient waiting for the member list to load in RDi. I was pleasantly surprised to see that the member browser in ILEditor loads considerably faster.

There is also a “hidden” feature where you can press Ctrl-P to search the list of recent members that you listed in the member browser. Just press Ctrl-P and start typing, and it’ll show the members that match the search string. This was a very convenient way to find members.

Once you’ve found the member (in either the regular member browser or the “search recent” dialog), you can double-click on the member name to open it.

Create or Open a Member Without Browsing

In the upper-left of the ILEditor window, there is a File menu that works like the file menus found in most other Windows programs. You can click File/New to create a new member or IFS file or File/Open to open an existing member or IFS file when you know the name and therefore don’t need to browse for it.

The File Menu also offers keyboard shortcuts to save time. You can press Ctrl-O for Open, or Ctrl-N for New to bypass the menu.

One thing that I found a little unusual is that you must specify the source type when you open an existing member. I expected this when creating a new member, since the system doesn’t know what it is. But when opening an existing member, I expected it to default to the source type of the member so that you don’t have to specify it every time. I discovered that if you do not specify the type, it will default to plain text. I spoke to Liam about this, and he assured me that this is something he plans to improve in the future. Thankfully, this is not the case when using the member browser. It only happens when opening the member directly.

Working with IFS Files

The IFS Browser can be used to browse the IFS on your IBM i and find the source code that you’d like to open. It will begin browsing the IFS in the directory that you’ve specified in the IFS tab in your connection settings. Any subdirectories found beneath that starting directory can be expanded as well to see the files inside of it.

Like the member browser, double-clicking on an IFS file will open it in the editor.

The File menu also has options for creating a new IFS file or opening an existing IFS file when you know the exact path name. In that case, you do have to type the entire IFS path. There is no option to browse folders as you’d find in the open dialogs of other Windows software. That didn’t seem like a problem to me. If I wanted to see the folders, I’d use the IFS browser instead.

The Source Editor

I found the editor to be very intuitive, since it works the same as you’d expect from a PC file editor. It provides syntax highlighting and an outline view that make the source code very easy to read. In the screenshot below, I’m using “dark mode”, so you’ll see that my source code has a black background.

 

Syntax highlighting worked very nicely in free format RPG, CL and C/C++ code, including code that used the embedded SQL preprocessor.

Unfortunately, it did not work in fixed format RPG code. Liam tells me that fixed format RPG is especially difficult to implement because he codes ILEditor’s syntax highlighting using regular expressions, and regular expressions are difficult to make work for position-dependent source. However, he assured me that he does plan to support fixed format RPG code and is working on solving this problem.

I noticed that I could still type fixed format code and make changes to it, and aside from the source not being colored correctly, it worked fine.

The Outline View was a pleasant surprise, because I wasn’t really expecting an editor other than RDi to have one. It does not have as many features as the RDi outline view, but it worked very nicely for what I needed it for. I was also pleasantly surprised that the Outline View worked with CL code.

Compiling Programs

The compile option can be run by using the Compile menu at the top of the screen, the compile icon (shown in the picture below) or by pressing Ctrl-Shift-C.

I discovered that the compile option does not ask for any parameters. Instead, it uses the options that you specified in your connection and compile settings in the toolbox. So if you want to change one of the default compiler options, you need to change it in the Compile Settings each time.

There are advantages and disadvantages to this approach. The advantage is that it’s very quick and easy to compile a program. When you’re developing software, you often have to compile it many times, and it’s very nice to be able to skip the dialog and just have it compile. The disadvantage is when you want to do something different in a one-off situation. You have to go into the compile settings to change it, so that’s a little bit of extra work. However, I find that I don’t need to do that very often, so this wasn’t a big deal to me.

When an error occurs during the compile, an error listing will open showing you what went wrong, very similar to what you’d find in RDi. Like RDi, you can click on the error and it will position the editor to the exact line of code where the error was found.

One thing that surprised me about the compile and the error message dialog was that it is considerably faster than RDi. That seems strange to me, since both tools are connecting to the IBM i and running the same IBM compiler for RPG. However, I found that depending on the size of the member, the ILEditor compile was 10-20 seconds faster than the RDi one.

RPG Fixed Format to Free Format Converter

One feature of ILEditor that simply did not work well was the RPG converter. Some of the fixed format code in my program would convert, but other things (including things that should’ve converted easily) did not. Code that spanned multiple lines did not convert at all.

In my opinion, the converter needs a lot of work before it will be useful. I pointed this out to Liam, and he told me that he agrees and has a complete rewrite of the converter on his to-do list.

Other Features

I’d like to mention some of the other features of ILEditor that I did not have time to try out before writing this article. Since I didn’t have time, I can’t review them and give my opinion – but, I wanted to mention them. That way, if you’re looking for these features, you can give them a try yourself and see what you think.

  • Source Diff = compares two sources (members or IFS files) and highlights what is different about them.
  • Spooled File Viewer = Lets you view spooled files that are in an output queue
  • SQL Generator = Generates SQL DDL code from an existing database object
  • Offline mode = lets you download source from the IBM i to store on your PC and work on it while you are not connected (for example, when traveling on a plane or train without good internet access), uploading the results later.

My Conclusion

I was extremely impressed by ILEditor. RDi has more features, such as debugging, refactoring and screen/report design, but I was surprised at just how many features ILEditor has, considering it was written by one man in his free time and costs nothing. I was also pleasantly surprised by the performance of ILEditor, which was consistently faster than RDi while using far less memory.

Unfortunately, the lack of syntax highlighting for fixed format RPG will be a problem for many RPG developers, and I sincerely hope that does not discourage them from at least trying ILEditor.

If a lot of people try it, and some of them donate money or give their time to help with development, this tool could easily become a serious competitor to RDi.

Study Shows IBM i Has Big Cost Advantage Over Alternatives

According to an August 2017 study conducted by Quark + Lepton, an independent research and management consulting firm, IBM i on Power Systems servers provides a substantial TCO (total cost of ownership) advantage over equivalent Windows or Linux platforms.

For the study, which was funded by IBM, Quark + Lepton used three different server/database configurations: an IBM Power Systems server running IBM i 7.3 with DB2, an x86 server running Windows Server 2016 and SQL Server 2016, and an x86 server running Linux and Oracle Database 12c. TCO estimates were based on the costs of hardware acquisition and maintenance, OS and database licenses and support, system and database administrator salaries, and facilities expenses. Several different use cases were analyzed.

A Big TCO Advantage

The results of the study showed the projected three-year TCO for the three setups to be as follows:

  • Power Systems/IBM i/DB2 – $430,815
  • x86/Windows/SQL Server – $1.18 million
  • x86/Linux/Oracle – $1.27 million

The study concludes that “costs for use of IBM i on Power Systems are lower across the board”. For example, initial hardware and software acquisition costs for the IBM i systems averaged 8% less than the Windows systems, and fully 24% less than the Linux systems.

Perhaps the most surprising factor in the stark differential between the IBM i solution and the others was in the cost of required support staff. Based on a 300-user scenario, IBM i required 0.3 FTE (full time equivalent) support personnel, compared to 0.5 FTE for the Windows setup and 0.55 FTE for Linux.

But the biggest differential in staff costs arose from the fact that IBM i admins could handle both the OS and the database. Those double-duty IBM i personnel commanded salaries of about $86,000, while Windows and Linux sysadmins were paid $71,564 and $86,843 respectively. However, the Windows and Linux setups also required the support of separate database admins, adding $100,699 (SQL Server) and $103,283 (Oracle) to the personnel costs for those solutions.

Simplicity

In its conclusion the report notes that while the industry is trending toward ever-greater complexity, the simplicity of IBM i makes it by far the most cost-effective platform on which to base an organization’s IT infrastructure.

Hyperconverged Infrastructure (HCI) Now Runs on IBM Power Systems

Today’s corporate data centers are using more and more compute and storage resources to meet rapidly increasing operational requirements. Because traditional data center architectures are experiencing great difficulty in meeting these new demands, an alternative technology is swiftly gaining acceptance. Hyperconverged Infrastructure, or HCI, is on what Stefanie Chiras, IBM’s VP of Power Systems, calls “a rapid growth trajectory.” And now, for the first time, this new technology that is so swiftly penetrating enterprise data centers is available to run on IBM’s Power Systems platforms.

But what, exactly, is hyperconverged infrastructure?

HCI takes the fundamental elements of the data center (servers, data storage, and networking) and packages them together in a single unified appliance. The entire unit, as well as its component parts, is controlled entirely by sophisticated software under the direction of detailed policies established by IT administrators. Both the compute engine and the storage controller run on the same server platform, and each appliance functions as a node in a cluster.

The constituent parts of the HCI appliance are hidden behind a unified “single pane of glass” software interface. So, there is no need for users or applications to deal directly with the hardware or its particular characteristics. The software can automatically and transparently carry out tasks such as performing data backups, scaling out (simply by adding nodes) to provision additional storage as needed, or swapping out nodes that fail. This approach greatly simplifies the IT management task.

Part of the appeal of HCI is that it was designed to run on inexpensive industry-standard x86-compatible servers and storage devices. But that meant IBM’s RISC-based Power Systems line was shut out of this fast-growing market.

HCI and Power Systems

Now, however, IBM has announced that it is partnering with Nutanix, which 451 Research has named as the leading HCI provider, to market appliances based on the Power Systems line rather than x86 servers. Because of the superior compute and data handling capabilities of the Power architecture, IBM believes this new platform will allow enterprise customers to “run any mission critical workload, at any scale, with world-class virtualization and automation capabilities.” The platform is particularly suited to running high-performance database, analytics, machine learning, and artificial intelligence applications.

For IBM, HCI is “a fundamentally different approach to enterprise application needs.” It also represents an important emerging market that IBM didn’t want to be left out of.


Looking for Power Systems education? Take a look at COMMON’s online offerings.

IBM Power Systems Benefits

IBM Power Systems provides one of the leading IT management systems in the market. In the past, it was primarily focused on simply running a smooth operating system and solving key problems. Today, however, it has expanded into new kinds of applications. In particular, the IBM Power Systems Linux-based servers, called the OpenPOWER LC servers, are a hardware solution that many managers will find intriguing.


Speed and Storage

OpenPOWER LC servers have two key benefits. The first is simply the speed and storage capabilities. The key specs are:

  • Up to 20 cores (2.9-3.3 GHz)
  • 2 sockets
  • 512 GB memory (16 DIMMs)
  • 115 GB/sec max sustained memory bandwidth
  • 12 3.5” SATA drives, 96 TB storage
  • 5 PCIe slots, 2 CAPI enabled
  • Capable of 2 NVIDIA K80 GPUs

That means that the analytical and big data capabilities are off the chart. In fact, MongoDB runs twice as fast and EDB Postgres runs 1.8 times as fast on the system.

Companies are dealing with more complex problems that require even more power than ever before. Firms are dealing with analytics issues such as supply chain optimization, agile asset management, fraud prevention and enterprise data management that can only be handled with a powerful big data server like the OpenPOWER LC.

Integration

The second benefit is that the server integrates nicely with existing data systems and other servers. This product is fully compatible and can be plugged right into a server farm. There is no need to do extensive customization or back-end fixes. Instead, IT managers can add it into the existing stock as a powerful new tool.

COMMON is a leading organization helping Power Systems professionals through educational events, certification, and ongoing training. For more information, please visit our website.

The Benefits of Certification

There’s still a lot of competition in today’s job market, and even skilled IT professionals can struggle to find jobs. Getting specialized certifications can make all the difference for people who are looking for a job or trying to succeed in a career.

Career Advancement

Having IT certification can mean the difference between getting the job and staying in the job market for even longer. Some employers will actually narrow down the applicant pool based on who has certification and who doesn’t. Certified individuals will also find it easier to network with people who have similar certifications, which can enhance their career opportunities.

Rising in a company and getting a higher salary often requires people to familiarize themselves with new technologies and acquire new skills. People will usually need to provide some documented evidence that they have such skills and knowledge, and earning new certifications can make that happen.

Career Stability

Employers are more likely to keep certified employees on staff during difficult times. People with high-level certifications have documented skills that other employees won’t have. People who earn new certifications will make themselves more valuable employees.

New certifications allow IT professionals to stay relevant. The technological world changes very quickly. Skills can become obsolete just as rapidly. The people who are progressing further with certifications immediately send the message that they’re able to adapt to this transitory environment. These are the people who will keep their jobs or get new jobs more easily. Certified people know their field and they can network within the field more effectively.

Learn more about COMMON Certification.

IBM Watson For Oncology Going Live in a U.S. Community Hospital

In a first for the U.S., IBM’s Watson For Oncology (WFO) is essentially joining the clinical staff at an American community hospital. After being trained at the Memorial Sloan Kettering Cancer Center, and being tested in hospitals in several parts of the world, Watson will assist doctors at the 327-bed Jupiter Medical Center of Jupiter, Florida, in developing personalized treatment plans for cancer patients.

Why Watson?

What Watson brings to the table is its ability to quickly sift through reams of data from medical journals, textbooks, and clinical trials in order to provide doctors with rankings of the most appropriate treatment options for a particular patient. Identifying the proper treatment regime for cancer patients has always been difficult. Now, with rapid advancements in cancer research and clinical practice, the amount of data available to doctors is far outstripping their ability to keep up with current best practices.

WFO can lift much of what is essentially an information processing task off the shoulders of physicians. By combining information from the medical literature with the patient’s own records and physicians’ notes, Watson can provide a ranked list of personalized treatment options. And if patient records don’t provide all the information it needs for its analysis, Watson will even prompt the physician for more data.

Humans Still Required

Of course WFO is not intended to in any way replace or supersede human physicians. Dr. Abraham Schwarzberg, chief of oncology at Jupiter, thinks of Watson as providing a “second opinion” in the examination room. Doctors can access Watson’s recommendations on a tablet device while the examination of the patient is in progress. “We want a tool that interacts with physicians on the front end as they are prospectively going into making decisions,” says Dr. Schwarzberg.

Results

In a study of 638 breast cancer cases conducted at a hospital in Bengaluru, India, WFO’s treatment recommendations achieved an overall 90 percent rate of agreement with those of a human tumor board. Still, IBM acknowledges that it’s too early to claim that Watson will actually improve outcomes for cancer patients. But with the vastly improved ability to personalize treatment options for individual patients that Watson provides, there’s every reason for optimism. As Nancy Fabozzi, a health analyst at Frost & Sullivan, puts it, “Watson for Oncology is fundamentally reshaping how oncologists derive insights that enable the best possible decision making and highest quality patient care.”