By Dana Boehler
The speed of business has never been faster. Product release cycles have shrunk to timelines that would have been inconceivable in the past. Some fashion retailers now release new products every two weeks, a cycle that historically happened only 4-8 times a year, and certain retailers even have product available immediately after it is displayed on the runway.
The demand for immediate insight into the state of sales numbers, ad campaigns, and other business functions has made the continuous aggregation of data commonplace. And if those factors weren’t pressure enough, the threat of ever-evolving security hazards is generating mountains of updates, code changes, and configuration adjustments — all of which need to be properly vetted before entering a production environment.
All of this activity needs to run on infrastructure that administrators like ourselves must manage, often with fewer coworkers to assist. Thankfully, for those of us running IBM i on IBM Power Systems, IBM has provided a robust cloud management tool that allows us to quickly spin up and spin down systems: PowerVC.
PowerVC allows users to manage existing IBM Power System partitions, create images from those partitions, and deploy new partitions based on those images. More recent versions of PowerVC support IBM i management and deployment (earlier versions did not).
Over the past year, I have been using PowerVC to greatly reduce the amount of time it takes to bring a system into the environment. Typically, creating a new system would take several hours of hands-on keyboard work over the course of a few days of hurry-up-and-wait time. The first time I deployed a partition from PowerVC, however, I was able to reduce that to about an hour, and after more refinements in my deployment, images, and process, I am now down to under 25 minutes. That’s 25 minutes to have a fully deployed, PTF’d system up and running.
The full implications of this may not be readily apparent. Obviously, net new systems can be deployed much more quickly. More importantly, though, new modes of development can be supported more easily. PowerVC supports self-service system provisioning, which enables teams to create their own systems for development, test, and QA purposes, and then tear them down when no longer needed. Since these systems are focused on the task at hand, they do not need the resources a fully utilized environment would require.
There’s more: Templates can be created in PowerVC to give the self-service users different CPU and memory configurations, and additional disk volumes can be requested as well. Post-provisioning scripts are supported for making configuration changes after a deployed system is created. In our environment, we are taking this a step further by integrating PowerVC with Red Hat’s Ansible automation software, which has given us greater flexibility in pre- and post-provisioning task automation.
In practice, using PowerVC removes many of the barriers to efficient development inherent in traditional system deployment models and permits continuous deployment strategies. Using PowerVC, a developer tasked with fixing a piece of code can spin up a clean test partition with the application and datasets already installed, create the new code fix, spin up a QA environment that has all the scripted tests available for testing the code, and then promote the code to production and delete the partitions that were used for development and testing.
You do have to make some changes to the environment in order to support this model. Code needs to be stored in a repository, so it can be kept in sync between all systems involved. The use of VIOS is also required. Additionally, note that when using this type of environment, the administrator’s role becomes more centered around image/snapshot maintenance (used for deployment templates) and automation scripting rather than the provisioning and maintenance of systems.
For full information on the product and its installation, I recommend visiting IBM’s knowledge center.
By Dana Boehler
Unless you’re working in a very large shop as an administrator on IBM i, you are likely reporting to management that did not come from an IBM i background. This dynamic can be challenging, but there are ways to approach the situation that can lead to a more rewarding experience. Here are a few tips for making this situation work better for you, taken from my own experience. I can’t say I’ve always followed these recommendations, but I can say that things tend to go better when I do.
1. Be Patient
As an IBM i administrator, you’ve spent countless hours learning how these systems work – their strengths, their quirks, and idiosyncrasies. You likely take many of these traits for granted, but it’s important to acknowledge that those who do not have your experience will not. Nor will they necessarily make logical conclusions that you may see as obvious. An example that comes to mind is the many times I’ve been asked by an auditor for the list of database users for our “AS/400”. It may be tempting to tell the requestor, “We haven’t had an AS/400 for over 15 years, and our IBM i doesn’t have a separate database logon for the users!” But that will only make you seem like a curmudgeon. Additionally, if that attitude surfaces frequently enough, management will actively try to avoid you and exclude you from important project discussions that may affect your systems.
2. Be a Teacher
Very little of the opposition you will experience to IBM i is the result of a maniacal plot against the platform. Much of the pushback stems from a lack of understanding of how the systems work and what their benefits are. Taking the time to explain how things work, or better yet, hosting a lunch-and-learn session on an aspect of the system, can go a long way toward building your manager’s and coworkers’ familiarity with the platform.
3. Where Possible, Reduce Your Reliance on Jargon
There are many terms that may be misunderstood by a non-IBM i person. iASPs, TCP/IP servers, PTFs, and logical files are all things that someone familiar with the platform would understand, but other administrators may not. Wherever possible, use language appropriate to the audience. Your manager may not know what a logical file is, but if they have used SQL, they will understand what a view is, for instance.
4. Recommend the Right Tool for the Job
The IBM i platform can perform a multitude of functions, including being an application server, a web server, or even an email server. But what you do with it in your organization should be driven by what is right for your business. By recommending solutions involving IBM i only where they make sense, you will foster a reputation as someone who does what is right for your company.
5. Allow Yourself to Learn from the Administrators of Other Platforms
Some of the most interesting things I have done with my IBM i systems were derived by learning from Windows, Linux, and Unix administrators. For example, we migrate non-production partitions at the SAN level using scripts to capture and recreate the HMC profiles, which saves a lot of time. This is a direct result of what I have learned from our non-IBM i staff.
By Dana Boehler
Securing an expansive platform like an IBM i system can be an intimidating task, one that often falls into the hands of a systems administrator when more specialized help is not available in-house. Deciding which tasks and projects will add value while reducing administrative overhead is also difficult. In this article, I have chosen three things you can do in your environment to get started, listed in ascending order of time and effort.
1. Run the ANZDFTPWD Command
This command checks the profiles on your system for passwords that are the same as the user profile name and outputs the list to a spooled file. Even on systems with well-controlled *SECADM privileges (the special authority that allows a user to create and administer user profiles), you will find user profiles that have either been created with or reset to a password that matches the profile name, which could give an unauthorized user a way to gain access to system resources. Additionally, the command has options to disable or expire any user profiles found to have default passwords.
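For example, to produce the report and disable any offending profiles in a single pass (a one-line sketch; prompt the command on your release to confirm the available ACTION values):

```cl
/* Report profiles with default passwords and disable them */
ANZDFTPWD ACTION(*DISABLE)
```

Use ACTION(*NONE) if you only want the spooled-file report without changing any profiles.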
2. Use SQL to Query Security Information from Library QSYS2
In recent updates to the supported IBM i OS versions, IBM made a very powerful set of tools available for querying live system and security data with SQL statements. These allow users with the appropriate authority to create very specific reports on user profiles, group profiles, system values, audit journal data, authorization lists, PTF information, and many other useful data points. These objects in QSYS2 are views that access the underlying information directly, so the data is current every time a statement is run. One of the best things about creating output this way is that there is no need to build an outfile to query against or to refresh it before re-querying. A detailed list of the information available and the necessary PTF and OS levels required to use these tools can be found here.
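As a quick illustration of the kind of report this makes possible, here is a sketch that assumes your OS level includes the QSYS2.USER_INFO view. It lists enabled profiles holding *ALLOBJ special authority, oldest passwords first:

```sql
-- Enabled profiles with all-object special authority
SELECT AUTHORIZATION_NAME, STATUS, PASSWORD_CHANGE_DATE
  FROM QSYS2.USER_INFO
 WHERE STATUS = '*ENABLED'
   AND SPECIAL_AUTHORITIES LIKE '%*ALLOBJ%'
 ORDER BY PASSWORD_CHANGE_DATE
```

Because USER_INFO is a view over live data, rerunning the statement always reflects the current state of the system.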
3. Implement a Role-based Security Scheme
The saying used to be that the IBM i OS “is very secure”, but that statement has changed to the more accurate “is very securable”. This change in language reflects the reality that these systems are now very open to the world as shipped but can be among the most secure systems when deployed with security in mind. For those who are not aware of role-based authority on IBM i, it is essentially a way of restricting access to system resources using authorities derived from group profiles. Group profiles are created for functions within the organization, and authorities are then assigned to those group profiles. When a user profile is created, it is configured with no direct access to objects on the system; instead, group profiles are added to grant the access the user’s job function requires.

Although implementing role-based security may seem like a daunting task, it pays huge dividends in ease of administration once in place. For one thing, it allows the administrator to quickly change security settings for whole groups of users at once, instead of touching each user’s profile. It also allows group profiles, rather than individual user profiles, to own objects, which makes it much easier to remove users who create large numbers of objects or objects that are constantly locked. And because role-based security relies on group profiles for authority, you are far less likely to inadvertently grant a user too much or too little authority when copying a similar user’s profile.
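To make the idea concrete, here is a minimal CL sketch. The profile and object names (GRPACCT, JSMITH, APLIB/ARMASTER) are hypothetical:

```cl
/* Create a group profile for the accounting role; it cannot sign on */
CRTUSRPRF USRPRF(GRPACCT) PASSWORD(*NONE) TEXT('Accounting role group')

/* Grant the role's authorities to the group, not to individuals */
GRTOBJAUT OBJ(APLIB/ARMASTER) OBJTYPE(*FILE) USER(GRPACCT) AUT(*CHANGE)

/* The user gets no direct authority; access comes from the group, */
/* and objects the user creates are owned by the group profile     */
CRTUSRPRF USRPRF(JSMITH) GRPPRF(GRPACCT) OWNER(*GRPPRF) +
          TEXT('J. Smith - Accounting')
```

With this structure in place, changing what the accounting role can do is a single GRTOBJAUT or RVKOBJAUT against GRPACCT rather than a change to every user.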
These are just a few of the things you can do to get started securing your IBM i. In future posts, I intend to delve into more depth, especially regarding role-based security.
Today I’ll look at a powerful open source (and completely free!) IDE for ILE programs (CL, C/C++, COBOL or RPG) named ILEditor, which is being actively developed by Liam Allan, one of the brightest minds in the industry. In fact, last week Allan added a new GUI interface to the editor that makes it feel much more professional, while keeping it easy to use. I’ll also give you a quick overview of the announcement IBM made last week about updates to IBM i 7.2 and 7.3.
The IBM Announcement
On February 13th, just in time for Valentine’s Day (because IBM wants to be my valentine!), IBM announced new Technology Refreshes. These include support for POWER9 processors, which look incredible – but, alas, I’m not a hardware guy. They also include updates to Integrated Web Services (IWS), Access Client Solutions (ACS), RPG and more.
Here are links to the official announcements:
You should also check out Steve Will’s blog post.
The most exciting part of this announcement for me is the introduction of the new DATA-INTO opcode in RPG. Here’s the sample code that IBM provided in the announcement:
DATA-INTO myDs %DATA('myfile.json' : 'doc=file') %PARSER('MYLIB/MYJSONPARS');
It appears that this will work similarly to Open Access, where the RPG compiler will examine your data structure and other variables that it has all the details for and work together with a back-end handler that will map it into a structured format. Open Access refers to the back-end program as a “handler”, whereas DATA-INTO seems to call it a “parser”, but the general idea is the same.
As someone who has written multiple open source tools to help RPG developers work with XML and JSON documents, this looks great! One of the biggest challenges I face with these open source projects is that they don’t know the details of the calling program’s variables, so they can’t ever be as easy to use as a tool like XML-INTO. For example, the YAJL tools that I provide to help people read JSON documents require much more code than the XML-INTO opcode, because XML-INTO can read the layout of a data structure and map data into it, whereas with YAJL you must map this data yourself. However, DATA-INTO looks like it will solve this problem, so that once I’ve had time to write a DATA-INTO parser, you’ll be able to use YAJL the same way as XML-INTO.
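Based purely on the announcement sample above (remember, I haven’t been able to run it yet), I’d expect a complete usage to look something like this sketch. The data structure subfields and the JSON document are hypothetical, and MYLIB/MYJSONPARS is the parser program from IBM’s example:

```rpg
// Hypothetical myfile.json: { "custno": 1234, "name": "Acme Corp" }
dcl-ds myDs qualified;
  custno packed(7 : 0);
  name   varchar(50);
end-ds;

// Parse the IFS document into the data structure, letting the
// back-end parser program map the JSON fields to the DS subfields
data-into myDs %DATA('myfile.json' : 'doc=file')
               %PARSER('MYLIB/MYJSONPARS');
```

If it works like XML-INTO, the compiler will supply the parser with the layout of myDs, so no hand-written mapping code should be needed.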
Unfortunately, as I write this, the PTFs are not yet available, so I haven’t been able to try it. I’m very excited, however, and plan to blog about it as soon as I’ve had a chance to try it out!
What is ILEditor?
ILEditor (pronounced “I-L-Editor”) came from the mind of Liam Allan, one of the best and brightest of the 2018 IBM Champions. I have the privilege of working with Liam at Profound Logic Software, and I can tell you that his enthusiasm for computer technology and IBM i programming knows no bounds. In fact, one day last week after work, Liam sent me a text message about his new changes to ILEditor, sounding very excited. When I factored in the time zone difference, I realized it was 1:00 a.m. where he lives!
For many years, one of the most common laments in the IBM i programming community has been about the cost and performance of RDi. Please don’t misunderstand me: I love RDi, and I use it every day. I believe RDi is the best IDE for IBM i development that’s available today. That said, sometimes we need something else for various reasons. Some shops can’t get approval for the cost of RDi. Others might want something that uses fewer resources, or something they can install anywhere without needing additional RDi licenses. Whatever the reason, ILEditor is a very promising alternative! I wouldn’t be surprised if it eventually is able to compete with RDi.
Why Not Orion? Or SEU?
The concept of Orion is great. It’s web-based, meaning that you don’t have to install it and it’s available wherever you go. Unfortunately, it’s not really a full IDE – at least not yet! I hope IBM is working to improve it. It does not know how to compile native ILE programs or show compile errors. Its interface is designed around the Git version control software, which makes it tricky to use unless you happen to store your code in Git. And quite frankly, it’s also a little bit buggy. I hope to see improvements in these areas, but right now it’s not a real option.
The most popular alternative to RDi today is SEU. In fact, historically this was the primary way that code was written for IBM i. So, you may think it’s still a good choice. However, I don’t think it’s viable today for two reasons:
- The green-screen nature makes it cumbersome to use. This is no problem for a veteran programmer, because they’re used to it. But for IT departments to survive, they need to bring in younger talent. Younger talent is almost always put off by SEU. I even know students who gave up the platform entirely because they thought SEU seemed so antiquated, and they wanted no part of it.
- SEU hasn’t received any updates since January 2008. That means all features added to RPG in the past 10 years – a span that includes three major releases of the operating system – will show as syntax errors in SEU.
ILEditor runs on Windows and was released as open source under the GNU GPL 3.0 license. That means it is free and can be used for both private and commercial purposes. If you like, you can even download the source code and make your own changes. It can read source from source members or IFS files. In addition to editing the source, it can compile programs, show you the errors in your programs, work with system objects and display spooled files. It even has an Outline View (like RDi does) that will show you the variables and routines in your program.
The main web site for ILEditor is: worksofbarry.com/ileditor/.
If you want to see the source code, you’ll find the GitHub project here.
You do not need to install any software on your IBM i to use ILEditor. Instead, the Windows program uses the standard FTP server that is provided with the IBM i operating system to get object and source information and to run compile commands. An FTPES (FTP over SSL) option is provided if a more secure connection is desired.
Connecting for the First Time
When you start ILEditor, it will present you with a box where you can select the host to connect to. Naturally, the first time you run it there will be no hosts defined, so the box will be empty. You can click “New Host” to define one.
Once you have a host defined, it will be visible as an icon, and double-clicking the icon will begin the connection.
When you set up a new system, there are five fields you must supply, as shown in the screenshot below:
Alias name = You can set this to whatever you wish. ILEditor will display this name when asking you which host to connect to, so pick something that is easy to remember.
Host name / IP address = the DNS name or IP address of the IBM i to connect to.
Username = Your IBM i user profile name.
Password = Your IBM i password – you can leave this blank if you want it to ask you every time you connect.
Use FTPES = This stands for FTP over Explicit SSL. Check this box if your IBM i FTP server has been configured to allow SSL and you’d like the additional security of using an encrypted connection.
The Main IDE Display
Once you’ve connected, you’ll be presented with a screen that shows the “Toolbox” on the left and a welcome screen containing getting started information and developer news, as shown in the screenshot below.
Any of the panels in ILEditor, including these two, can be dragged to different places on the display or closed by clicking the “X” button in the corner of the panel. There is also an icon of a pin that you can click to toggle whether a panel is always open or whether it is hidden when you’re not using it. If you look carefully on the right edge of the window, you’ll see a bar titled “Outline View”. This is an example of a hidden panel. If you click on the panel title, the panel will open. If you click the pin, it will stay open. You can adjust the size of any panel by dragging its border.
When you open source code, it will be placed in tabs in the center of the display (just as the welcome screen is initially.) These can also be resized or moved with the mouse. This makes the UI very flexible and simple to rearrange to best fit your needs.
Perhaps the best place to start is with the toolbox. Here’s what that panel looks like:
Most of the options in this panel are self-explanatory. I will not explain them all but will point out a few interesting things that I discovered when using ILEditor:
- The “Library List” is primarily used when compiling a program. This is the library list to find file definitions and other dependencies that your program will need.
- The “Compile Settings” lets you customize your compile commands. Perhaps you have a custom command you use when compiling. Or perhaps you use the regular IBM commands but want to change some of the options used. In either case, you’ll want to look at the Compile Settings.
- As you might expect, “Connection Settings” has the host name, whether to use FTPES and other settings that are needed to connect to the host. In addition to that, there are some other useful options hidden away in the connection settings:
- On the IFS tab, you’ll find a place to configure where your IFS source code is stored and which library it should be compiled into.
- On the Editor tab, there is a setting to enable the “Outline View”. You’ll want to make sure this is checked, otherwise you’ll be missing out on this feature.
- On the ILEditor tab, there’s a setting called “Use Dark Mode”. This will change the colors when it displays your source code to use a black background (as opposed to the default white background), which many people, myself included, find easier on the eyes.
- When you change something in the “Connection Settings” (including the options described above), you will need to disconnect from the server and reconnect so that the new settings take effect.
Opening Source Code from a Member List
ILEditor allows you to open source code from either an IFS file or a traditional source member. You can use the Member Browser or IFS Browser options in the toolbox to browse your IBM i to find the source you wish to open and open it.
The Member Browser opens as a blank panel with two text fields at the top. At first, I wasn’t sure exactly what these were for, as there wasn’t any explanation. I guessed that this was where you specified the library (on the left) and the source physical file (on the right) that you wanted to browse. It turned out that I was correct. If you type the library and filename and click the magnifying glass, it will show you all the members in that file.
I have a lot of source members that I keep in my personal library, and I often get impatient waiting for the member list to load in RDi. I was pleasantly surprised to see that the member browser in ILEditor loads considerably faster.
There is also a “hidden” feature where you can press Ctrl-P to search the list of recent members that you listed in the member browser. Just press Ctrl-P and start typing, and it’ll show the members that match the search string. This was a very convenient way to find members.
Once you’ve found the member (in either the regular member browser or the “search recent” dialog), you can double-click on the member name to open it.
Create or Open a Member Without Browsing
In the upper-left of the ILEditor window, there is a File menu that works like the file menus found in most other Windows programs. You can click File/New to create a new member or IFS file or File/Open to open an existing member or IFS file when you know the name and therefore don’t need to browse for it.
The File Menu also offers keyboard shortcuts to save time. You can press Ctrl-O for Open, or Ctrl-N for New to bypass the menu.
One thing that I found a little unusual is that you must specify the source type when you open an existing member. I expected this when creating a new member, since the system doesn’t know what it is. But when opening an existing member, I expected it to default to the source type of the member so that you don’t have to specify it every time. I discovered that if you do not specify the type, it will default to plain text. I spoke to Liam about this, and he assured me that this is something he plans to improve in the future. Thankfully, this is not the case when using the member browser. It only happens when opening the member directly.
Working with IFS Files
The IFS Browser can be used to browse the IFS on your IBM i and find the source code that you’d like to open. It will begin browsing the IFS in the directory that you’ve specified in the IFS tab in your connection settings. Any subdirectories found beneath that starting directory can be expanded as well to see the files inside of it.
Like the member browser, double-clicking on an IFS file will open it in the editor.
The File menu also has options for creating a new IFS file or opening an existing IFS file when you know the exact path name. In that case, you do have to type the entire IFS path. There is no option to browse folders as you’d find in the open dialogs of other Windows software. That didn’t seem like a problem to me. If I wanted to see the folders, I’d use the IFS browser instead.
The Source Editor
I found the editor to be very intuitive, since it works the same as you’d expect from a PC file editor. It provides syntax highlighting and an outline view that make the source code very easy to read. In the screenshot below, I’m using “dark mode”, so you’ll see that my source code has a black background.
Syntax highlighting worked very nicely in free format RPG, CL and C/C++ code, including code that used the embedded SQL preprocessor.
Unfortunately, it did not work in fixed format RPG code. Liam tells me that fixed format RPG is especially difficult to implement because he codes ILEditor’s syntax highlighting using regular expressions, and regular expressions are difficult to make work for position-dependent source. However, he assured me that he does plan to support fixed format RPG code and is working on solving this problem.
I noticed that I could still type fixed format code and make changes to it, and aside from the source not being colored correctly, it worked fine.
The Outline View was a pleasant surprise, because I wasn’t really expecting an editor other than RDi to have one. It does not have as many features as the RDi outline view, but it worked very nicely for what I needed it for. I was also pleasantly surprised that the Outline View worked with CL code.
The compile option can be run by using the Compile menu at the top of the screen, the compile icon (shown in the picture below) or by pressing Ctrl-Shift-C.
I discovered that the compile option does not ask for any parameters. Instead, it uses the options that you specified in the connection and compile settings in the toolbox. So if you want to change one of the default compiler options, you need to change it in the compile settings each time.
There are advantages and disadvantages to this approach. The advantage is that it’s very quick and easy to compile a program. When you’re developing software, you often have to compile it many times, and it’s very nice to be able to skip the dialog and just have it compile. The disadvantage is when you want to do something different in a one-off situation. You have to go into the compile settings to change it, so that’s a little bit of extra work. However, I find that I don’t need to do that very often, so this wasn’t a big deal to me.
When an error occurs during the compile, an error listing will open showing you what went wrong, very similar to what you’d find in RDi. Like RDi, you can click on the error and it will position the editor to the exact line of code where the error was found.
One thing that surprised me about the compile and the error message dialog was that it is considerably faster than RDi. That seems strange to me, since both tools are connecting to the IBM i and running the same IBM compiler for RPG. However, I found that depending on the size of the member, the ILEditor compile was 10-20 seconds faster than the RDi one.
RPG Fixed Format to Free Format Converter
One feature of ILEditor that simply did not work well was the RPG converter. Some of the fixed format code in my program would convert, but other things (including things that should’ve converted easily) did not. Code that spanned multiple lines did not convert at all.
In my opinion, the converter needs a lot of work before it will be useful. I pointed this out to Liam, and he told me that he agrees and has a complete rewrite of the converter on his to-do list.
I’d like to mention some of the other features of ILEditor that I did not have time to try out before writing this article. Since I didn’t have time, I can’t review them and give my opinion – but I wanted to mention them. That way, if you’re looking for these features, you can give them a try yourself and see what you think.
- Source Diff = Compares two sources (members or IFS files) and highlights the differences between them.
- Spooled File Viewer = Lets you view spooled files that are in an output queue.
- SQL Generator = Generates SQL DDL code from an existing database object.
- Offline mode = Lets you download source from the IBM i to store on your PC and work on it while you are not connected (for example, when traveling on a plane or train without good internet access), uploading the results later.
I was extremely impressed by ILEditor. RDi has more features, such as debugging, refactoring and screen/report design, but I was surprised at just how many features ILEditor has, considering it was written by one man in his free time and costs nothing. I was also pleasantly surprised by the performance of ILEditor, which was consistently faster than RDi while using far less memory.
Unfortunately, the lack of syntax highlighting for fixed format RPG will be a problem for many RPG developers, and I sincerely hope that does not discourage them from at least trying ILEditor.
If a lot of people try it, and some of them donate money or give their time to help with development, this tool could easily become a serious competitor to RDi.
Perhaps the biggest area of growth in IBM i programming over the past several years has been the Open Source languages. There are thousands of utilities, mostly designed for Unix, that you can run in the QShell and PASE environments, and these have become very popular on IBM i! However, running these tools from your RPG and CL programs can be tricky. This post will introduce you to a free utility called UNIXCMD that makes it much easier.
Why is it Tricky?
It’s tricky because of a difference in the way IBM i and Unix systems run programs. When a program is called on Unix, a new “process” is created. This is very much like a new job on IBM i, except that it is created each time a program is called. This gives the calling program a choice: it can either stop and wait for the called program to finish, or it can continue and run simultaneously with the program it called. In a way, this is similar to submitting a batch job on i, but it’s different in that both programs can interact with the user’s display.
There are two common ways of running Unix programs: PASE via the QP2SHELL API, and QShell via the STRQSH (or its alias QSH) CL command.
The QP2SHELL API runs a PASE program directly in the current job, which will cause problems if the called program expects to be able to run simultaneously with the caller. Programs that use multiple threads, in particular, can have strange problems that are very hard to troubleshoot. The fix is to spawn a child job to run QP2SHELL so that it’s not in the same job as the caller. To enable input and output, you need to connect pipes to that child job. This is a lot of work, and often more than a programmer bargained for. (In fact, my description is somewhat oversimplified, to keep this brief!)
The STRQSH CL command solves this by always spawning its own child job and connecting pipes to it; it does all of this for you. The problem is that you are limited in how you can interact with the input and output streams. For a program to work with them, the only real option is to read and write temporary files. This works, but it is cumbersome: you have to create the file, clear it, and redirect the I/O, and you can only read the output once the whole process has finished.
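A common version of that workaround looks something like this sketch, which captures the output of ls into a stream file and then copies it into a work file in QTEMP so a program can read it (the file names are illustrative):

```cl
/* Work file to hold the captured output */
CRTPF FILE(QTEMP/LSOUT) RCDLEN(132)

/* Run the command, redirecting stdout to a temporary stream file */
QSH CMD('ls /QIBM > /tmp/lsout.txt')

/* Copy the stream file into the work file for the program to read */
CPYFRMSTMF FROMSTMF('/tmp/lsout.txt') +
           TOMBR('/QSYS.LIB/QTEMP.LIB/LSOUT.FILE/LSOUT.MBR') +
           MBROPT(*REPLACE)
```

Note that none of the output is available until QSH has finished – exactly the limitation described above.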
My solution is to create tools that do the work of submitting the child job for you, and connecting the pipes to a simple interface that is easy to use from your program. For a CL program, I’ve provided simple commands that open, read, write and close the connection, allowing you to read and write in a natural way. In RPG, I’ve used the Open Access interface so that you can open the connection with RPG’s native file interface, and read and write using the standard open, read, write and close opcodes.
The RPG Interface
Let’s take a look at RPG first. My initial examples use the newer “all-free” approach to writing RPG, but if you’re unable to use newer RPG, don’t fret – see the section titled “Older RPG Code”, below.
This example switches to the /QIBM directory in the IFS, lists the files in that directory, and prints them to the spool:
Notice the HANDLER keyword on the DCL-F statement. This tells RPG to access this file through the Open Access interface. When you run any of RPG’s file opcodes against this file, it will (under the covers) call routines in the UNIXCMDOA program, which allows my tool to take control and handle all of the work for you.
The command to run is provided in the second parameter to the HANDLER keyword. Since I’m setting that command in calculations in my program, I do not open the file until the command variable is set. For that reason, the file is declared with the USROPN keyword, and I open it explicitly with the OPEN opcode.
Unix utilities allow you to run multiple commands on a single line if you separate them with a semicolon. In this example, the cd command is used to switch to the /QIBM directory, and then the ls command is run afterwards to get the output.
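For example (generic POSIX shell; on a non-IBM system, /tmp stands in for an IFS path such as /QIBM):

```shell
# Two commands on one line, separated by a semicolon:
# cd changes the current directory, then ls lists its contents.
cd /tmp; ls
```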
To get the output from the Unix ls (list directory) command, I simply use the READ opcode. Since this is a program-described file (there are never any defined fields in a Unix input or output stream) I am using RPG’s feature that lets me read program-described data into a data structure.
When I’m done, I use the CLOSE opcode to shut down the Unix process. One thing that surprises people who are new to UNIXCMD is that any errors that occur in the background Unix program will be reported on the CLOSE opcode rather than the OPEN, READ or WRITE. This is because the Unix program has the opportunity to write error messages, and will not report that it has failed until the program has ended. Catching errors can be done easily with RPG’s MONITOR and ON-ERROR opcodes.
By default, the UNIXCMD utility runs your program using the QShell interface. If you prefer to run the PASE interface and avoid the QShell environment, you can do that by prefixing the command string with “pase:”, as shown in the next example.
The examples so far have only read output from a Unix command. In the next example, I’d like to demonstrate sending data both ways. In this case, I’m calling a PHP script that calls a web service to geocode an address. In other words, I pass an address as input, and the script returns the latitude and longitude coordinates where that address can be found. To do that, I’ve written a PHP script that receives the address from its “standard input” (that is, the pipe that is connected to its input stream) and writes the coordinates to standard output. (If you’re interested in the PHP code, it is included in the downloadable examples on my web site.) To call it from RPG, I can simply write my data to it, and then use the READ opcode to get the results.
The first thing you’ll notice in this statement is that it sets the Unix PATH variable. This is done because many people don’t have the PHP command in their PATH. PATH is very much like a library list, except that it is a list of IFS directories that the Unix environment uses to find a command. The PATH statement adds the directory where we’ve placed the php-cli command (CLI stands for “command line interface”) so that QShell can find it.
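As a generic illustration (plain POSIX shell; the directory name is an assumption, not necessarily where php-cli lives on your system), prepending a directory to PATH works like adding a library to the top of a library list:

```shell
# PATH is a colon-separated list of directories searched, in
# order, when a command name is entered.
PATH="/usr/local/php/bin:$PATH"   # put our directory first
export PATH                       # make it visible to child processes
echo "$PATH"                      # the new directory now leads the list
```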
Another important thing to note is that data sent as input through the pipe is not automatically converted from EBCDIC to ASCII or Unicode. To solve that problem, I added a call to the QShell iconv utility, which can translate between different CCSIDs. In this case, it converts between CCSID 0 (a special value that means “this job’s CCSID”) and iso-8859-1, which is CCSID 819 and a flavor of ASCII.
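The same iconv utility exists on most Unix-like systems; here is a generic sketch (converting between UTF-8 and ISO-8859-1 rather than from an IBM i job CCSID):

```shell
# Pipe text through iconv to translate it from one character
# encoding to another, as the UNIXCMD example does for EBCDIC data.
echo "Hello, world" | iconv -f utf-8 -t iso-8859-1
```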
A Note About Input and Output
UNIXCMD assumes that you will write all the data that is sent as input to the Unix command first, before the first time you read its output. When you read the Unix output, it will shut down the Unix input stream to signal the Unix program that no more data is coming. This works well in most applications.
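The sort utility is a convenient way to picture this write-everything-first model (a generic shell sketch): sort cannot produce any output until its input stream has been closed, so all input must be written before the first read.

```shell
# All input is written to the pipe, the input stream is closed,
# and only then can the output be read -- sort emits nothing
# until it has seen end-of-input.
printf 'pear\napple\n' | sort
```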
However, if you would prefer that it not close the stream, this can be done with a very simple code change to the UNIXCMD utility. If this would be useful to you, please e-mail me at firstname.lastname@example.org and I’ll be glad to show you how to change it.
Older RPG Code
Sadly, not everyone has a current version of RPG. To help those people, I’ve provided a way of using UNIXCMD that’s compatible with even the oldest versions of RPG IV using the SPECIAL file interface. Here is the last example rewritten to use that approach.
Notice that the command is passed to the SPECIAL file through the PLIST. The second parameter in the PLIST is not shown in this example, but you can add a 1 character second parameter and set it to P if you want to run in the PASE environment. If you do not pass this parameter (or set it to a Q) it will run QShell instead. Here is an excerpt of code that does that:
Since the SPECIAL file approach does not require Open Access, it will work all the way back to V5R3.
Using the UNIXCMD Tool from CL
Since the CL programming language does not support Open Access, there is no way to use the standard IBM-supplied SNDF or RCVF commands that you are used to using with a normal file. Instead, I have created my own CL commands named OPNPIPE, SNDPIPE, RCVPIPE and CLOPIPE that handle the open, send, receive and close functions, respectively. The OPNPIPE command accepts the command to run, and lets you designate whether it is PASE or QShell. Aside from these differences, the UNIXCMD utility works the same from CL as it does from RPG.
Here’s an example of listing files (like the first RPG example) using the CL interface:
Get the UNIXCMD Utility
UNIXCMD is an open source tool that is available at no charge. You can download it, the examples given in this article, and a few more examples from my web site at the following link:
According to an August 2017 study conducted by Quark + Lepton, an independent research and management consulting firm, IBM i on Power Systems servers provides a substantial TCO (total cost of ownership) advantage over equivalent Windows or Linux platforms.
For the study, which was funded by IBM, Quark + Lepton used three different server/database configurations: an IBM Power Systems server running IBM i Operating System V7.3 with DB2, an x86 server running Windows Server 2016 and SQL Server 2016, and an x86 server running Linux and Oracle Database 12c. TCO estimates were based on the costs of hardware acquisition and maintenance, OS and database licenses and support, system and database admin personnel salaries, and facilities expenses. Several different use cases were analyzed.
A Big TCO Advantage
The results of the study showed the projected three-year TCO for the three setups to be as follows:
- Power Systems/IBM i/DB2 – $430,815
- x86/Windows/SQL Server – $1.18 million
- x86/Linux/Oracle – $1.27 million
The study concludes that “costs for use of IBM i on Power Systems are lower across the board”. For example, initial hardware and software acquisition costs for the IBM i systems averaged 8% less than the Windows systems, and fully 24% less than the Linux systems.
Perhaps the most surprising factor in the stark differential between the IBM i solution and the others was in the cost of required support staff. Based on a 300-user scenario, IBM i required 0.3 FTE (full time equivalent) support personnel, compared to 0.5 FTE for the Windows setup and 0.55 FTE for Linux.
But the biggest differential in staff costs arose from the fact that IBM i admins could handle both the OS and the database. Those double-duty IBM i personnel commanded salaries of about $86,000, while Windows and Linux sysadmins were paid $71,564 and $86,843 respectively. However, the Windows and Linux setups also required the support of separate database admins, adding $100,699 (SQL Server) and $103,283 (Oracle) to the personnel costs for those solutions.
In its conclusion the report notes that while the industry is trending toward ever-greater complexity, the simplicity of IBM i makes it by far the most cost-effective platform on which to base an organization’s IT infrastructure.
Speaker: Scott Forstie
During this 40-minute recording, Scott explains the new and enhanced DB2 for i features being delivered on March 31, 2017 to IBM i 7.2 and IBM i 7.3.
Database enhancements are delivered via the DB2 PTF Group SF99702 (IBM i 7.2) and SF99703 (IBM i 7.3), scheduled to coincide with Technology Refreshes (TRs).
DB2 for i continues to deliver new SQL programming capabilities, DBE improvements, IBM i Services and other high priority enhancements.
About the Speaker
Scott Forstie is the DB2 for i Business Architect at IBM. He has worked on IBM operating system development since joining the company in 1989. In addition to his development responsibilities, he is the IBM i developerWorks content manager and IBM i Technology Updates wiki owner.
With fintech companies moving toward new transaction models for blockchain support of payment and lending transactions, IBM has launched new developer tools, software, and training programs targeted at financial services software developers. Version 7.3 of IBM i was released in April 2016. Requiring little to no onsite IT administration during standard operations, IBM i makes blockchain programming endeavors possible.
IBM Bluemix Garage developers are using the Bluemix PaaS (platform as a service) capabilities to test cloud network solutions designed to unlock the potential of blockchain. The Hyperledger Project, set up to advance blockchain technology as a cross-industry, enterprise-level open standard for distributed ledgers, will be critical to the development of the latest fintech IaaS (infrastructure as a service) technologies as they emerge.
The collaboration of software developers on blockchain framework and platform projects stands to promote the transparency and interoperability of fintech IaaS. Providing the support required to bring blockchain technologies into adoption by mainstream commercial entities, Bluemix Garage developers are keen on IBM i database programming as a turnkey solution for operating systems on Power Systems and PureSystems servers.
The recent release of fintech and blockchain courses by the IBM Learning Lab offers training and use cases for financial operations analysts and developers. Through partnerships with blockchain education programs and coding communities, IBM is engaging the best in cognitive developer talent to capture ideas for the next generation of APIs, artificial intelligence apps, and business process solutions from the IBM i community.
When it comes to your information, keeping it out of the hands of cyber thieves is a high priority. As technology regularly evolves, so should your IT security measures. You must have multiple layers of security protecting your systems. Below are five basic tips to aid you in keeping your information as safe as possible.
- Minimum privileges means deciding who has authority to which information. Limit access based on job duties and you limit the chances of your system being breached. For example, your receptionist usually won’t require access to payroll, and your transportation manager doesn’t need to snoop in HR files. This is easily adjusted as job requirements change.
- Firewalls are your friend. Firewalls are meant to keep unauthorized users from accessing your systems. They are not infallible, but when used along with complex passwords and anti-spyware/anti-virus programs they can provide that extra level of security.
- Have a back-up plan. In Computers 101, you learn to back up your information. This is essential, but do you also have a back-up procedure to fall back on if your system is attacked? You need the ability to keep functioning during a system repair or replacement.
- Prioritize your systems: decide which are most vulnerable to attack and which are most valuable. You’ll want the heaviest measures deployed on the highest-level, most vital systems and data. Accomplishing this without leaving the lower levels unprotected is important.
- Constantly evolve your defenses and grow with the changing threats. Just as hackers will continue to find ways to chip away at your security measures, your IT department will need to develop new ways of repelling them.
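The first tip, minimum privileges, can be pictured in Unix file-permission terms (a generic shell sketch; the file name is illustrative, and on IBM i you would use object authorities instead):

```shell
# Owner may read and write, the group may only read,
# and everyone else has no access at all.
touch payroll.csv
chmod 640 payroll.csv
ls -l payroll.csv    # permissions show as -rw-r-----
rm -f payroll.csv
```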
Taking these general tips further, you can grow your knowledge of IBM i security by watching recordings of past COMMON webcasts:
Be secure out there!