COMMON Memories

Recently Anne Lucas, former President of the COMMON Board of Directors, shared some photos from 1992-1994 with us. Now, we want to share them with you.

If you have any memories of this time period, feel free to share them in the Comments section found at the bottom of this post.

San Antonio

COMMON's Anne Lucas and IBM executive Buell Duncan pose with a four-legged friend in San Antonio.
IBM CEO Lou Gerstner and COMMON's Anne Lucas mingle with the crowd at Opening Session.

COMMON International Meeting

The pictured individuals include Bib Anderson, IBM Liaison; Amiram Shore, COMMON Israel; and Anne Lucas, COMMON.

5 Tips for Working Under Management Unfamiliar with IBM i

By Dana Boehler

Unless you’re working in a very large shop as an administrator on IBM i, you are likely reporting to management that did not come from an IBM i background. This dynamic can be challenging, but there are ways to approach the situation that can lead to a more rewarding experience. Here are a few tips for making this situation work better for you, taken from my own experience. I can’t say I’ve always followed these recommendations, but I can say that things tend to go better when I do.

1. Be Patient

As an IBM i administrator, you’ve spent countless hours learning how these systems work – their strengths, quirks, and idiosyncrasies. You likely take many of these traits for granted, but it’s important to acknowledge that those who do not have your experience will not. Nor will they necessarily draw conclusions that you may see as obvious. An example that comes to mind is the many times I’ve been asked by an auditor for the list of database users for our “AS/400”. It may be tempting to tell the requestor, “We haven’t had an AS/400 for over 15 years, and our IBM i doesn’t have a separate database logon for the users!” But that will only make you seem like a curmudgeon. Additionally, if that attitude surfaces frequently enough, management will actively try to avoid you and exclude you from important project discussions that may affect your systems.

2. Be a Teacher

Very little of the opposition you will experience to IBM i is the result of a maniacal plot against the platform. Much of the pushback derives from a lack of understanding of how the systems work and what their benefits are. Taking the time to explain how things work, or better yet, hosting a lunch-and-learn session on an aspect of the system, can go a long way toward building your manager’s and coworkers’ familiarity with the system.

3. Where Possible, Reduce Your Reliance on Jargon

There are many terms that may be misunderstood by a non-IBM i person. iASPs, TCP/IP servers, PTFs, and logical files are all things that someone familiar with the platform would understand, but administrators of other platforms may not. Wherever possible, use language appropriate to the audience. Your manager may not know what a logical file is, but if they have used SQL, they will understand what a view is, for instance.

4. Recommend the Right Tool for the Job

The IBM i platform can perform a multitude of functions, including being an application server, a web server, or even an email server. But what you do with it in your organization should be driven by what is right for your business. By recommending solutions involving IBM i only where they make sense, you will foster a reputation as someone who does what is right for your company.

5. Allow Yourself to Learn from the Administrators of Other Platforms

Some of the most interesting things I have done with my IBM i systems were derived by learning from Windows, Linux, and Unix administrators. For example, we migrate non-production partitions at the SAN level using scripts to capture and recreate the HMC profiles, which saves a lot of time. This is a direct result of what I have learned from our non-IBM i staff.

Guest Blogger

Dana Boehler is a Senior Systems Engineer at Rocket Software.

Introducing the POWER9 Server Family

POWER9 is here. As many in our community will be looking to upgrade, we want to provide information on what these new servers offer you and your business.

According to IBM, POWER9-based servers are built for data intensive workloads, are enabled for cloud, and offer industry leading performance.

As you have experienced, Power Systems have a reputation for reliability, and the POWER9-based servers are no exception. POWER9 gives you the reliability you’ve come to trust from IBM Power Systems, the security you need in today’s high-risk environment, and the innovation to propel your business into the future. They truly provide an infrastructure you can bet your business on. From a Total Cost of Ownership (TCO) standpoint, IBM calculates that moving to POWER9 can yield savings of 50% over three to five years.

When compared to other systems, POWER9 outperforms the competition. IBM reports:

  • 2x performance per core on POWER9 vs. X86
  • Up to 4.6x better performance per core on POWER9 vs. previous generations

Learn more about POWER9 by visiting the new landing page. For more detailed data regarding POWER9 performance, be sure to click on the Meet the POWER9 Family link.

Attending the COMMON Fall Conference & Expo? Be sure to attend the POWER Panel session on POWER9. This will be your opportunity to learn more about the servers from experts.

Planning For IT Disaster Recovery with DRaaS

Of all business areas, an organization’s IT infrastructure is among the most susceptible to the impact of disastrous events. What’s more, IT recovery can be exceedingly traumatic to a business.

IBM’s DRaaS (Disaster Recovery as a Service) – Offering Prompt Recovery from IT Outage

Analysts recommend different approaches to planning recovery from IT disasters, but IBM’s IT recovery service stands apart. IBM provides “rapid recovery”, followed by “continuous replication” of all the critical aspects of IT infrastructure. DRaaS eliminates the need for redundant on-site recovery servers, and its optimized resiliency offers verifiable recovery with greater automation through end-to-end services. DRaaS also dramatically reduces Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

While it’s clear IBM’s recovery service can protect businesses from catastrophic disasters, it also offers protection from less dramatic but equally devastating disruptions. Results from this year’s IDG Enterprise Research Survey show that ransomware attacks are climbing. While the threat to IT security is increasing, organizations can mitigate losses by reducing RTO and RPO with DRaaS.

Here are some more DRaaS features that make it a powerhouse for IT recovery:

  • DRaaS can run disaster recovery tests without impeding the replication of data, while keeping costs low
  • DRaaS offers an “Alternate Work Area” for businesses, so they don’t experience downtime due to disruptions at their locations

If you would like to learn more about IBM’s DRaaS, check out COMMON’s Webcast “DRaaS 101: Tips for Evaluating Cloud for Disaster Recovery as a Service.” Not a COMMON member? Join now.

Cloud Technologies – Containerizing Legacy Apps

Information technologies are continually in a state of transition and organizations often need tools to help them transition from one platform to another, especially with regard to legacy apps. Many companies either still find value in these apps or simply cannot make the transition to Cloud technologies fast enough due to budgetary concerns or other reasons. For these organizations, IBM is now offering the Cloud Private platform, which allows businesses to embrace the Cloud by not only containerizing their legacy apps but also containerizing the platform itself, along with other IBM tools and many of the notable open source databases.

Providing Bridges

Through its Cloud Private platform, IBM provides a bridge between current cloud services and an organization’s on-site data infrastructure. In essence, it allows a company’s legacy apps to interact with cloud data. IBM understands the value of making a platform accessible to other technologies, a philosophy it applied to the Cloud Private tools as well. Whether an organization uses Microsoft’s Azure cloud platform or Amazon Web Services, IBM’s Cloud Private is flexible enough to work with either.

A Comprehensive Package

IBM’s Cloud Private platform offers a comprehensive package of tools to help companies mix and mingle their legacy apps with other cloud services and data. The Cloud Private toolset includes components for:

  • Cloud management automation
  • Security and data encryption
  • The core cloud platform
  • Infrastructure options
  • Data and analytics
  • Support for applications
  • DevOps tools

Providing a comprehensive transition tool, such as the one IBM developed, should help companies make the most of their investment in their legacy apps. In addition, it will give them the time buffer they need before eventually making a full transition to the Cloud.

Board of Directors Election Results

COMMON Announces 2018-2019 Board of Directors

SAN ANTONIO, Texas, May 23, 2018: COMMON, the largest association of IBM technology users, announced its new Board of Directors during the Meeting of the Members at the Marriott Rivercenter hotel today. With election results in, Amy Hoerle was reelected to the Board of Directors. She is joined by Pete Helgren and Scott Klement, all serving three-year terms.

At the meeting, COMMON also announced the new Board of Directors officers for 2018-2019:

  • President – Larry Bolhuis
  • Vice President – Amy Hoerle
  • Treasurer – Gordon Leary
  • Secretary – Yvonne Enselman
  • Immediate Past President – Justin Porter

Other members of the Board of Directors are Charles Guarino, Steve Pitcher and John Valance. Manzoor Siddiqui, Executive Director of COMMON, and Alison Butterill, World-Wide Product Offering Manager for IBM i, also serve on the Board.

The COMMON Board of Directors is made up of dedicated volunteers elected by the association’s membership to provide stewardship to the organization.

Artificial Intelligence – The Number One IT Career Path

There is good news for those who have decided to acquire artificial intelligence skills as part of their IT career path. IBM Watson gurus will be pleased to learn that AR and VR skills hold the top spot among in-demand skills for at least the next couple of years. According to IDC, total spending on products and services that incorporate AR and/or VR concepts will soar from $11.4 billion in 2017 to almost $215 billion by 2021. That phenomenal growth is going to require a steady stream of IT professionals who can fill the need in these rapidly expanding fields and others. Read on to learn more about the top five IT careers that show nothing less than extreme promise for anyone willing to reach for the rising IT stars.

Computer Vision Engineer

According to the popular job search site Indeed, the top IT position most in demand for the next few years goes to computer vision engineers. These types of positions will require expertise in the creation and continued improvement in computer vision and machine learning algorithms, along with analytics designed to discover, classify, and monitor objects.

Machine Learning Engineer

If vision engineers are responsible for envisioning new ideas, then machine learning engineers are responsible for the actual creation and execution of the resulting products and services. Machine learning engineers actually develop the AI systems and machines that will understand and apply their built-in knowledge.

Network Analyst

As AI continues to grow exponentially, so does the IoT. This means an increased demand for network analysts who can apply their expertise to the expansion of networks required to support a variety of smart devices.

Security Analyst

As AR and VR configurations become more sophisticated, along with more opportunities for exploitation through more smart devices, cyber attacks will become more sophisticated as well. Security analysts will need strong skills in AR, VR and the IoT in order to protect an organization’s valuable assets.

Cloud Engineer

Behind the scenes is the question of how these newer concepts will affect cloud services. The current expectations are that solutions will require a mixture of both in-house technology and outside sources. Cloud engineers will need to thoroughly familiarize themselves with AR and VR concepts in order to give them the necessary support.

Parsing JSON with DATA-INTO! (And, Vote for Me!)

Finally, RPG can parse JSON as easily as it can XML! This month, I’ll take a look at RPG’s brand-new DATA-INTO opcode and how you can use it today in your RPG programs. I’ve combined DATA-INTO with the free YAJL tool to let you parse JSON documents with DATA-INTO in a manner very similar to the way you’ve been using XML-INTO for XML documents.

I’ll also take a moment to ask you to vote in COMMON’s Board of Directors election. I’m running for the Board this year, alongside some other great leaders in our community. Please spread the word and vote!

Board of Directors Election

I need your vote! I am running for COMMON’s Board of Directors, and this is an elected position voted upon by the COMMON membership. Why am I running? I’ve had a love affair with the IBM i community for a long time, and since 2001, I have tried very hard to bring value through articles, free software, online help and giving talks on IBM i subjects. To me, COMMON is one of the main organizations that represents the community, and I feel that this is one more way that I can provide value and give back.

Voting is open from April 22nd to May 22nd. You can learn more about my position as well as the other candidates by clicking here.

To place your vote, you’ll need to log into Cosmo and click the “Board of Directors Election” at the top of the page.

Please vote and help me spread the word! It means a lot!

RPG’s DATA-INTO Opcode: The Concept

Years ago, IBM added the XML-INTO opcode to RPG, which greatly simplified reading XML documents by automatically loading them into a matching RPG variable. Unfortunately, it only works with XML documents, and today, JSON has become even more popular than XML. This led me and many other RPGers to ask IBM for a corresponding JSON-INTO opcode. Instead, IBM had a better idea: DATA-INTO. DATA-INTO is an opcode that can potentially read any data format (with a little help) and place it into an RPG variable. That way, if the industry decides to change to another format in the future, RPG already has it covered.

I expect that you’re thinking what I was thinking: “There are millions of data formats! No single opcode could possibly read them all!” DATA-INTO solves that problem by working together with a third-party routine called a “parser”. The parser is what reads and understands data formats such as JSON or whatever might replace it in the future. It then passes that data back to DATA-INTO as parameters, and DATA-INTO maps the data into RPG variables. Think of the parser as a plug-in for DATA-INTO. As long as you have the right plug-in (or “parser”), you can read a data format. In this article, I’ll present a parser for JSON, so you’ll be able to read JSON documents. Other people have already published parsers for the properties file format and for CSV documents. When “the next big thing” happens, you’ll only need a parser for it, and then DATA-INTO will be able to handle it.

One more way to think of it: it is similar to Open Access. Open Access provides a way to use RPG’s native file support to read any sort of data. Where traditional file access called IBM database routines under the covers, Open Access lets you provide your own routine, so you can read any sort of data and aren’t limited to only IBM’s database. DATA-INTO works the same way, except instead of replacing the logic for files, it replaces the logic for XML-INTO with a general-purpose system.

DATA-INTO Requirements

DATA-INTO is part of the ILE RPG compiler and runtime. No additional products need to be installed besides the aforementioned parser. You will need IBM i 7.2 or newer, and you’ll need the latest PTFs installed.

The PTFs required for 7.2 and 7.3 are found here.

When IBM releases future versions of the operating system (newer than 7.3), the RPG compiler will automatically include DATA-INTO support without additional PTFs.

DATA-INTO will not work with the 7.1 or earlier RPG compilers and will not work with TGTRLS(V7R1M0) or earlier specified, even if you are running it on 7.2.

My JSON parser for DATA-INTO requires the open source YAJL software. You will need a copy from April 17, 2018 (or newer) to have DATA-INTO support. It is available at no charge from my website.

DATA-INTO Syntax

DATA-INTO works almost the same as the XML-INTO opcode that you may already be familiar with. The syntax is as follows:

DATA-INTO your-result-variable %DATA(document : rpg-options) %PARSER(parser : parser-options);

-or-

DATA-INTO %HANDLER(handler : commArea) %DATA(document : rpg-options) %PARSER(parser : parser-options);

If you’re familiar with XML-INTO, you’ll notice that the syntax is identical except for the %PARSER built-in function and its parameters. Here’s a description of what each part means:

your-result-variable = An RPG variable to receive the data. Most often, this will be a data structure that is formatted the same way as the document you are reading. Fields in the document will be mapped to corresponding fields in your variable, based on the variable names. I’ll explain this in more detail below.

%DATA(document : rpg-options) = Specifies the document to be read and the options that are understood and used by RPG (as opposed to the parser) when mapping fields into your variable. The document parameter is either a character string containing the document itself or is an IFS path to where the document can be read from disk. The rpg-options parameter is a character string containing configuration options of how the data should be mapped into variables. (It is the same as the second parameter to the %XML built-in function used with XML-INTO and works the same way.) The following is a summary of those options:

  • path option = specifies a location within the JSON document to begin parsing, and lets you parse only a subset of the document if desired
  • doc option = specify doc=string if the document parameter contains the JSON document itself, or doc=file if it contains an IFS path name
  • ccsid option = controls whether RPG does its processing in Unicode or EBCDIC
  • case option = controls how strictly a variable name must match the document field names, including whether it’s case sensitive or whether special characters get converted to underscores
  • trim option = controls whether blanks and other whitespace characters are trimmed from character strings
  • allow missing option = controls whether it is okay for the document to be missing fields that are in the RPG variable
  • allow extra option = controls whether it is okay for the document to have extra fields that are not in the RPG variable
  • count prefix option = a prefix to be added for RPG fields that should contain a count of the number of matching elements (vs the data of the element)
    • For example, the number of entries loaded into an array
    • Can also be used to determine whether an optional element was/wasn’t provided

I don’t want this post to get too bogged down with the details of each option, so if you’d like to read more details, please see the ILE RPG reference manual.
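
For instance, several of these options can be combined in a single %DATA call. Here’s a sketch (the IFS path and variable name are hypothetical) that reads a JSON document from the IFS, ignores case differences, tolerates extra fields, and counts array elements into fields prefixed with “num_”:

// Sketch only: read a JSON document from a hypothetical IFS path
data-into myData
     %DATA('/home/scott/customers.json'
          : 'doc=file case=any allowextra=yes countprefix=num_')
     %PARSER('YAJLINTO');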

%PARSER(parser : parser-options) = Specifies a program or subprocedure to act as a parser (or “plugin” as I like to think of it) that will interpret the document. My parser will be a JSON parser, and it will know how to interpret a JSON document and pass its data to RPG. The parser-options parameter is a literal or variable string that’s intended to be used to configure the parser. The format of parser-options, and which options are available, is determined by the code in the parser routine.

%HANDLER(handler : commArea) = Specifies a handler routine to be used as an alternative to a variable. You use this when your data should be processed in “chunks” rather than reading the entire document at once. I consider this to be a more advanced usage of DATA-INTO (or XML-INTO) that is primarily used when it’s not possible to fit all the needed data into a variable. This was very common back in V5R4 when variables were limited to 64 KB but is not so common today. For that reason, I will not cover it in this article. If you’re interested in learning more, you can read about it in the ILE RPG Reference manual, or e-mail me to suggest this as a topic for a future blog post.

The YAJLINTO Parser

The current version of YAJL ships with a program named YAJLINTO, which is a parser that DATA-INTO can use to read JSON documents. You never need to call YAJLINTO directly. Instead you use it with the %PARSER function. Let’s look at an example!

In a production application, you’d get a JSON document from a parameter, an API, or a file. To keep this example simple, I’ve hard-coded it in my RPG program by assigning it to a character string as follows:

dcl-s json varchar(5000);

json = '{ +
     "success": true, +
     "errorMsg": "No error reported", +
     "customers":[{ +
       "name": "ACME, Inc.", +
       "address": { +
        "street": "123 Main Street", +
        "city": "Anytown", +
        "state": "WI", +
        "postal": "53129" +
       } +
     }, +
     { +
       "name": "Industrial Supply Limited.", +
       "address": { +
        "street": "1000 Green Lawn Drive", +
        "city": "Fancyville", +
        "state": "CA", +
        "postal": "95811" +
       } +
     }] +
    }';

In last month’s blog entry, I explained quite a bit about JSON and how to process it with YAJL. I won’t repeat all of the details about how it works, but just to refresh your memory, the curly braces (the { and } characters) start and end a JSON data structure. (They call them “objects”, but it is the same thing as a data structure in RPG.) The square brackets (the [ and ] characters) start and end an array.

Like XML-INTO, DATA-INTO requires that a variable is defined that exactly matches the layout of the JSON document. When RPGers write me complaining of problems with this sort of programming, the problem is almost always that their variable doesn’t quite match the document. So please take care to make them match exactly. In this case, the RPG definition should look like this:

dcl-ds myData qualified;
  success ind;
  errorMsg varchar(500);
  num_customers int(10);
  dcl-ds customers dim(50);
    name varchar(30);
    dcl-ds address;
      street varchar(30);
      city varchar(20);
      state char(2);
      postal varchar(10);
    end-ds;
  end-ds;
end-ds;

Take a moment to compare the RPG data structure to the JSON one. You’ll see that the JSON document starts and ends with curly braces and is therefore a data structure – as is the RPG. Since the JSON structure has no name, it does not matter what I name my RPG structure. So, I called it “myData”. The JSON document contains three fields named success, errorMsg and customers. The RPG code must also use these names so that they match. The customers field in JSON is an array of data structures, as is the RPG version. The address field inside that array of data structures is also a data structure, and therefore the RPG version must also be.

The one field that is different is “num_customers”. To understand that, let’s take a look at how I’m using these definitions with the DATA-INTO opcode.

data-into myData %DATA(json: 'case=any countprefix=num_')
         %PARSER('YAJLINTO');

The first parameter to DATA-INTO is the RPG variable to read the data into, in this case it is the myData data structure shown above.

The JSON data is specified using the %DATA built-in function, which receives the ‘json’ variable – the character string containing the JSON data. The second parameter to %DATA is the options for RPG to use when mapping the fields. I did not need to specify “doc=string” to get the JSON data from a variable because “doc=string” happens to be the default. I did specify two other options, however.

case=any – means that the upper/lowercase of the RPG fields do not have to match that of the JSON fields.

countprefix=num_ – means that if I code a variable starting with the prefix “num_” in my data structure, it should not come from the data, but instead, RPG should populate it with the count of elements. In this case, since I have defined “num_customers” in my data structure, RPG will count the number of elements in the “customers” array and place that count into the “num_customers” field.

That explains why the extra num_customers field is in the RPG data structure. Since customers is an array and I don’t know how many I’ll be sent, RPG can count it for me, and I can use this field to see how many I received. That’s very helpful!

Countprefix is also helpful in cases where a field may sometimes be in the JSON document and sometimes may be omitted. In that case, the count prefixed field will bypass the “allow_missing” check and allow the field to not exist without error. If the field didn’t exist in the JSON, the count will be set to zero, and my RPG code can detect it and handle it appropriately.
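
As a sketch (the data structure and field names here are hypothetical), a count-prefixed field makes it easy to handle an element that might be omitted:

// Hypothetical example: "discount" may or may not appear in the JSON
dcl-ds order qualified;
  total packed(9:2);
  num_discount int(10);   // filled in by countprefix=num_
  discount packed(9:2);
end-ds;

data-into order %DATA(json: 'case=any countprefix=num_')
         %PARSER('YAJLINTO');

if order.num_discount > 0;   // discount was present in the document
  order.total -= order.discount;
endif;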

The %PARSER function tells DATA-INTO which parser program to call. In this case, it is YAJLINTO. The %PARSER function is capable of specifying either a program or a subprocedure and can also include a library if needed.

When specifying a program as the parser, it can be in any of the following formats:

'MYPGM'
'*LIBL/MYPGM'
'MYLIB/MYPGM'

Notice that the program and library names are in capital letters. Unless you are using case-sensitive object names on your IBM i (which is very unusual), you should always put the program name in capital letters.

To specify a subprocedure in a service program, use one of the following formats:

'MYSRVPGM(mySubprocedure)'
'MYLIB/MYSRVPGM(mySubprocedure)'
'*LIBL/MYSRVPGM(mySubprocedure)'

The subprocedure name must be in parentheses to denote that it is a subprocedure rather than a program name. Like the program names above, the service program name should be in all capital letters. However, subprocedure names in ILE are case-sensitive, so be sure to match the case exactly as it is exported from the service program. Use the DSPSRVPGM command to see how the procedures are exported.
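
For example (the library and service program names here are placeholders), DSPSRVPGM with DETAIL(*PROCEXP) lists the exported procedures with their exact case:

DSPSRVPGM SRVPGM(MYLIB/MYSRVPGM) DETAIL(*PROCEXP)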

After the DATA-INTO opcode runs successfully, the myData variable will be filled in, and I can use its data in my program just as I would use any other RPG variable. For example, if I wanted to loop through the customers and display their names and cities, I could do this:

dcl-s x int(10);

for x = 1 to myData.num_customers;
  dsply myData.customers(x).name;
  dsply myData.customers(x).address.city;
endfor;

Naturally, you wouldn’t want to use the DSPLY opcode in a production program, but it’s a really easy way to try it and see that you can read the myData data structure and its subfields. Now that you have data in a normal RPG variable, you can proceed to use it in your business logic. Write it to a file if you wish, or print it, place it on a screen, whatever makes sense for your application.

Parser Options for YAJLINTO

Earlier I mentioned that %PARSER has a second parameter for “parser options.” This parameter is optional, and I didn’t use it in the above example. However, there are some options available with YAJLINTO that you might find helpful.

Unlike the standard DATA-INTO (or XML-INTO) options, YAJLINTO expects its options to be JSON formatted. I designed it this way because YAJL already understands JSON format, so it was very easy to code. But, it is also powerful. I can add new features in the future (without interfering with the existing ones) simply by adding new fields to the JSON document.

As I write this, there are three options. All of them are optional, and if not specified, the default value is used.

value_true = the value that will be placed in an RPG field when the JSON document specifies a Boolean value that is true. By default, this puts “1” in the field because it’s assumed that Booleans will be mapped to RPG indicators. You can set this option to any alternate value you’d like to map, up to 100 characters long.

value_false = value placed in an RPG field when JSON document specifies a Boolean value that is false. The default value is “0”.

value_null = value placed in an RPG variable when a JSON document specifies that a field is null. Unfortunately, DATA-INTO does not have the ability to set an RPG field’s null indicator, so a special value must be placed in the field instead. The default is “*NULL”.

For example, consider the following JSON document:

json = '{ "inspected": true, "problems_found": false, +
     "date_shipped": null }';

In this example, I prefer “yes” and “no” for the Boolean fields. It’ll say, “yes it was inspected” or “no problems were found”. The date shipped is a date-type field and therefore cannot be set to the default null value of *NULL. So, I want to map the special value of 0001-01-01 to my date. I can do that as follows:

dcl-ds status qualified;
  inspected varchar(3);
  problems_found varchar(3);
  date_shipped date;
end-ds;

data-into status %DATA(json: 'case=any')
         %PARSER('YAJLINTO' : '{ +
           "value_true": "yes", +
           "value_false": "no", +
           "value_null": "0001-01-01" +
         }');

When this code completes, the inspected field in the status data structure will be set to “yes”, and the problems_found field set to “no”. The date_shipped will be set to Jan 1, 0001 (0001-01-01.)

Debugging and Diagnostics

DATA-INTO can print trace information showing what was passed between RPG and the parser during their processing. To enable this, you’ll need to add an environment variable to the same job as the program that uses DATA-INTO. For example, you could type this:

ADDENVVAR ENVVAR(QIBM_RPG_DATA_INTO_TRACE_PARSER) VALUE('*STDOUT')

In an interactive job, this will cause the trace information to scroll up the screen as data-into runs. In a batch job, it would be sent to the spool. Information will be printed about what character sets were used and which fields and values were found in the document.

For example, in the case of the “status” data structure example in the previous section, the trace output would look like this:

---------------- Start ----------------
Data length 136 bytes
Data CCSID 13488
Converting data to UTF-8
Allocating YAJL stream parser
Parsing JSON data (yajl_parse)
No name provided for struct, assuming name of RPG variable
StartStruct
ReportName: ‘inspected’
ReportValue: ‘yes’
ReportName: ‘problems_found’
ReportValue: ‘no’
ReportName: ‘date_shipped’
ReportValue: ‘0001-01-01’
EndStruct
YAJL parser status 0 after 68 bytes
YAJL parser final status: 0
---------------- Finish ---------------

Writing Your Own Parser

When DATA-INTO runs, it loads the document into memory and then calls the parser, passing a parameter that contains the document to read as well as information about all of the options the user specified. The parser is then responsible for interpreting the document and calling subprocedures to tell DATA-INTO what was found.

Writing a parser is best done by someone who is good at systems-type coding. You will need to work with pointers, procedure pointers, CCSID conversions and system APIs. I suspect most RPGers will want to find (or maybe purchase) a third-party parser rather than write their own. For that reason, I will not teach you how to write a parser in this blog entry.

However, if you’d like to learn more about this in a future installment of Scott’s iLand, please leave a comment below. If enough people are interested, I’ll write one.

In the meantime, I suggest the following options:

The Rational Open Access: RPG Edition manual has a chapter on how to write a parser and provides an example of a properties document parser.

Jon Paris and Susan Gantner recently published some articles that explain how to use DATA-INTO as well as write a parser. They provide an example of reading a CSV file.

Attending POWERUp18? Be sure to check out Scott’s sessions.

Interested in learning more about DATA-INTO? Attend this session from Barbara Morris.

See you in San Antonio!

RPG and YAJL, as Well as DATA-INTO News

This month, I’ll give you a quick update about RPG’s new DATA-INTO opcode and then focus on the basics of how you can host your own custom REST web service API written in RPG with the help of the standard IBM HTTP Server (powered by Apache) and the YAJL JSON library.

DATA-INTO News

IBM has posted documentation online for the new DATA-INTO opcode, and they promise that PTFs will be available on March 19th, 2018. That means that by the time this blog entry goes live, it’ll be available for you to install and try.

Unfortunately, I’m writing this before the PTFs are available, so I haven’t been able to try it yet. I hope to do so in time for next month’s blog.

Providing a Web Service with RPG and YAJL

I’ve been creating quite a few JSON-based REST APIs using the YAJL toolkit lately, and with DATA-INTO on the horizon, I thought it’d be a good time to give you an example. If you’re not familiar with YAJL, it is a free tool for reading and writing JSON documents. Since YAJL is written in C, I was able to compile it as a native ILE C service program and create an RPG front-end for it that makes it both easy to use and extremely efficient.

Of course, IBM provides us with the wonderful Integrated Web Services (IWS) tool that can do both REST and SOAP APIs using both JSON and XML. The advantage that IWS has over YAJL is that IWS does most of the work for you, simplifying things tremendously. On the other hand, YAJL is faster and lets you deal with much more complex documents. For the work that I do, I usually find YAJL works better.

Prerequisites

Before you can use the techniques that I describe in this post, you’ll need to have the following items installed:

  • IBM HTTP Server (powered by Apache)
    • This is available on the OS installation media as licensed program option 57xx-DG1 – you’ll want to see the IBM Knowledge Center for documentation about how to install it and its prerequisites
  • YAJL (Yet Another JSON Library)
  • ILE RPG compiler at version 6.1 or newer

Initial Setup

I recommend that you create a new library in which to put your programs that implement REST APIs. It’s certainly possible to put them in an existing library, but using a separate one helps with security. If you set up the HTTP server to only access the new library, you won’t need to worry about someone trying to use your server to run programs that they shouldn’t.

In my example, I will create a library named YAJLREST.

CRTLIB LIB(YAJLREST)

You’ll also need to set up the IBM HTTP server to point to your new library. This is best done using the HTTP Administration option in the IBM Navigator for i.

  1. If it’s not already started, start IBM Navigator for i by typing: STRTCPSVR *HTTP HTTPSVR(*ADMIN)
  2. Log in to Navigator in your web browser at http://your-system:2001
  3. Sign in with a user id/password that has authority to change the configuration
  4. Expand: IBM i Management (if not already expanded)
  5. Click “Internet Configurations”
  6. Click “IBM Web Administration for i”
  7. Sign in again (sigh – I hope IBM changes that so signing in twice isn’t needed)
  8. Click on the “HTTP Servers” tab
  9. Click “Create HTTP Server”
    • NOTE: do not use Create Web Services server, that is for IWS only
  10. Give your server a name and a description
    • The name can be any valid IBM i object name – I’ll use YAJLSERVER
  11. Click Next
  12. Accept the default server root by clicking “Next”
  13. Accept the default document root by clicking “Next”
  14. Keep the default of “All IP Addresses”, but change the port number to one that you’re not using for anything else
    • In my example, I will use 12345
  15. Click “Next”
  16. For the “access log”, I typically turn it off unless I want to audit the use of my server
    • To do this, click “no” and then “Next”
  17. When it asks how long to keep the logs (meaning the error logs, since we just turned off the access logs) I take the default of 7 days
  18. Click “Next”.
  19. It will then show a summary screen – Click “Finish”

You have now created a basic HTTP server using the IBM wizard, and it will be placed on the configuration page for your new server. Click “Edit Configuration File” (on the left, towards the bottom) to make changes to the configuration.

The basic HTTP server provides access to download HTML, images, etc. from the IFS. Since that won’t be needed in a web service, I recommend deleting the following section of configuration (simply highlight the section and press the delete key):

<Directory /www/yajlrest/htdocs>
   Require all granted
</Directory>

To enable access to the web services in your library, add the following configuration directives to the end of the file:

ScriptAliasMatch /rest/([a-z0-9]+)/.* /qsys.lib/yajlrest.lib/$1.pgm
<Directory /qsys.lib/yajlrest.lib>
   SetEnv QIBM_CGI_LIBRARY_LIST "YAJL;MYDATA;MYSRV;YAJLREST"
   Require all granted
</Directory>

The ScriptAliasMatch directive maps a URL to a program call. In this case, a URL beginning with /rest/ will call a program in the YAJLREST library. The part in parentheses says that the program name must be made up of the letters a-z or the digits 0-9, and must be at least one character long. This is a regular expression, and you can replace it with a different one if you have particular needs for the program name. Whatever part of the URL matches the part in parentheses is used to replace the $1 in the program name, so you can call any program in the YAJLREST library this way. The phrase “Require all granted” allows everyone access to the YAJLREST library and is appropriate for an API that is available to the public.
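To see how that regular expression maps a URL onto a program name, here’s a small sketch in Python (used purely for illustration – the actual matching is done by the HTTP server):

```python
import re

# Same pattern as in the ScriptAliasMatch directive; the parenthesized
# group captures the program name.
pattern = re.compile(r"/rest/([a-z0-9]+)/.*")

def program_for(url):
    """Return the *.pgm path the HTTP server would call, or None."""
    m = pattern.match(url)
    if m is None:
        return None
    # $1 in the directive is replaced by the captured group
    return "/qsys.lib/yajlrest.lib/" + m.group(1) + ".pgm"

program_for("/rest/custdetail/1500")  # "/qsys.lib/yajlrest.lib/custdetail.pgm"
```

Note that the trailing /.* also matches an empty string, which is why a URL ending in /rest/custdetail/ (with no customer number) still reaches the program.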

The QIBM_CGI_LIBRARY_LIST environment variable controls the library list that your program will run with. In this case, it is adding the YAJL, MYDATA, MYSRV and YAJLREST libraries to the library list. MYDATA and MYSRV are meant as placeholders for your own libraries. YAJL is the default library that the YAJL tool will be installed into, and YAJLREST is where your program will be. Make sure you replace these with the appropriate libraries for your environment.

What if you don’t want your API to be available to the public? One way is to ask the IBM HTTP server to require a user id and password, as follows:

ScriptAliasMatch /rest/([a-z0-9]+)/.* /qsys.lib/yajlrest.lib/$1.pgm
<Directory /qsys.lib/yajlrest.lib>
   SetEnv QIBM_CGI_LIBRARY_LIST "YAJL;MYDATA;MYSRV;YAJLREST"
   Require valid-user
   AuthType basic
   AuthName "REST APIs"
   PasswdFile %%SYSTEM%%
   UserId %%CLIENT%%
</Directory>

Some of these options are the same as in the previous example. What’s different is that it no longer grants access to everyone (“all granted”) but instead requires a valid user. Which users are valid is based on the system’s password file (that is, your user profiles). When your RPG program runs, it will run under the user id supplied by the client – the same user id and password that were checked against the user profiles. The AuthName phrase “REST APIs” is displayed in the client’s sign-on prompt to tell the user what he or she is signing on to.
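Under the hood, basic authentication just means the client sends a base64-encoded user id and password in an Authorization header. A quick Python illustration (the user id and password here are made up):

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header a client sends for AuthType basic."""
    token = base64.b64encode((user + ":" + password).encode("utf-8"))
    return "Basic " + token.decode("ascii")

# The HTTP server decodes this and validates it against the
# system's user profiles (PasswdFile %%SYSTEM%%).
hdr = basic_auth_header("MYUSER", "secret")
```

Notice that base64 is an encoding, not encryption – anyone who can see the traffic can recover the password, which is exactly why SSL is recommended with this option.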

If you use this sign-on option, I recommend also setting up SSL to prevent people’s passwords from being sent over the Internet in plain text.

There are many variations on the way users are authorized and who is allowed to access your setup. This is just one common example. To learn more about the different options, and to learn how to configure SSL, I recommend reading more in the IBM Knowledge Center. If you get really stuck, e-mail me at commonblog@scottklement.com.

Once your configuration is set up, click the “start” button to start the server. The start button is near the top of the screen, colored green and looks similar to the play button you’d find on a tape or CD player. You now have an HTTP server that will launch your RPG programs. The next step is to write the web service itself using RPG and YAJL.

The Basics of REST

When I design a REST web service (also known as REST API), I try to think of the URL as being a unique identifier for the business object that I’m working with. For example, when working with an invoice, the URL might represent a specific invoice by ending with the invoice number. Or, likewise, when working with customer information, the URL might represent a unique customer by ending with their customer number.

Once I’ve decided upon what the URL identifies, I think of what types of things I might want to do with the data. The REST paradigm assumes that you will use some (or all) of the following HTTP methods:

GET = Used to retrieve data identified by the URL

PUT = Used to set data identified by the URL in an idempotent way

POST = Used to set data identified by the URL in a non-idempotent way

DELETE = Remove the data identified by the URL

The term “idempotent” means that multiple calls will result in the same thing. For example, if I set X=1 that is idempotent. I can set X=1 once or 10 times; the result will still be 1. On the other hand, if I coded X=X+1 and ran it once, X would be 1, but if I ran it 10 times, it would be 10. Since multiple calls do not result in the same value, X=X+1 would be considered a non-idempotent statement.

Once I’ve thought about which of these HTTP methods should be supported by my service, I decide how the data will be passed to it. I do that by figuring out a JSON format for the data that is sent to my API as input, as well as another JSON document that’s returned with the output.

Let’s take a look at an example:

The Customer Details Example

To keep this example of a REST API easy to understand, I will use customer details. The idea is that the URL will not only tell the HTTP server which program to call, but will also identify a unique customer by its customer number. With that in mind, the URL will look like this:

http://our-system:12345/rest/custdetail/1500

Please note:

  • If using SSL, the “http:” would be replaced with “https:”
  • The 1500 at the end is an example of a customer number
  • When using features that don’t require an existing customer (such as adding a new one), the customer number should not be added
    • In that case the URL would be http://our-system:12345/rest/custdetail/
  • Since our URL contains “custdetail” after the /rest/ part of the URL, the ScriptAlias we configured will effectively do the same thing as CALL PGM(YAJLREST/CUSTDETAIL)

In my example, I want the API to be able to retrieve a list of customer information, retrieve the details of a specific customer, update a customer, create a new customer and delete old customers. For that reason, I will use the following HTTP methods:

  • GET = If the URL represents a unique customer, GET will return the details for that customer
    • If it does not reference a specific customer, it will instead return a full list of the customers available
  • PUT = Set customer information in an idempotent way
    • This will be used with an existing customer to set details such as its address – it cannot be used to add new customers since that would be non-idempotent.
  • POST = Add a new customer
    • This creates a new customer – since multiple calls to the URL would result in multiple customer records being created, this is non-idempotent.
  • DELETE = Remove customer details

One easy way to understand these is to think of them the same way you would think of database operations. GET is like SELECT/FETCH/READ, PUT is like UPDATE, POST is like INSERT/WRITE and DELETE is like DELETE. This isn’t a perfect analogy since it’s possible for an update database operation to be non-idempotent, but aside from that detail, they are very similar.

Since we’ve now determined what the URL represents, and what the methods will do, the other important idea is to determine the format of the data that will be sent or received.  For this example, I chose this format:

{
   "success": true,
   "errorMsg": "Error message goes here when success=false",
   "data": {
      "custno": 496,
      "name": "Acme Foods",
      "address": {
         "street": "123 Main Street",
         "city": "Boca Raton",
         "state": "FL",
         "postal": "12345-6789"
      }
   }
}

When a GET operation is done with a URL that identifies a unique customer, the above JSON document will be returned with the customer details (or an error message if there was an error.)

When using GET to list all customers, the same format will be used except that the “data” element will be converted into an array so that multiple customers can be returned at once.

When using the POST or PUT method, this document will be sent from the caller to represent the new values that the customer details should be set to. The POST and PUT operations will also return a document in the same format to show the caller what the customer details look like after the changes have been made.

When using the DELETE method, the row will be deleted. However, we will still use this document to return what the customer data was before the DELETE and also as a way to send errors, if any were found.
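From the caller’s side, handling this response document (in any language) boils down to checking “success” before touching “data”. A hypothetical Python consumer of this format might look like this (the values are made up):

```python
import json

# A response in the format described above
response = json.loads("""
{
   "success": true,
   "errorMsg": "",
   "data": {
      "custno": 496,
      "name": "Acme Foods",
      "address": { "street": "123 Main Street", "city": "Boca Raton",
                   "state": "FL", "postal": "12345-6789" }
   }
}
""")

if response["success"]:
    name = response["data"]["name"]   # use the customer details
else:
    raise RuntimeError(response["errorMsg"])
```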

Coding the Example in RPG

Now that I’ve decided what the service will do, it’s time to code it!

The first thing my RPG program needs to know is whether GET, PUT, POST or DELETE was used. It can retrieve that by getting the REQUEST_METHOD environment variable. The IBM HTTP server will always set that variable to let us know which HTTP method was used. The RPG code to retrieve it looks like this:

   dcl-c UPPER const('ABCDEFGHIJKLMNOPQRSTUVWXYZ');
   dcl-c lower const('abcdefghijklmnopqrstuvwxyz');

   dcl-s env pointer;
   dcl-s method varchar(10);

   dcl-pr getenv pointer extproc(*dclcase);
      var pointer value options(*string);
   end-pr;

   .
   .

   env = getenv('REQUEST_METHOD');
   if env <> *null;
      method = %xlate(lower: UPPER: %str(env));
   endif;

The getenv() routine returns a pointer to a C-style string, so I use the %STR built-in function to convert it to an RPG varchar variable. I’m also using the %XLATE built-in function to convert from lowercase to uppercase to ensure that the data will always be uppercase when I use it later in the code.

The next thing it will need is to get the customer number out of the URL (if any was supplied). The HTTP server provides the URL in an environment variable named REQUEST_URI that can be retrieved with the following code:

   dcl-s url varchar(1000);

   env = getenv('REQUEST_URI');
   if env <> *null;
      url = %xlate(UPPER: lower: %str(env));
   endif;

The result is a variable named “url” that I’ve converted to all lowercase. From that URL, I can retrieve the customer number by looking for it after the string “/rest/custdetail/”. Since I know “/rest/custdetail/” will always be in the URL (unless something went wrong), I can simply scan for it and use that to find the position where the customer number begins. If there is no customer number, I’ll set my internal customer id to 0, and code later in the program will use that to determine whether a customer number was provided.

   dcl-c REQUIRED_PART const('/rest/custdetail/');
   dcl-s pos int(10);
   dcl-s custpart varchar(50);
   dcl-s custid packed(5: 0);

   monitor;
      pos = %scan(REQUIRED_PART:url) + %len(REQUIRED_PART);
      custpart = %subst(url: pos);
      custid = %int(custpart);
   on-error;
      custid = 0;
   endmon;
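For comparison, the same scan-and-convert logic (including falling back to 0 when the conversion fails) can be sketched in Python – purely illustrative, not part of the actual program:

```python
REQUIRED_PART = "/rest/custdetail/"

def customer_id(url):
    """Extract the customer number from the URL, or 0 if none is present."""
    try:
        pos = url.index(REQUIRED_PART) + len(REQUIRED_PART)
        return int(url[pos:])
    except ValueError:   # prefix missing, or what follows isn't numeric
        return 0

customer_id("/rest/custdetail/1500")   # 1500
customer_id("/rest/custdetail/")       # 0
```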

Now that I know the method and customer number, I can choose what to do with it. The code to handle GET is different from the code to handle PUT, for example, so I will select which section of code to run using a SELECT/WHEN group.

   select;
   when method = 'GET' and custid = 0;
      // code to list all customers
   when method = 'GET';
      // code to retrieve one customer
   when method = 'PUT';
      // code to update a customer (idempotent)
   when method = 'POST';
      // code to write a new customer (non-idempotent)
   when method = 'DELETE';
      // code to delete a customer
   endsl;

I will not walk through the code for each of these options in this blog post, because most of it is just standard RPG code that reads and writes data from the database. If you want to see the program in its entirety, I provide a link at the end of the article that you can use to download all of the source code.

Working with JSON Using YAJL

The other part of the RPG code that I’d like to explain in this article is the code that works with JSON data. Although the DATA-INTO opcode may make this simpler in the future, as I write this, I haven’t had a chance to try it out yet. What I can show you is the “old fashioned” way of using YAJL to read and write JSON data in an RPG program.

Although there are other ways to read JSON, I recommend using the “tree” style approach. I’ve found this to be the easiest, and in my RPG tools for YAJL, I provide a routine called yajl_stdin_load_tree that gets the data from the HTTP server and loads it into a YAJL tree in one subprocedure call.

   dcl-s docNode like(yajl_val);
   dcl-s errMsg varchar(500);

   docNode = yajl_stdin_load_tree(*on: errMsg);
   if errMsg <> '';
      // JSON was invalid, return an error message
   endif;

Now the JSON data has been loaded into a “tree-like” structure that is inside YAJL. The docNode variable is a pointer that points to the “document node”, which is a fancy way of saying that it points to the { and } characters (the JSON object element) that surround the entire document. I can use the yajl_object_find procedure to locate a particular field inside that object and return a pointer to it. For example:

   dcl-s node like(yajl_val);

   node = yajl_object_find(docNode: 'errorMsg');
   if node = *null;
      cust.errorMsg = '';
   else;
      cust.errorMsg = yajl_get_string(node);
   endif;

In this example, I retrieve a pointer to the value of the “errorMsg” field that is inside the JSON object. If “errorMsg” was not found, the pointer will be set to *NULL, in which case I know there wasn’t an error message sent. If it was sent, I can use the yajl_get_string subprocedure to get an RPG character string from the pointer.

In this case, the errorMsg field will be a character string, but other JSON fields might be numeric or boolean. I can retrieve these by calling other YAJL routines, such as yajl_get_number, yajl_is_true, and yajl_is_false in place of the yajl_get_string in the preceding example.

When a subfield has been set to an object or array, there are several different options to process it. I’ve already shown an example of yajl_object_find to process a single field inside of an object, but in addition to that, there are the following routines:

  • yajl_object_loop = loops through all of the fields in an object, one at a time
  • yajl_array_loop = loops through all of the elements in an array, one at a time
  • yajl_array_elem = get a particular array element by its index

Since this article is already very long, I will not provide examples of these in the post. Instead I will refer you to the link at the end of the article where you can download the complete example.

Once you’ve read your JSON data into your RPG program, you’ll want to ask YAJL to free up the memory that it was using. Remember, YAJL loaded the entire document into a tree structure, and that is taking up storage on your system. It’s very easy to free that storage by calling yajl_tree_free and passing the original document node.

   yajl_tree_free(docNode);

YAJL will also be used to generate the JSON document that is sent back. For example, when you ask for the details of a customer, it uses YAJL to generate the response. The code in RPG looks like this:

   yajl_genOpen(*on);
   yajl_beginObj();

   yajl_addBool('success': cust.success);
   yajl_addChar('errorMsg': cust.errorMsg);

   if cust.success = *on;

      yajl_beginObj('data');

      yajl_addNum('custno': %char(cust.data.custno));
      yajl_addChar('name': %trim(cust.data.name));

      yajl_beginObj('address');
      yajl_addChar('street': %trim(cust.data.address.street));
      yajl_addChar('city':   %trim(cust.data.address.city));
      yajl_addChar('state':  %trim(cust.data.address.state));
      yajl_addChar('postal': %trim(cust.data.address.postal));
      yajl_endObj();

      yajl_endObj();

   endif;

   yajl_endObj();

   if cust.success;
      yajl_writeStdout(200: errMsg);
   else;
      yajl_writeStdout(500: errMsg);
   endif;

   yajl_genClose();

Prior to running this routine, the RPG program has loaded all the customer information into the “cust” data structure. The preceding routine uses that database data together with YAJL’s generator routines to create a JSON document.

The yajl_genOpen procedure starts the YAJL generator. It accepts one parameter, which is an indicator that determines whether the generated document is “pretty” or not. When the indicator is on, it will format the data with line feeds and tabs so that it’s easy for a human being to read. When the indicator is off, it will use the smallest number of bytes possible by removing line feeds, tabs, and extra spacing, so that the entire JSON document is one big line of text. The “not pretty” version is a little more efficient for the computer to process, but since it is harder to read, I typically pass *ON while testing my program and change it to *OFF for production use.
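The pretty/compact trade-off is the same in any JSON generator. For example, Python’s json module makes the size difference easy to see (illustration only; YAJL’s exact formatting differs):

```python
import json

doc = {"success": True, "data": {"custno": 496, "name": "Acme Foods"}}

pretty = json.dumps(doc, indent=2)                # human-readable, like *ON
compact = json.dumps(doc, separators=(",", ":"))  # smallest output, like *OFF

# Both forms carry identical data; compact just omits the whitespace,
# so it is smaller on every request.
len(compact) < len(pretty)   # True
```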

When the JSON document calls for a new object (identified by the { and } characters), you can create it by calling yajl_beginObj. This will output the { character as well as make any subsequent fields be subfields of the new object. When you’re done creating the object, you can use yajl_endObj to output the } character and end the object.

There is a similar set of routines named yajl_beginArray and yajl_endArray that create a JSON array (vs. an object) that I did not use in this example but are useful when an array is called for.

Adding a field to an object or array is done by calling the yajl_addNum, yajl_addChar and yajl_addBool routines for numeric, character and boolean JSON fields, respectively. These routines not only add the fields, but they take care of escaping any special characters for you so that they conform to the JSON standard.
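Escaping matters more than it might first appear: a customer name containing a quote or backslash would otherwise produce invalid JSON. Any conforming generator handles this the same way; for instance, in Python:

```python
import json

# A value containing characters that are special in JSON
name = 'Acme "Quality" Foods \\ Inc.'

encoded = json.dumps({"name": name})
# The generator escapes the quotes and backslash, so the document
# stays valid and round-trips back to the original value.
decoded = json.loads(encoded)["name"]
```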

A really good way to understand the generator is to compare the expected JSON document (as I described in the section titled “The Customer Details Example” above) to the statements that generate it. You’ll see that each time a new object is needed, it calls yajl_beginObj, and each time a field is added to an object, it calls the appropriate “add” routine for the data type. These statements all correspond exactly to the JSON data that is output.

Once I’ve called the YAJL routines to add the data, I’ll have a JSON document, but it will be stored internally inside YAJL’s memory. To write it out to the HTTP server (which, in turn, will send it on to the program that is consuming our REST API), I use the yajl_writeStdout routine. This routine lets me provide an HTTP status code and an error message as a parameter. I recommend using a status of 200 to indicate that everything ran correctly or 500 to indicate that an error occurred.

Finally, once the JSON document has been sent, I call yajl_genClose to free up the memory that was used by the JSON document that YAJL generated.

Conclusion

YAJL is a powerful way to write JSON-based REST APIs. Although some of the examples may seem a little confusing at first, once you’ve successfully written your first one, you’ll get the hang of it quickly. Then it’ll be no problem to write additional ones.

You can learn more about YAJL and download the code samples that I mention in this post from my website at:

www.scottklement.com/yajl

If you’ll be at COMMON’s POWERUp18 conference, be sure to check out my session, Providing Web Services on IBM i, where I will demonstrate these techniques as well as other ways to provide your own web services in RPG.

How Blockchain Is Relevant to Modern Business

It’s a digital world, and we’re all throwing around terms like “bitcoin” and “cloud computing” without really understanding what’s going on behind the scenes. But these background details are exactly what’s revolutionizing modern industry. Blockchain is one of these critical background features.

What is Blockchain?

Blockchain is a technology that has arrived to shake up digital records as we know them. A type of distributed database, a blockchain stores each record, transaction or dataset in a block, links each block to the one before it, and replicates the whole chain across a peer-to-peer network. Because each block is dependent on the one before it, no block can be retroactively edited without rewriting the rest of the chain – a practically impossible task.
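The “each block depends on the one before it” idea can be sketched in a few lines: every block stores a hash of its predecessor, so changing any historical record changes every hash after it. A toy illustration in Python (real blockchains add consensus, signatures and proof-of-work on top of this):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash covers its data AND its predecessor's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block("first record", "0" * 64)
second = make_block("second record", genesis["hash"])

# Retroactively editing the first record yields a different hash,
# which no longer matches what the second block recorded:
forged = make_block("FORGED record", "0" * 64)
forged["hash"] != second["prev"]   # True: the tampering is detectable
```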

This is critical because the ledger is both public and tamper-resistant, making it well suited to storing medical records, monetary transactions or account details. Bitcoin was the first system to successfully integrate a blockchain, triumphantly solving the double-spending problem (in which a digital currency file is duplicated and spent more than once).

Blockchain Revolution

Currently, the most promising applications of the blockchain are finance applications such as digital wallets and identities. Banks and digital transaction providers would benefit from cutting out middle men, and users would place greater trust in a system that can’t be corrupted.

The beauty of blockchain lies in this: imagine a stock payment. The money can change hands within seconds, but the actual ownership takes longer to determine since the two parties are unable to access each other’s ledger and must instead rely on a middle man to confirm the existence of the stock and update the individual ledgers. But with blockchain, each involved party is part of a larger ledger and confirms ownership immediately. It opens the door to a world of possibilities.


IBM is one of many companies stepping into the world of public ledgers. It currently offers the ability to form an IBM Blockchain network and create ways to take advantage of its offerings through business solutions.
