Cognitive Computing Overview

The field of cognitive computing continues to grow rapidly, best represented by IBM’s Watson platform. Cognitive computing includes elements of artificial intelligence (AI), machine learning and signal processing. It generally requires a large amount of processing power and software with learning capabilities. There are a number of important considerations about this technology.

Uses

Cognitive computing is used for a variety of purposes. One popular use that is spreading rapidly is natural language chatbots and virtual assistants. These learn natural ways to respond to questions and find the best possible answer when humans ask them. Another important use is cyber threat detection, where AI software learns rapidly evolving hacking techniques and proactively takes steps to defend the network without any manual intervention. There are a number of other uses in medicine, finance, law and other fields.

New Solutions and Products

Cognitive solutions continue to evolve and transform. Companies like IBM, Amazon, Apple, HP, Google, Microsoft and others are creating powerful new features. The most common use is natural language processing. This has evolved into products like Amazon’s Echo, Microsoft’s Cortana, Apple’s Siri, Google’s Home and others. These devices and software have a deep database that is constantly evolving based on the input from tens of millions of users. Over time, they become more sophisticated and useful at answering questions, providing services and organizing information.

What’s Next

This technology will continue to make more significant advances. Right now, scientists are using cognitive computing to try to predict groundbreaking medical formulations. This will hopefully radically reduce the time and money required to create a new drug to cure disease. Physicists are also using this technology to compress the huge amounts of data gathered from observing the universe. They are hoping the AI can draw insights into the origins of galaxies, the composition of dark matter and many other questions.

Stop Limiting Yourself to 10 Character Field Names in RPG Files

At POWERUp18 this May, someone asked me “When will RPG’s native file access support long names from databases and display files?” The answer is: it already does and has for quite some time! Originally it required reading into data structures, but that’s not even true anymore. Today, RPG’s native file access fully supports long names!

Since having that conversation, I’ve been asking around. It seems that not many RPGers know about this very useful feature. So in today’s post, I’ll show you how it works.

What I Found Out When Asking Around

I didn’t do any sort of official survey, and these statistics weren’t gathered scientifically, but…while I was informally asking around, here’s what I discovered:

  • “Everyone” knows you can create column (field) names longer than 10 characters using SQL
  • “Everyone” knows you can read, insert, update and delete using long field names in SQL
  • About half of the people know that DDS also supports long field names
  • About 25% thought RPG could use long names if you use data structures for I/O
  • Only a few people (maybe 5%) knew RPG supports long names without data structures

We can do it with or without data structures! The data structure support has been around for a long time, and perhaps that’s why more people are familiar with it. (But, it can be awkward to code.)

The ability to use long names without data structures was added in 2014. If you have the latest PTFs for RPG on IBM i 7.1 or 7.2, then you have this support already. It was also included in the RPG compiler that was shipped with IBM i 7.3 in 2016. If you’re running 7.3, PTFs aren’t even needed.

Defining Tables with Long Names in SQL

Defining a field name longer than 10 characters is very natural in SQL. You simply define your table with a statement like this:

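Here’s a sketch of such a statement (the CUSTMAS table and its column names are just illustrative):

CREATE TABLE CUSTMAS
( CUSTOMER_NUMBER NUMERIC(7, 0) NOT NULL PRIMARY KEY,
  COMPANY_NAME    VARCHAR(30)   NOT NULL,
  LAST_PAID_DATE  DATE )
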
What may not be obvious, however, is that when a field name is longer than 10 characters, there are actually two names assigned to it: the regular “column name” and what SQL refers to as the “system name”, which is limited to 10 characters for compatibility with older tools like CPYF, OPM languages or CL programs. In the above example, the system will generate names like CUSTO00001 for CUSTOMER_NUMBER and COMPA00001 for COMPANY_NAME.

Since there are still a lot of places where you might want to use the shorter system names, I recommend explicitly naming them. That way your short names can be meaningful. For example:

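A sketch of the same table with explicit system names might look like this. The FOR COLUMN clause assigns the short name, and the (optional) RCDFMT clause gives the record format a name different from the file name, which is friendly to native I/O:

CREATE TABLE CUSTMAS
( CUSTOMER_NUMBER FOR COLUMN CUSTNO   NUMERIC(7, 0) NOT NULL PRIMARY KEY,
  COMPANY_NAME    FOR COLUMN CMPNAME  VARCHAR(30)   NOT NULL,
  LAST_PAID_DATE  FOR COLUMN LASTPAID DATE )
  RCDFMT CUSTREC
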
When you are working on modernizing your databases, this dual-name approach is really handy because it lets you keep the old, short (and often ugly) names that you’ve always used, keeping the files compatible with older programs. New programs can use the longer names and be more readable. Eventually, you can rewrite all the programs to use the long names, so this is a great tool for modernizing!

Long Names in DDS (Including Displays and Printers!)

DDS also supports long names using the ALIAS keyword. The name “alias” makes sense if you think about it, since any time you have a field longer than 10 characters, there are two names and either can be used. In other words, they are aliases for each other.

The DDS equivalent would be coded like this:

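Roughly, using the same illustrative fields, it might look like this (the short names are the DDS field names, and the ALIAS keyword supplies the long ones):

     A          R CUSTREC
     A            CUSTNO         7P 0       ALIAS(CUSTOMER_NUMBER)
     A            CMPNAME       30A         ALIAS(COMPANY_NAME)
     A            LASTPAID        L         ALIAS(LAST_PAID_DATE)
     A          K CUSTNO
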
Mind you, I don’t recommend defining your new physical files using DDS, but for existing files, it may be easier to simply add the ALIAS keyword than to re-do the file using SQL.

Another reason it’s important to know about the ALIAS keyword in DDS is that it’s also supported in externally described display, printer and even ICF files. Since all of these types of files support long names and have a great mechanism for backward compatibility with programs that must use short names, there’s no reason not to take advantage of this!

Long Names in RPG Programs

Like DDS, RPG uses the term “alias” when referring to long names. Starting with IBM i 7.1, RPG was able to use this long name support in qualified external data structures. This original support also required you to qualify the record format names (by using the QUALIFIED keyword on the F-spec) because without this keyword, RPG generates input specs. At the time 7.1 was released, I-specs did not support long names.

This support that shipped with 7.1 could be somewhat awkward. For example, the code to update the last paid date for customer 1000 would look like this:

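Here’s a rough sketch of that, using the illustrative CUSTMAS file from above (your own file and field names would differ). Notice the QUALIFIED keyword on the F-spec, the ALIAS keyword on the externally described data structures, and the separate structures needed for input and output:

     FCUSTMAS   UF   E           K DISK     QUALIFIED

     D custIn        E DS                   EXTNAME('CUSTMAS')
     D                                      ALIAS QUALIFIED
     D custOut       E DS                   EXTNAME('CUSTMAS': *OUTPUT)
     D                                      ALIAS QUALIFIED

      /free
         chain 1000 CUSTMAS.CUSTREC custIn;
         if %found(CUSTMAS);
            eval-corr custOut = custIn;
            custOut.LAST_PAID_DATE = %date();
            update CUSTMAS.CUSTREC custOut;
         endif;
      /end-free
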
I found this awkward because you had to use separate data structures for input and output (though, a PTF was released later that relaxed that rule) and you had to qualify both the field and record format names. I don’t know if everyone agrees, but I found that cumbersome!

The 2014 update added full support for long names, without any need to use data structures at all! So, if you’re up-to-date on PTFs and are running IBM i 7.1 or newer (which you really should be), you can do this instead:

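Sketching again with the illustrative CUSTMAS file, the same update gets much simpler. The ALIAS keyword on the F-spec tells RPG to use the long field names, and no data structures are needed at all:

     FCUSTMAS   UF   E           K DISK     ALIAS

      /free
         chain 1000 CUSTREC;
         if %found(CUSTMAS);
            LAST_PAID_DATE = %date();
            update CUSTREC;
         endif;
      /end-free
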
Naturally, the alias keyword works in free format code as well. Here’s an example:

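Here’s a sketch of the same update in fully free format, again with the illustrative CUSTMAS file:

dcl-f CUSTMAS usage(*update) keyed alias;

chain 1000 CUSTREC;
if %found(CUSTMAS);
  LAST_PAID_DATE = %date();
  update CUSTREC;
endif;
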
Don’t forget that alias support extends to display files and printer files, too!  There’s no reason to use short cryptic field names anymore. Take advantage of the long ones. You’ll love it!

Artificial Intelligence Is Creating More Jobs

Contrary to popular belief, artificial intelligence (AI) is actually creating many new jobs. According to Forbes, eighty percent of companies said that AI led to the creation of new jobs. Although this obviously varies based on industry, and some industries will be adversely affected, two-thirds of respondents said that AI did not lead to a decrease in jobs.

So which jobs are being created by AI? Following are just some of them, according to the MIT Sloan Management Review.

Trainers will be needed to teach artificial intelligence how to operate. This is especially important when it comes to customer service. Empathy trainers will be needed to teach Alexa and other chatbots how to respond with the proper empathy in certain situations. Explainers will also be needed to help explain to nontechnical business leaders how artificial intelligence operates and how it can be used.

Of course, there will also be a need for sustainers. Humans will be needed to make sure that artificial intelligence is working the way it is supposed to be and to make the necessary changes when needed. Many businesses do not have full confidence in letting artificial intelligence run on its own course without any monitoring.

In addition, according to the CEO of Microsoft, artificial intelligence can help people with certain disabilities do more jobs than they were previously able to.

Of course, there will also be a need for specific professionals to operate AI, including engineers, as well as copywriters to write the scripts for chatbots.

The Internet of Things and New Information Sources


The Internet of Things will only become more relevant in the near future. It is a phenomenon that is influencing the flow of information and the manner in which information moves.

Moving Beyond Traditional Information Networks

People are very used to the idea that they have to look for information in specific places, and that information is only found within databases or through the Internet.

Many users will only associate a small number of objects and services with information collection and retrieval. In the era of the Internet of Things, this will be very different.

Broad Information Gathering

Today, seemingly random objects are equipped with the sorts of embedded sensors that can make them devices capable of gathering information. Of course, what’s really bringing the Internet of Things to life is the fact that all of these physical devices can share information through conventional wireless networks.

Micro-cameras, lights, pacemakers, thermostats, billboards, and appliances are now capable of being part of the Internet of Things. If these devices have the appropriate sensors, they can share all of the information that they have absorbed, in part simply by being in the right location at the right time.

Volume of Information

Since so many objects are part of the Internet of Things, data gathering is possible on a level that many experts could never have imagined. Objects with their own sensors can more or less gather information independently and with a consistency that would be impractical otherwise.

This means that experts will ultimately have much more information to analyze and to understand. The fact that it is now possible for them to gather data from a distance and from multiple sources truly makes all the difference in the world.

Disaster Recovery and Preparing For the Worst

Disaster recovery is largely about preparation. If the correct procedures and protocols are not in place, information technology departments will find themselves losing a lot of data. All information technology departments need to set priorities when it comes to disaster recovery procedures, which will help them solve these problems with efficiency.

Making Sure All Employees Are Prepared

Information technology departments are often large enough that many different people will be involved in the disaster recovery procedures. All employees need to know in advance what they personally need to do in the event of a major disaster.

Disaster recovery needs to be part of their training right from the start, but it should also be part of the training that they receive as long-term employees.

Disaster Recovery Specialization

It is true that many employees will be involved in the disaster recovery procedures themselves. However, those procedures need to be established in advance by a committee that includes disaster recovery specialists.

Disaster recovery is complex enough in the modern world that it is possible to be an information technology professional who primarily focuses on it. People like this need to be involved in the planning stage.

Focusing On Certain Functions

Some functions will be more important than others in different organizations. Concentrating on the most crucial functions first will create the best results. This can be complicated, since some functions might be particularly important at certain points during the year and less important at other times.

As such, the specific actions of a disaster recovery team will actually vary according to the month. When the team knows all these details in advance, the results will be that much better.

Find disaster recovery and high availability videos in the COMMON Webcast Library.

IT in the Insurance Industry Now

The effects of IT in the insurance industry have been very broad. People who interact with the insurance industry at all levels will see how it has been influenced by the rise of information technology.

Customer Service

All businesses need to have high customer service standards, and it’s especially important for insurance companies to emphasize customer service. Information technology has certainly made this easier.

In the modern world, customers can purchase insurance online. For a lot of people, this is much easier than trying to do the same thing in person. This process is paperless and can be conducted from any location.

Customers can more or less manage everything related to their insurance policies online in the modern world, which makes it easier for them to work with the insurance companies in question every step of the way.

Getting New Leads

In the insurance industry, information technology is particularly important when it comes to lead generation. With modern information technology, it’s possible to generate leads on a broad level and in a particularly convenient manner.

Targeted Marketing

Information technology has also made it easier for insurance companies to target very specific demographics. People will have very different needs when it comes to insurance based on their family structure, age range, health status, and many other factors.

As such, the fact that information technology makes it easier to selectively reach out to specific groups can truly make all the difference for companies trying to spend their marketing budgets wisely.


Cloud Technologies – Containerizing Legacy Apps

Information technologies are continually in a state of transition and organizations often need tools to help them transition from one platform to another, especially with regard to legacy apps. Many companies either still find value in these apps or simply cannot make the transition to Cloud technologies fast enough due to budgetary concerns or other reasons. For these organizations, IBM is now offering the Cloud Private platform, which allows businesses to embrace the Cloud by not only containerizing their legacy apps but also containerizing the platform itself, along with other IBM tools and many of the notable open source databases.

Providing Bridges

Through their Cloud Private platform, IBM provides a bridge between current cloud services and an organization’s on-site data. In essence, it allows a company’s legacy apps to interact with cloud data. IBM understands the value of making a platform accessible to other technologies, and they applied this philosophy to their Cloud Private tools as well. Whether an organization uses Microsoft’s Azure cloud platform or Amazon Web Services, IBM’s Cloud Private is flexible enough to work with both.

A Comprehensive Package

IBM’s Cloud Private platform offers a comprehensive package of tools to help companies mix and mingle their legacy apps with other cloud services and data. The Cloud Private toolset includes components for:

  • Cloud management automation
  • Security and data encryption
  • The core cloud platform
  • Infrastructure options
  • Data and analytics
  • Support for applications
  • DevOps tools

Providing a comprehensive transitioning tool, such as the one IBM developed, should help companies make the most of their investment in their legacy apps. In addition, it will provide them with the time buffer they will need before eventually making a full transition to the Cloud.

Board of Directors Election Results

COMMON Announces 2018-2019 Board of Directors

SAN ANTONIO, Texas, May 23, 2018: COMMON, the largest association of IBM technology users, announced its new Board of Directors during the Meeting of the Members at the Marriott Rivercenter hotel today. With election results in, Amy Hoerle was reelected to the Board of Directors. She is joined by Pete Helgren and Scott Klement, all serving three-year terms.

At the meeting, COMMON also announced the new Board of Directors officers for 2018-2019:

  • President – Larry Bolhuis
  • Vice President – Amy Hoerle
  • Treasurer – Gordon Leary
  • Secretary – Yvonne Enselman
  • Immediate Past President – Justin Porter

Other members of the Board of Directors are Charles Guarino, Steve Pitcher and John Valance. Manzoor Siddiqui, Executive Director of COMMON, and Alison Butterill, World-Wide Product Offering Manager for IBM i, also serve on the Board.

The COMMON Board of Directors is made up of dedicated volunteers elected by the association’s membership to provide stewardship to the organization.

Artificial Intelligence – The Number One IT Career Path

There is good news for those who have decided to acquire artificial intelligence skills as part of their IT career path. IBM Watson gurus will be pleased to learn that AR and VR skills hold the top spot among in-demand skills for at least the next couple of years. According to IDC, total spending on products and services that incorporate AR and/or VR concepts will soar from $11.4 billion in 2017 to almost $215 billion by 2021, phenomenal growth that is going to require a steady stream of IT professionals who can fill the need in these rapidly expanding fields and others. Read on to learn more about the top five IT careers that show nothing less than extreme promise for anyone willing to reach for the rising IT stars.

Computer Vision Engineer

According to the popular job search site Indeed, the top IT position most in demand for the next few years goes to computer vision engineers. These types of positions will require expertise in the creation and continued improvement in computer vision and machine learning algorithms, along with analytics designed to discover, classify, and monitor objects.

Machine Learning Engineer

If vision engineers are responsible for envisioning new ideas, then machine learning engineers are responsible for the actual creation and execution of the resulting products and services. Machine learning engineers actually develop the AI systems and machines that will understand and apply their built-in knowledge.

Network Analyst

As AI continues to grow exponentially, so does the IoT. This means an increased demand for network analysts who can apply their expertise to the expansion of networks required to support a variety of smart devices.

Security Analyst

As AR and VR configurations become more sophisticated, along with more opportunities for exploitation through more smart devices, cyber attacks will become more sophisticated as well. Security analysts will need strong skills in AR, VR and the IoT in order to protect an organization’s valuable assets.

Cloud Engineer

Behind the scenes is the question of how these newer concepts will affect cloud services. The current expectation is that solutions will require a mixture of both in-house technology and outside sources. Cloud engineers will need to be thoroughly familiar with AR and VR concepts in order to give them the necessary support.

Parsing JSON with DATA-INTO! (And, Vote for Me!)

Finally, RPG can parse JSON as easily as it can XML! This month, I’ll take a look at RPG’s brand-new DATA-INTO opcode and how you can use it today in your RPG programs. I’ve combined DATA-INTO with the free YAJL tool to let you parse JSON documents with DATA-INTO in a manner very similar to the way you’ve been using XML-INTO for XML documents.

I’ll also take a moment to ask you to vote in COMMON’s Board of Directors election. I’m running for the Board this year, alongside some other great leaders in our community. Please spread the word and vote!

Board of Directors Election

I need your vote! I am running for COMMON’s Board of Directors, and this is an elected position voted upon by the COMMON membership. Why am I running? I’ve had a love affair with the IBM i community for a long time, and since 2001, I have tried very hard to bring value through articles, free software, online help and giving talks on IBM i subjects. To me, COMMON is one of the main organizations that represents the community, and I feel that this is one more way that I can provide value and give back.

Voting is open from April 22nd to May 22nd. You can learn more about my position as well as the other candidates by clicking here.

To place your vote, you’ll need to log into Cosmo and click the “Board of Directors Election” at the top of the page.

Please vote and help me spread the word! It means a lot!

RPG’s DATA-INTO Opcode: The Concept

Years ago, IBM added the XML-INTO opcode to RPG, which greatly simplified reading XML documents by automatically loading them into a matching RPG variable. Unfortunately, it only works with XML documents, and today, JSON has become even more popular than XML. This led me and many other RPGers to ask IBM for a corresponding JSON-INTO opcode. Instead, IBM had a better idea: DATA-INTO. DATA-INTO is an opcode that can potentially read any data format (with a little help) and place it into an RPG variable. That way, if the industry decides to change to another format in the future, RPG already has it covered.

I expect that you’re thinking what I was thinking: “There are millions of data formats! No single opcode could possibly read them all!” DATA-INTO solves that problem by working together with a third-party routine called a “parser”. The parser is what reads and understands data formats such as JSON or whatever might replace it in the future. It then passes that data back to DATA-INTO as parameters, and DATA-INTO maps the data into RPG variables. Think of the parser as a plug-in for DATA-INTO. As long as you have the right plug-in (or “parser”), you can read a data format. In this article, I’ll present a parser for JSON, so you’ll be able to read JSON documents. Other people have already published parsers for the properties file format and for CSV documents. When “the next big thing” happens, you’ll only need a parser for it, and then DATA-INTO will be able to handle it.

One more way to think of it: it is similar to Open Access. Open Access provides a way to use RPG’s native file support to read any sort of data. Where traditional file access called IBM database routines under the covers, Open Access lets you provide your own routine, so you can read any sort of data and aren’t limited to only IBM’s database. DATA-INTO works the same way, except instead of replacing the logic for files, it replaces the logic for XML-INTO with a general-purpose system.

DATA-INTO Requirements

DATA-INTO is part of the ILE RPG compiler and runtime. No additional products need to be installed besides the aforementioned parser. You will need IBM i 7.2 or newer, and you’ll need the latest PTFs installed.

The PTFs required for 7.2 and 7.3 are found here.

When IBM releases future versions of the operating system (newer than 7.3), the RPG compiler will automatically include DATA-INTO support without additional PTFs.

DATA-INTO will not work with the 7.1 or earlier RPG compilers and will not work with TGTRLS(V7R1M0) or earlier specified, even if you are running it on 7.2.

My JSON parser for DATA-INTO requires the open source YAJL software. You will need a copy from April 17, 2018 (or newer) to have DATA-INTO support. It is available at no charge from my website.

DATA-INTO Syntax

DATA-INTO works almost the same as the XML-INTO opcode that you may already be familiar with. The syntax is as follows:

DATA-INTO your-result-variable %DATA(document : rpg-options) %PARSER(parser : parser-options);

-or-

DATA-INTO %HANDLER(handler : commArea) %DATA(document : rpg-options) %PARSER(parser : parser-options);

If you’re familiar with XML-INTO, you’ll notice that the syntax is identical except for the %PARSER built-in function and its parameters. Here’s a description of what each part means:

your-result-variable = An RPG variable to receive the data. Most often, this will be a data structure that is formatted the same way as the document you are reading. Fields in the document will be mapped to corresponding fields in your variable, based on the variable names. I’ll explain this in more detail below.

%DATA(document : rpg-options) = Specifies the document to be read and the options that are understood and used by RPG (as opposed to the parser) when mapping fields into your variable. The document parameter is either a character string containing the document itself or is an IFS path to where the document can be read from disk. The rpg-options parameter is a character string containing configuration options of how the data should be mapped into variables. (It is the same as the second parameter to the %XML built-in function used with XML-INTO and works the same way.) The following is a summary of those options:

  • path option = specifies a location within the JSON document to begin parsing, and lets you parse only a subset of the document if desired
  • doc option = specify doc=string if the document parameter contains the JSON document itself, or doc=file if it contains an IFS path name (see the sketch after this list)
  • ccsid option = controls whether RPG does its processing in Unicode or EBCDIC
  • case option = controls how strictly a variable name must match the document field names, including whether it’s case sensitive or whether special characters get converted to underscores
  • trim option = controls whether blanks and other whitespace characters are trimmed from character strings
  • allowmissing option = controls whether it is okay for the document to be missing fields that are in the RPG variable
  • allowextra option = controls whether it is okay for the document to have extra fields that are not in the RPG variable
  • countprefix option = a prefix to be added to RPG fields that should contain a count of the number of matching elements (vs the data of the element)
    • For example, the number of entries loaded into an array
    • Can also be used to determine whether an optional element was/wasn’t provided

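To make that concrete, here’s a hedged sketch combining a few of these options. It reads the JSON document from a stream file rather than from a variable (the IFS path and the result variable are made up for illustration):

data-into result %DATA('/home/scott/customers.json'
                      : 'doc=file case=any allowextra=yes countprefix=num_')
                 %PARSER('YAJLINTO');
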
I don’t want this post to get too bogged down with the details of each option, so if you’d like to read more details, please see the ILE RPG reference manual.

%PARSER(parser : parser-options) = Specifies a program or subprocedure to act as a parser (or “plugin” as I like to think of it) that will interpret the document. My parser will be a JSON parser, and it will know how to interpret a JSON document and pass its data to RPG. The parser-options parameter is a literal or variable string that’s intended to be used to configure the parser. The format of parser-options, and which options are available, is determined by the code in the parser routine.

%HANDLER(handler : commArea) = Specifies a handler routine to be used as an alternative to a variable. You use this when your data should be processed in “chunks” rather than reading the entire document at once. I consider this to be a more advanced usage of DATA-INTO (or XML-INTO) that is primarily used when it’s not possible to fit all the needed data into a variable. This was very common back in V5R4 when variables were limited to 64 KB but is not so common today. For that reason, I will not cover it in this article. If you’re interested in learning more, you can read about it in the ILE RPG Reference manual, or e-mail me to suggest this as a topic for a future blog post.

The YAJLINTO Parser

The current version of YAJL ships with a program named YAJLINTO, which is a parser that DATA-INTO can use to read JSON documents. You never need to call YAJLINTO directly. Instead you use it with the %PARSER function. Let’s look at an example!

In a production application, you’d get a JSON document from a parameter, an API or a file. To keep this example simple, I’ve hard-coded it in my RPG program by assigning it to a character string as follows:

dcl-s json varchar(5000);

json = '{ +
     "success": true, +
     "errorMsg": "No error reported", +
     "customers":[{ +
       "name": "ACME, Inc.", +
       "address": { +
        "street": "123 Main Street", +
        "city": "Anytown", +
        "state": "WI", +
        "postal": "53129" +
       } +
     }, +
     { +
       "name": "Industrial Supply Limited.", +
       "address": { +
        "street": "1000 Green Lawn Drive", +
        "city": "Fancyville", +
        "state": "CA", +
        "postal": "95811" +
       } +
     }] +
    }';

In last month’s blog entry, I explained quite a bit about JSON and how to process it with YAJL. I won’t repeat all of the details about how it works, but just to refresh your memory, the curly braces (the { and } characters) start and end a JSON data structure. (They call them “objects”, but it is the same thing as a data structure in RPG.) The square brackets (the [ and ] characters) start and end an array.

Like XML-INTO, DATA-INTO requires that a variable is defined that exactly matches the layout of the JSON document. When RPGers write me complaining of problems with this sort of programming, the problem is almost always that their variable doesn’t quite match the document. So please take care to make them match exactly. In this case, the RPG definition should look like this:

dcl-ds myData qualified;
  success ind;
  errorMsg varchar(500);
  num_customers int(10);
  dcl-ds customers dim(50);
    name varchar(30);
    dcl-ds address;
      street varchar(30);
      city varchar(20);
      state char(2);
      postal varchar(10);
    end-ds;
  end-ds;
end-ds;

Take a moment to compare the RPG data structure to the JSON one. You’ll see that the JSON document starts and ends with curly braces and is therefore a data structure – as is the RPG. Since the JSON structure has no name, it does not matter what I name my RPG structure. So, I called it “myData”. The JSON document contains three fields named success, errorMsg and customers. The RPG code must also use these names so that they match. The customers field in JSON is an array of data structures, as is the RPG version. The address field inside that array of data structures is also a data structure, and therefore the RPG version must also be.

The one field that is different is “num_customers”. To understand that, let’s take a look at how I’m using these definitions with the DATA-INTO opcode.

data-into myData %DATA(json: 'case=any countprefix=num_')
         %PARSER('YAJLINTO');

The first parameter to DATA-INTO is the RPG variable to read the data into, in this case it is the myData data structure shown above.

The JSON data is specified using the %DATA built-in function, which receives the ‘json’ variable – the character string containing the JSON data. The second parameter to %DATA is the options for RPG to use when mapping the fields. I did not need to specify “doc=string” to get the JSON data from a variable because “doc=string” happens to be the default. I did specify two other options, however.

case=any – means that the upper/lowercase of the RPG fields do not have to match that of the JSON fields.

countprefix=num_ – means that if I code a variable starting with the prefix “num_” in my data structure, it should not come from the data, but instead, RPG should populate it with the count of elements. In this case, since I have defined “num_customers” in my data structure, RPG will count the number of elements in the “customers” array and place that count into the “num_customers” field.

That explains why the extra num_customers field is in the RPG data structure. Since customers is an array and I don’t know how many I’ll be sent, RPG can count it for me, and I can use this field to see how many I received. That’s very helpful!

Countprefix is also helpful in cases where a field may sometimes be in the JSON document and sometimes may be omitted. In that case, the count prefixed field will bypass the “allowmissing” check and allow the field to not exist without error. If the field didn’t exist in the JSON, the count will be set to zero, and my RPG code can detect it and handle it appropriately.
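
For instance, if the errorMsg field might be omitted from the document, I could add a num_errorMsg subfield to the data structure (a hypothetical addition to the example above) and check it after the parse:

if myData.num_errorMsg = 0;
   // errorMsg wasn't in the JSON document, so supply a default
   myData.errorMsg = '*NONE';
endif;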

The %PARSER function tells DATA-INTO which parser program to call. In this case, it is YAJLINTO. The %PARSER function is capable of specifying either a program or a subprocedure and can also include a library if needed.

When specifying a program as the parser, it can be in any of the following formats:

'MYPGM'
'*LIBL/MYPGM'
'MYLIB/MYPGM'

Notice that the program and library names are in capital letters. Unless you are using case-sensitive object names on your IBM i (which is very unusual), you should always put the program name in capital letters.

To specify a subprocedure in a service program, use one of the following formats:

'MYSRVPGM(mySubprocedure)'
'MYLIB/MYSRVPGM(mySubprocedure)'
'*LIBL/MYSRVPGM(mySubprocedure)'

The subprocedure name must be in parentheses to denote that it is a subprocedure rather than a program name. Like the program names above, the service program name should be in all capital letters. However, subprocedure names in ILE are case-sensitive, so you will need to be sure to match the case exactly as it is exported from the service program. Use the DSPSRVPGM command to see how the procedures are exported.
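
For example, a command along these lines (the library and service program names are placeholders) lists the exported procedure names:

DSPSRVPGM SRVPGM(MYLIB/MYSRVPGM) DETAIL(*PROCEXP)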

After the DATA-INTO opcode runs successfully, the myData variable will be filled in, and I can use its data in my program just as I would use any other RPG variable. For example, if I wanted to loop through the customers and display their names and cities, I could do this:

dcl-s x int(10);

for x = 1 to myData.num_customers;
  dsply myData.customers(x).name;
  dsply myData.customers(x).address.city;
endfor;

Naturally, you wouldn’t want to use the DSPLY opcode in a production program, but it’s a really easy way to try it and see that you can read the myData data structure and its subfields. Now that you have data in a normal RPG variable, you can proceed to use it in your business logic. Write it to a file if you wish, or print it, place it on a screen, whatever makes sense for your application.

Parser Options for YAJLINTO

Earlier I mentioned that %PARSER has a second parameter for “parser options.” This parameter is optional, and I didn’t use it in the above example. However, there are some options available with YAJLINTO that you might find helpful.

Unlike the standard DATA-INTO (or XML-INTO) options, YAJLINTO expects its options to be JSON formatted. I designed it this way because YAJL already understands JSON format, so it was very easy to code. But, it is also powerful. I can add new features in the future (without interfering with the existing ones) simply by adding new fields to the JSON document.

As I write this, there are three options. All of them are optional, and if not specified, the default value is used.

value_true = the value that will be placed in an RPG field when the JSON document specifies a Boolean value that is true. By default, this puts “1” in the field because it’s assumed that Booleans will be mapped to RPG indicators. You can set this option to any alternate value you’d like to map, up to 100 characters long.

value_false = value placed in an RPG field when JSON document specifies a Boolean value that is false. The default value is “0”.

value_null = value placed in an RPG variable when a JSON document specifies that a field is null. Unfortunately, DATA-INTO does not have the ability to set an RPG field’s null indicator, so a special value must be placed in the field instead. The default is “*NULL”.

For example, consider the following JSON document:

json = '{ "inspected": true, "problems_found": false, +
     "date_shipped": null }';

In this example, I prefer “yes” and “no” for the Boolean fields. It’ll say, “yes it was inspected” or “no problems were found”. The date shipped is a date-type field and therefore cannot be set to the default null value of *NULL. So, I want to map the special value of 0001-01-01 to my date. I can do that as follows:

dcl-ds status qualified;
  inspected varchar(3);
  problems_found varchar(3);
  date_shipped date;
end-ds;

data-into status %data(json: 'case=any')
          %parser('YAJLINTO' : '{ +
             "value_true": "yes", +
             "value_false": "no", +
             "value_null": "0001-01-01" +
          }');

When this code completes, the inspected field in the status data structure will be set to “yes”, and the problems_found field set to “no”. The date_shipped will be set to Jan 1, 0001 (0001-01-01).

Debugging and Diagnostics

Sometimes things don’t work the way you expect, and it can be very helpful to see trace output showing what happened between DATA-INTO and the parser during their processing. To enable this, you’ll need to add an environment variable to the same job as the program that uses DATA-INTO. For example, you could type this:

ADDENVVAR ENVVAR(QIBM_RPG_DATA_INTO_TRACE_PARSER) VALUE('*STDOUT')

In an interactive job, this will cause the trace information to scroll up the screen as data-into runs. In a batch job, it would be sent to the spool. Information will be printed about what character sets were used and which fields and values were found in the document.

For example, in the case of the “status” data structure example in the previous section, the trace output would look like this:

---------------- Start ----------------
Data length 136 bytes
Data CCSID 13488
Converting data to UTF-8
Allocating YAJL stream parser
Parsing JSON data (yajl_parse)
No name provided for struct, assuming name of RPG variable
StartStruct
ReportName: ‘inspected’
ReportValue: ‘yes’
ReportName: ‘problems_found’
ReportValue: ‘no’
ReportName: ‘date_shipped’
ReportValue: ‘0001-01-01’
EndStruct
YAJL parser status 0 after 68 bytes
YAJL parser final status: 0
---------------- Finish ---------------

Writing Your Own Parser

When DATA-INTO is run, it loads the document into memory and then calls the parser. It passes the parser a parameter that contains the document to read as well as information about all of the options the user specified. The parser is then responsible for interpreting the document and calling some subprocedures to tell DATA-INTO what was found.

Writing a parser is best done by someone who is good at systems-type coding. You will need to work with pointers, procedure pointers, CCSID conversions and system APIs. I suspect most RPGers will want to find (or maybe purchase) a third-party parser rather than write their own. For that reason, I will not teach you how to write a parser in this blog entry.

However, if you’d like to learn more about this in a future installment of Scott’s iLand, please leave a comment below. If enough people are interested, I’ll write one.

In the meantime, I suggest the following options:

The Rational Open Access: RPG Edition manual has a chapter on how to write a parser and provides an example of a properties document parser.

Jon Paris and Susan Gantner recently published some articles that explain how to use DATA-INTO as well as write a parser. They provide an example of reading a CSV file.

Attending POWERUp18? Be sure to check out Scott’s sessions.

Interested in learning more about DATA-INTO? Attend this session from Barbara Morris.

See you in San Antonio!