How the Retail Industry Is Benefiting from Improved IT

Improved information technology and software continue to have a major impact on industries across the world. One industry that keeps benefiting from better IT products and services is retail, and it benefits in a number of different ways.

Improved Security

One of the main ways the retail industry benefits from improved IT is through the security programs now available. Over the past few years, several major retailers have announced that their systems were hacked and that data for millions of consumers was lost. Today, more IT security software and services are geared toward helping retailers prevent such breaches, through secure cloud-based networks and other systems that are difficult to access illegally.

Inventory Management

One of the biggest challenges that retailers have always had to deal with is inventory management. Those retailers that are not able to manage their inventory will often have too much of an unpopular product and not enough of the best-selling items. Today, through the use of a variety of IT programs and inventory management systems, companies are able to get better real-time inventory reports that can allow them to automatically modify orders from suppliers.

Mobile Shopping

Mobile apps are also gaining popularity with consumers and retailers today. These applications let users complete entire transactions from their phones while ensuring that their data is properly protected, providing a more convenient and enjoyable shopping experience. The increased use of mobile applications has also reduced the need for as many brick-and-mortar stores, which has helped many retailers cut their operating costs.

Understanding Different Disaster Recovery Strategies and Methods

Many information technology professionals come to understand that disaster recovery has several different elements. Categorizing different disaster recovery methods can help information technology departments protect what they have.

Precautionary Procedures

Disasters can strike at any time, and information technology departments need to be ready before there is any indication that one will happen. Part of the process is having solid off-site copies of important data available in several locations.

Making sure information technology departments are equipped with generators and surge protectors can also prevent massive data loss at the most basic level. It’s also a good idea to monitor the department regularly, giving professionals the opportunity to recognize the warning signs of problems.

Identification of Threats

Even the most carefully maintained information technology departments will face threats eventually, and they need to be skilled at finding them. Antivirus software is used to find threats that are already in place. However, information technology departments can potentially face many different threats. Even something as simple as safety alarms can help protect these organizations.

Restorative Methods

Information technology departments have to prepare for the possibility that they will not catch every threat, a reality almost all of them eventually face. Having the right insurance policies is part of the picture here, especially given the importance of data in the modern world. Working with data recovery professionals who can repair damaged systems is also important.

Departments that have all of these different methods in place, or more, will be less likely to face truly devastating problems at any point.

3 Steps You Can Take to Improve Your IBM i’s Security and Ease of Administration

By Dana Boehler

Securing an expansive platform like an IBM i system can be an intimidating task, one that often falls into the hands of a systems administrator when more specialized help is not available in-house. Deciding which tasks and projects will add value while reducing administrative overhead is also difficult. In this article, I have chosen three things you can do in your environment to get started, listed in ascending order of time and effort.

1. Run the ANZDFTPWD Command

This command checks the profiles on your system for passwords that are the same as the user profile name and outputs the list to a spooled file. Even on systems with well-controlled *SECADM privileges (the special authority that allows a user to create and administer user profiles), you will find user profiles that have either been created with, or reset to, a password that is the same as the user profile name, which could give an unauthorized user a way to gain access to system resources. Additionally, the command has options to disable or expire any user profiles found to have default passwords, if desired.
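
For example, you can run the command in report-only mode first and review the spooled file before taking action (a sketch of the command’s documented actions):

ANZDFTPWD ACTION(*NONE)    /* report only; writes the list to a spooled file */
ANZDFTPWD ACTION(*DISABLE) /* report and also disable the profiles found     */
ANZDFTPWD ACTION(*PWDEXP)  /* report and also set those passwords to expired */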

2. Use SQL to Query Security Information from Library QSYS2

In recent updates to the supported IBM i OS versions, IBM made a very powerful set of tools available for querying live system and security data using SQL statements. These let users with the appropriate authority create very specific reports on user profiles, group profiles, system values, audit journal data, authorization lists, PTF information and many other useful data points. The files in QSYS2 are views that directly access the information being queried, so the data is current every time a statement is run. One of the best things about creating output this way is that there is no need to create an outfile to query from, or to refresh it before re-querying. A detailed list of the information available and the necessary PTF and OS levels required to use these tools can be found here.
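
For example, a quick sketch using the QSYS2.USER_INFO view to list enabled profiles that hold *ALLOBJ special authority might look like this:

SELECT AUTHORIZATION_NAME, STATUS, PREVIOUS_SIGNON
  FROM QSYS2.USER_INFO
  WHERE SPECIAL_AUTHORITIES LIKE '%*ALLOBJ%'
    AND STATUS = '*ENABLED';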

3. Implement a Role-based Security Scheme

The saying used to be that the IBM i OS “is very secure”, but that statement has changed to the more accurate “is very securable”. This change in language reflects the reality that these systems are very open to the world as shipped but can be among the most secure systems when deployed with security in mind. For those who are not aware of role-based authority on IBM i, it is basically a way of restricting access to system resources using authorities derived from group profiles. Group profiles are created for functions within the organization, and authorities are assigned to those group profiles. When a user profile is created, it is configured with no direct access to objects on the system; instead, group profiles are added to allow access according to job function. Although implementing role-based security may seem like a daunting task, it pays huge dividends in ease of administration once the project is in place. For one thing, having role-based security in place allows the administrator to quickly change security settings for whole groups of users at once when needed, instead of touching each user’s profile. It also allows group profiles, rather than individual user profiles, to own objects, which makes it much easier to remove users who create large numbers of objects or objects that are constantly locked. And because role-based security relies on group profiles for authority, the likelihood of inadvertently granting a user too much or too little authority by copying another, similar user is far lower.
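
As a minimal sketch (the profile, library and object names here are hypothetical), the pattern looks like this:

/* Group profile for a job function; PASSWORD(*NONE) prevents direct sign-on */
CRTUSRPRF USRPRF(APCLERK) PASSWORD(*NONE) TEXT('Accounts payable group')

/* User inherits authority through the group; objects they create are group-owned */
CRTUSRPRF USRPRF(JSMITH) GRPPRF(APCLERK) OWNER(*GRPPRF) TEXT('Jane Smith')

/* Authority is granted once, to the group, instead of to each user */
GRTOBJAUT OBJ(APLIB/APMASTER) OBJTYPE(*FILE) USER(APCLERK) AUT(*CHANGE)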

These are just a few of the things you can do to get started securing your IBM i. In future posts, I intend to delve into more depth, especially regarding role-based security.

Guest Blogger

Dana Boehler is a Senior Systems Engineer at Rocket Software.

Disaster Recovery and Preparing For the Worst

Disaster recovery is largely about preparation. If the correct procedures and protocols are not in place, information technology departments will find themselves losing a lot of data. All information technology departments need to set priorities when it comes to disaster recovery procedures, which will help them solve these problems with efficiency.

Making Sure All Employees Are Prepared

Information technology departments are often large enough that many different people will be involved in the disaster recovery procedures. All employees need to know in advance what they personally need to do in the event of a major disaster.

Disaster recovery needs to be part of their training right from the start, but it should also be part of the training that they receive as long-term employees.

Disaster Recovery Specialization

It is true that many employees will be involved in the disaster recovery procedures themselves. However, those procedures need to be established in advance by a committee that includes disaster recovery specialists.

Disaster recovery is complex enough in the modern world that it is possible to be an information technology professional who primarily focuses on it. People like this need to be involved in the planning stage.

Focusing On Certain Functions

Some functions will be more important than others in different organizations. Concentrating on the most crucial functions first will create the best results. This can be complicated, since some functions might be particularly important at certain points during the year and less important at other times.

As such, the specific actions of a disaster recovery team may vary with the time of year. When the team knows all these details in advance, the results will be that much better.

Find disaster recovery and high availability videos in the COMMON Webcast Library.

IT in the Insurance Industry Now

The effects of IT in the insurance industry have been very broad. People who interact with the insurance industry at all levels will see how it has been influenced by the rise of information technology.

Customer Service

All businesses need to have high customer service standards, and it’s especially important for insurance companies to emphasize customer service. Information technology has certainly made this easier.

In the modern world, customers can purchase insurance online. For a lot of people, this is much easier than trying to do the same thing in person. This process is paperless and can be conducted from any location.

Customers can more or less manage everything related to their insurance policies online in the modern world, which makes it easier for them to work with the insurance companies in question every step of the way.

Getting New Leads

In the insurance industry, information technology is particularly important when it comes to lead generation. With modern information technology, it’s possible to generate leads on a broad level and in a particularly convenient manner.

Targeted Marketing

Information technology has also made it easier for insurance companies to target very specific demographics. People will have very different needs when it comes to insurance based on their family structure, age range, health status, and many other factors.

As such, the ability to selectively reach specific groups can make all the difference for companies trying to spend their marketing budgets wisely.

Cloud Technologies – Containerizing Legacy Apps

Information technologies are continually in a state of transition, and organizations often need tools to help them move from one platform to another, especially with regard to legacy apps. Many companies either still find value in these apps or simply cannot make the transition to Cloud technologies fast enough due to budgetary concerns or other reasons. For these organizations, IBM is now offering the Cloud Private platform, which allows businesses to embrace the Cloud by not only containerizing their legacy apps but also containerizing the platform itself, along with other IBM tools and many of the notable open source databases.

Providing Bridges

Through the Cloud Private platform, IBM provides a bridge between current cloud services and an organization’s on-site data infrastructure. In essence, it allows a company’s legacy apps to interact with cloud data. IBM understands the value of making a platform accessible to other technologies, and it applied this philosophy to the Cloud Private tools as well. Whether an organization uses Microsoft’s Azure cloud platform or Amazon Web Services, IBM’s Cloud Private is flexible enough to work with either.

A Comprehensive Package

IBM’s Cloud Private platform offers a comprehensive package of tools to help companies mix and mingle their legacy apps with other cloud services and data. The Cloud Private toolset includes components for:

  • Cloud management automation
  • Security and data encryption
  • The core cloud platform
  • Infrastructure options
  • Data and analytics
  • Support for applications
  • DevOps tools

Providing a comprehensive transitioning tool, such as the one IBM developed, should help companies make the most of their investment in their legacy apps. In addition, it will provide them with the time buffer they will need before eventually making a full transition to the Cloud.

Artificial Intelligence – The Number One IT Career Path

There is good news for those who have decided to acquire artificial intelligence skills as part of their IT career path. IBM Watson gurus will be pleased to learn that AR and VR skills hold the top spot among in-demand skills for at least the next couple of years. According to IDC, total spending on products and services that incorporate AR and/or VR concepts will soar from $11.4 billion in 2017 to almost $215 billion by 2021, phenomenal growth that will require a steady stream of IT professionals to fill the need in these rapidly expanding fields and others. Read on to learn more about the top five IT careers that show nothing less than extreme promise for anyone willing to reach for the rising IT stars.

Computer Vision Engineer

According to the popular job search site Indeed, the top IT position most in demand for the next few years is computer vision engineer. These positions require expertise in the creation and continued improvement of computer vision and machine learning algorithms, along with analytics designed to discover, classify, and monitor objects.

Machine Learning Engineer

If vision engineers are responsible for envisioning new ideas, then machine learning engineers are responsible for the actual creation and execution of the resulting products and services. Machine learning engineers actually develop the AI systems and machines that will understand and apply their built-in knowledge.

Network Analyst

As AI continues to grow exponentially, so does the IoT. This means an increased demand for network analysts who can apply their expertise to the expansion of networks required to support a variety of smart devices.

Security Analyst

As AR and VR configurations become more sophisticated, along with more opportunities for exploitation through more smart devices, cyber attacks will become more sophisticated as well. Security analysts will need strong skills in AR, VR and the IoT in order to protect an organization’s valuable assets.

Cloud Engineer

Behind the scenes is the question of how these newer concepts will affect cloud services. Current expectations are that solutions will require a mixture of both in-house technology and outside sources. Cloud engineers will need to familiarize themselves thoroughly with AR and VR concepts in order to provide the necessary support.

Parsing JSON with DATA-INTO! (And, Vote for Me!)

Finally, RPG can parse JSON as easily as it can XML! This month, I’ll take a look at RPG’s brand-new DATA-INTO opcode and how you can use it today in your RPG programs. I’ve combined DATA-INTO with the free YAJL tool to let you parse JSON documents with DATA-INTO in a manner very similar to the way you’ve been using XML-INTO for XML documents.

I’ll also take a moment to ask you to vote in COMMON’s Board of Directors election. I’m running for the Board this year, alongside some other great leaders in our community. Please spread the word and vote!

Board of Directors Election

I need your vote! I am running for COMMON’s Board of Directors, and this is an elected position voted upon by the COMMON membership. Why am I running? I’ve had a love affair with the IBM i community for a long time, and since 2001, I have tried very hard to bring value through articles, free software, online help and giving talks on IBM i subjects. To me, COMMON is one of the main organizations that represents the community, and I feel that this is one more way that I can provide value and give back.

Voting is open from April 22nd to May 22nd. You can learn more about my position as well as the other candidates by clicking here.

To place your vote, you’ll need to log into Cosmo and click the “Board of Directors Election” at the top of the page.

Please vote and help me spread the word! It means a lot!

RPG’s DATA-INTO Opcode: The Concept

Years ago, IBM added the XML-INTO opcode to RPG, which greatly simplified reading XML documents by automatically loading them into a matching RPG variable. Unfortunately, it only works with XML documents, and today, JSON has become even more popular than XML. This led me and many other RPGers to ask IBM for a corresponding JSON-INTO opcode. Instead, IBM had a better idea: DATA-INTO. DATA-INTO is an opcode that can potentially read any data format (with a little help) and place it into an RPG variable. That way, if the industry decides to change to another format in the future, RPG already has it covered.

I expect that you’re thinking what I was thinking: “There are millions of data formats! No single opcode could possibly read them all!” DATA-INTO solves that problem by working together with a third-party routine called a “parser”. The parser is what reads and understands data formats such as JSON or whatever might replace it in the future. It then passes that data back to DATA-INTO as parameters, and DATA-INTO maps the data into RPG variables. Think of the parser as a plug-in for DATA-INTO. As long as you have the right plug-in (or “parser”), you can read a data format. In this article, I’ll present a parser for JSON, so you’ll be able to read JSON documents. Other people have already published parsers for the properties file format and for CSV documents. When “the next big thing” happens, you’ll only need a parser for it, and then DATA-INTO will be able to handle it.

One more way to think of it: it is similar to Open Access. Open Access provides a way to use RPG’s native file support to read any sort of data. Where traditional file access called IBM database routines under the covers, Open Access lets you provide your own routine, so you can read any sort of data and aren’t limited to only IBM’s database. DATA-INTO works the same way, except instead of replacing the logic for files, it replaces the logic for XML-INTO with a general-purpose system.

DATA-INTO Requirements

DATA-INTO is part of the ILE RPG compiler and runtime. No additional products need to be installed besides the aforementioned parser. You will need IBM i 7.2 or newer, and you’ll need the latest PTFs installed.

The PTFs required for 7.2 and 7.3 are found here.

When IBM releases future versions of the operating system (newer than 7.3), the RPG compiler will automatically include DATA-INTO support without additional PTFs.

DATA-INTO will not work with the 7.1 or earlier RPG compilers and will not work with TGTRLS(V7R1M0) or earlier specified, even if you are running it on 7.2.

My JSON parser for DATA-INTO requires the open source YAJL software. You will need a copy from April 17, 2018 (or newer) to have DATA-INTO support. It is available at no charge from my website.

DATA-INTO Syntax

DATA-INTO works almost the same as the XML-INTO opcode that you may already be familiar with. The syntax is as follows:

DATA-INTO your-result-variable %DATA(document : rpg-options) %PARSER(parser : parser-options);

-or-

DATA-INTO %HANDLER(handler : commArea) %DATA(document : rpg-options) %PARSER(parser : parser-options);

If you’re familiar with XML-INTO, you’ll notice that the syntax is identical except for the %PARSER built-in function and its parameters. Here’s a description of what each part means:

your-result-variable = An RPG variable to receive the data. Most often, this will be a data structure that is formatted the same way as the document you are reading. Fields in the document will be mapped to corresponding fields in your variable, based on the variable names. I’ll explain this in more detail below.

%DATA(document : rpg-options) = Specifies the document to be read and the options that are understood and used by RPG (as opposed to the parser) when mapping fields into your variable. The document parameter is either a character string containing the document itself or an IFS path from which the document can be read. The rpg-options parameter is a character string containing configuration options for how the data should be mapped into variables. (It is the same as the second parameter to the %XML built-in function used with XML-INTO and works the same way.) The following is a summary of those options:

  • path option = specifies a location within the JSON document to begin parsing, letting you parse only a subset of the document if desired
  • doc option = specify doc=string if the document parameter contains the JSON document itself, or doc=file if it contains an IFS path name
  • ccsid option = controls whether RPG does its processing in Unicode or EBCDIC
  • case option = controls how strictly a variable name must match the document field names, including whether it’s case-sensitive and whether special characters get converted to underscores
  • trim option = controls whether blanks and other whitespace characters are trimmed from character strings
  • allowmissing option = controls whether it is okay for the document to be missing fields that are in the RPG variable
  • allowextra option = controls whether it is okay for the document to have extra fields that are not in the RPG variable
  • countprefix option = a prefix to be added to RPG fields that should contain a count of the number of matching elements (rather than the data of the element)
    • For example, the number of entries loaded into an array
    • Can also be used to determine whether an optional element was or wasn’t provided

I don’t want this post to get too bogged down with the details of each option, so if you’d like to read more details, please see the ILE RPG reference manual.
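
To make this concrete, here is a minimal sketch (the IFS path and variable name are hypothetical) combining several of these options, using the YAJLINTO parser described below:

data-into myOrders %DATA('/tmp/orders.json'
            : 'doc=file case=any allowextra=yes countprefix=num_')
          %PARSER('YAJLINTO');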

%PARSER(parser : parser-options) = Specifies a program or subprocedure to act as a parser (or “plugin” as I like to think of it) that will interpret the document. My parser will be a JSON parser, and it will know how to interpret a JSON document and pass its data to RPG. The parser-options parameter is a literal or variable string that’s intended to be used to configure the parser. The format of parser-options, and which options are available, is determined by the code in the parser routine.

%HANDLER(handler : commArea) = Specifies a handler routine to be used as an alternative to a variable. You use this when your data should be processed in “chunks” rather than reading the entire document at once. I consider this to be a more advanced usage of DATA-INTO (or XML-INTO) that is primarily used when it’s not possible to fit all the needed data into a variable. This was very common back in V5R4 when variables were limited to 64 KB but is not so common today. For that reason, I will not cover it in this article. If you’re interested in learning more, you can read about it in the ILE RPG Reference manual, or e-mail me to suggest this as a topic for a future blog post.

The YAJLINTO Parser

The current version of YAJL ships with a program named YAJLINTO, a parser that DATA-INTO can use to read JSON documents. You never need to call YAJLINTO directly. Instead, you use it with the %PARSER function. Let’s look at an example!

In a production application, you’d get a JSON document from a parameter, an API or a file. To keep this example simple, I’ve hard-coded it in my RPG program by assigning it to a character string as follows:

dcl-s json varchar(5000);

json = '{ +
     "success": true, +
     "errorMsg": "No error reported", +
     "customers":[{ +
       "name": "ACME, Inc.", +
       "address": { +
        "street": "123 Main Street", +
        "city": "Anytown", +
        "state": "WI", +
        "postal": "53129" +
       } +
     }, +
     { +
       "name": "Industrial Supply Limited.", +
       "address": { +
        "street": "1000 Green Lawn Drive", +
        "city": "Fancyville", +
        "state": "CA", +
        "postal": "95811" +
       } +
     }] +
    }';

In last month’s blog entry, I explained quite a bit about JSON and how to process it with YAJL. I won’t repeat all of the details about how it works, but just to refresh your memory, the curly braces (the { and } characters) start and end a JSON data structure. (They call them “objects”, but it is the same thing as a data structure in RPG.) The square brackets (the [ and ] characters) start and end an array.

Like XML-INTO, DATA-INTO requires that a variable is defined that exactly matches the layout of the JSON document. When RPGers write me complaining of problems with this sort of programming, the problem is almost always that their variable doesn’t quite match the document. So please take care to make them match exactly. In this case, the RPG definition should look like this:

dcl-ds myData qualified;
  success ind;
  errorMsg varchar(500);
  num_customers int(10);
  dcl-ds customers dim(50);
    name varchar(30);
    dcl-ds address;
      street varchar(30);
      city varchar(20);
      state char(2);
      postal varchar(10);
    end-ds;
  end-ds;
end-ds;

Take a moment to compare the RPG data structure to the JSON one. You’ll see that the JSON document starts and ends with curly braces and is therefore a data structure – as is the RPG. Since the JSON structure has no name, it does not matter what I name my RPG structure. So, I called it “myData”. The JSON document contains three fields named success, errorMsg and customers. The RPG code must also use these names so that they match. The customers field in JSON is an array of data structures, as is the RPG version. The address field inside that array of data structures is also a data structure, and therefore the RPG version must also be.

The one field that is different is “num_customers”. To understand that, let’s take a look at how I’m using these definitions with the DATA-INTO opcode.

data-into myData %DATA(json: 'case=any countprefix=num_')
         %PARSER('YAJLINTO');

The first parameter to DATA-INTO is the RPG variable to read the data into, in this case it is the myData data structure shown above.

The JSON data is specified using the %DATA built-in function, which receives the ‘json’ variable – the character string containing the JSON data. The second parameter to %DATA is the options for RPG to use when mapping the fields. I did not need to specify “doc=string” to get the JSON data from a variable because “doc=string” happens to be the default. I did specify two other options, however.

case=any – means that the upper/lowercase of the RPG fields do not have to match that of the JSON fields.

countprefix=num_ – means that if I code a variable starting with the prefix “num_” in my data structure, it should not come from the data, but instead, RPG should populate it with the count of elements. In this case, since I have defined “num_customers” in my data structure, RPG will count the number of elements in the “customers” array and place that count into the “num_customers” field.

That explains why the extra num_customers field is in the RPG data structure. Since customers is an array and I don’t know how many I’ll be sent, RPG can count it for me, and I can use this field to see how many I received. That’s very helpful!

Countprefix is also helpful in cases where a field may sometimes be in the JSON document and sometimes be omitted. In that case, the count-prefixed field bypasses the allowmissing check and lets the field be absent without error. If the field doesn’t exist in the JSON, the count will be set to zero, and my RPG code can detect that and handle it appropriately.
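
As a quick sketch (using the myData structure from above), the count field can drive your logic like this:

if myData.num_customers = 0;
  // the customers array was missing or empty
else;
  for x = 1 to myData.num_customers;
    // process myData.customers(x) here
  endfor;
endif;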

The %PARSER function tells DATA-INTO which parser program to call. In this case, it is YAJLINTO. The %PARSER function is capable of specifying either a program or a subprocedure and can also include a library if needed.

When specifying a program as the parser, it can be in any of the following formats:

'MYPGM'
'*LIBL/MYPGM'
'MYLIB/MYPGM'

Notice that the program and library names are in capital letters. Unless you are using case-sensitive object names on your IBM i (which is very unusual), you should always put the program name in capital letters.

To specify a subprocedure in a service program, use one of the following formats:

'MYSRVPGM(mySubprocedure)'
'MYLIB/MYSRVPGM(mySubprocedure)'
'*LIBL/MYSRVPGM(mySubprocedure)'

The subprocedure name must be in parentheses to denote that it is a subprocedure rather than a program name. Like the program names above, the service program name should be in all capital letters. However, subprocedure names in ILE are case-sensitive, so you will need to match the case exactly as it is exported from the service program. Use the DSPSRVPGM command to see how the procedures are exported.
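
For instance, if a (hypothetical) service program named JSONPARSR in library MYLIB exported a subprocedure named parseJson, the call would look like this:

data-into myData %DATA(json : 'case=any')
         %PARSER('MYLIB/JSONPARSR(parseJson)');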

After the DATA-INTO opcode runs successfully, the myData variable will be filled in, and I can use its data in my program just as I would use any other RPG variable. For example, if I wanted to loop through the customers and display their names and cities, I could do this:

dcl-s x int(10);

for x = 1 to myData.num_customers;
  dsply myData.customers(x).name;
  dsply myData.customers(x).address.city;
endfor;

Naturally, you wouldn’t want to use the DSPLY opcode in a production program, but it’s a really easy way to try it and see that you can read the myData data structure and its subfields. Now that you have data in a normal RPG variable, you can proceed to use it in your business logic. Write it to a file if you wish, or print it, place it on a screen, whatever makes sense for your application.

Parser Options for YAJLINTO

Earlier I mentioned that %PARSER has a second parameter for “parser options.” This parameter is optional, and I didn’t use it in the above example. However, there are some options available with YAJLINTO that you might find helpful.

Unlike the standard DATA-INTO (or XML-INTO) options, YAJLINTO expects its options to be JSON formatted. I designed it this way because YAJL already understands JSON format, so it was very easy to code. But, it is also powerful. I can add new features in the future (without interfering with the existing ones) simply by adding new fields to the JSON document.

As I write this, there are three options. All of them are optional, and if not specified, the default value is used.

value_true = the value that will be placed in an RPG field when the JSON document specifies a Boolean value that is true. By default, this puts “1” in the field because it’s assumed that Booleans will be mapped to RPG indicators. You can set this option to any alternate value you’d like to map, up to 100 characters long.

value_false = value placed in an RPG field when JSON document specifies a Boolean value that is false. The default value is “0”.

value_null = value placed in an RPG variable when a JSON document specifies that a field is null. Unfortunately, DATA-INTO does not have the ability to set an RPG field’s null indicator, so a special value must be placed in the field instead. The default is “*NULL”.

For example, consider the following JSON document:

json = '{ "inspected": true, "problems_found": false, +
     "date_shipped": null }';

In this example, I prefer “yes” and “no” for the Boolean fields. It’ll say, “yes it was inspected” or “no problems were found”. The date shipped is a date-type field and therefore cannot be set to the default null value of *NULL. So, I want to map the special value of 0001-01-01 to my date. I can do that as follows:

dcl-ds status qualified;
  inspected varchar(3);
  problems_found varchar(3);
  date_shipped date;
end-ds;

data-into status %DATA(json : 'case=any')
         %PARSER('YAJLINTO' : '{ +
           "value_true": "yes", +
           "value_false": "no", +
           "value_null": "0001-01-01" +
         }');

When this code completes, the inspected field in the status data structure will be set to “yes”, and the problems_found field to “no”. The date_shipped field will be set to Jan 1, 0001 (0001-01-01).

Debugging and Diagnostics

RPG can produce trace output showing what happens between DATA-INTO and the parser during their processing. To enable this, you’ll need to add an environment variable to the same job as the program that uses DATA-INTO. For example, you could type this:

ADDENVVAR ENVVAR(QIBM_RPG_DATA_INTO_TRACE_PARSER) VALUE('*STDOUT')

In an interactive job, this will cause the trace information to scroll up the screen as data-into runs. In a batch job, it would be sent to the spool. Information will be printed about what character sets were used and which fields and values were found in the document.
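
When you’ve finished debugging, you can turn tracing off by removing the environment variable:

RMVENVVAR ENVVAR(QIBM_RPG_DATA_INTO_TRACE_PARSER)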

For example, in the case of the “status” data structure example in the previous section, the trace output would look like this:

---------------- Start ----------------
Data length 136 bytes
Data CCSID 13488
Converting data to UTF-8
Allocating YAJL stream parser
Parsing JSON data (yajl_parse)
No name provided for struct, assuming name of RPG variable
StartStruct
ReportName: ‘inspected’
ReportValue: ‘yes’
ReportName: ‘problems_found’
ReportValue: ‘no’
ReportName: ‘date_shipped’
ReportValue: ‘0001-01-01’
EndStruct
YAJL parser status 0 after 68 bytes
YAJL parser final status: 0
---------------- Finish ----------------

Writing Your Own Parser

When DATA-INTO is run, it loads the document into memory and then calls the parser, passing a parameter that contains the document to read as well as information about all of the options the user specified. The parser is then responsible for interpreting the document and calling subprocedures to tell DATA-INTO what was found.

Writing a parser is best done by someone who is good at systems-type coding. You will need to work with pointers, procedure pointers, CCSID conversions and system APIs. I suspect most RPGers will want to find (or maybe purchase) a third-party parser rather than write their own. For that reason, I will not teach you how to write a parser in this blog entry.

However, if you’d like to learn more about this in a future installment of Scott’s iLand, please leave a comment below. If enough people are interested, I’ll write one.

In the meantime, I suggest the following options:

The Rational Open Access: RPG Edition manual has a chapter on how to write a parser and provides an example of a properties document parser.

Jon Paris and Susan Gantner recently published some articles that explain how to use DATA-INTO as well as write a parser. They provide an example of reading a CSV file.

Attending POWERUp18? Be sure to check out Scott’s sessions.

Interested in learning more about DATA-INTO? Attend this session from Barbara Morris.

See you in San Antonio!

The Strange World of Bug Bounty Hunters

How do software companies find dangerous bugs in their code? Ideally, their own QA departments discover them before the software is released. Sometimes they find out only when customers have problems. That might mean after there’s been a breach. But sometimes they hear about bugs from freelancers who find them in return for a reward. These people are called bug bounty hunters.

Some companies find it worthwhile to offer payment for bug reports. Learning about security holes before anyone can exploit them can save the company’s reputation, which is worth a lot of money. Recently Google paid $112,500 to a researcher for discovering a flaw that could have let a website push arbitrary code into an Android device. Having to deal with it after criminals found out could have been far more expensive.

The Mind of the Bounty Hunter

Bug hunting makes up half or more of some people’s income. They spend hours every day looking for flaws in websites. How different are they, really, from those who do the same thing and use their discoveries to steal information? Sometimes the same person plays both sides of the fence, depending on which one is paying better.

It’s the challenge, perhaps even more than the money, that motivates them. Anyone with those skills could get a well-paying job in QA. But they’d rather be on their own, chasing down bugs without reporting to a boss. Their attitude is, “So you think I can’t break this code? I’ll show you!”

The Benefit to Users

A bounty may encourage hackers to stay within the law. It can even motivate them to work harder at what they like to do. It’s easier to explain their income when it comes from Google rather than the Shadow Brokers, and there’s less chance of blackmail afterward. When they report bugs, the software publisher can fix them before anyone is harmed.

Software bounty hunters are a strange breed, there’s no question. But they do all of us some good.

Need to learn more about IT security? Take a look at POWERUp18 security sessions.

Completely Free ILEditor and IBM Technology Refresh Recap

Today I’ll look at a powerful open source (and completely free!) IDE for ILE programs (CL, C/C++, COBOL or RPG) named ILEditor, which is being actively developed by Liam Allan, one of the brightest minds in the industry. In fact, last week Allan added a new GUI to the editor that makes it feel much more professional while keeping it easy to use. I’ll also give you a quick overview of the announcement IBM made last week about updates to IBM i 7.2 and 7.3.

The IBM Announcement

On February 13th, just in time for Valentine’s Day (because IBM wants to be my valentine!), IBM announced new Technology Refreshes. These include support for POWER9 processors, which look incredible – but, alas, I’m not a hardware guy. They also include updates to Integrated Web Services (IWS), Access Client Solutions (ACS), RPG and more.
Here are links to the official announcements:

IBM i 7.2 Technology Refresh 8 (TR8)

IBM i 7.3 Technology Refresh 4 (TR4)

You should also check out Steve Will’s blog post.

My Thoughts

The most exciting part of this announcement for me is the introduction of the new DATA-INTO opcode in RPG. Here’s the sample code that IBM provided in the announcement:

DATA-INTO myDs %DATA('myfile.json' : 'doc=file') %PARSER('MYLIB/MYJSONPARS');

It appears that this will work similarly to Open Access, where the RPG compiler will examine your data structure and other variables that it has all the details for and work together with a back-end handler that will map it into a structured format. Open Access refers to the back-end program as a “handler”, whereas DATA-INTO seems to call it a “parser”, but the general idea is the same.

As someone who has written multiple open source tools to help RPG developers work with XML and JSON documents, this looks great! One of the biggest challenges I face with these open source projects is that they don’t know the details of the calling program’s variables, so they can’t ever be as easy to use as a tool like XML-INTO. For example, the YAJL tools that I provide to help people read JSON documents require much more code than the XML-INTO opcode, because XML-INTO can read the layout of a data structure and map data into it, whereas with YAJL you must map this data yourself. However, DATA-INTO looks like it will solve this problem, so that once I’ve had time to write a DATA-INTO parser, you’ll be able to use YAJL the same way as XML-INTO.

Unfortunately, as I write this, the PTFs are not yet available, so I haven’t been able to try it. I’m very excited, however, and plan to blog about it as soon as I’ve had a chance to try it out!

What is ILEditor?

ILEditor (pronounced “I-L-Editor”) came from the mind of Liam Allan, who is one of the best and brightest of the 2018 IBM Champions. I have the privilege of working with Liam at Profound Logic Software, and I can tell you that his enthusiasm for computer technology and IBM i programming knows no bounds. In fact, one day last week after work, Liam sent me a text message about his new changes to ILEditor, sounding very excited. When I factored in the time zone difference, I realized it was 1:00 a.m. where he lives!

For many years, one of the most common laments in the IBM i programming community has been about the cost and performance of RDi. Please don’t misunderstand me. I love RDi, and I use it every day. I believe RDi is the best IDE for IBM i development available today. That said, sometimes we need something else for various reasons. Some shops can’t get approval for the cost of RDi. Others might want something that uses fewer resources or something they can install anywhere without needing additional RDi licenses. Whatever the reason, ILEditor is a very promising alternative! I wouldn’t be surprised if it eventually is able to compete with RDi.

Why Not Orion? Or SEU?

The concept of Orion is great. It’s web-based, meaning that you don’t have to install it, and it’s available wherever you go. Unfortunately, it’s not really a full IDE – at least not yet! I hope IBM is working to improve it. It does not know how to compile native ILE programs or show compile errors. Its interface is designed around the Git version control software, which makes it tricky to use unless you happen to store your code in Git. And quite frankly, it’s also a little bit buggy. I hope to see improvements in these areas, but right now it’s not a real option.

The most popular alternative to RDi today is SEU. In fact, historically this was the primary way that code was written for IBM i. So, you may think it’s still a good choice. However, I don’t think it’s viable today for two reasons:

  1. The green-screen nature makes it cumbersome to use. This is no problem for a veteran programmer, because they’re used to it. But for IT departments to survive, they need to bring in younger talent. Younger talent is almost always put off by SEU. I even know students who gave up the platform entirely because they thought SEU seemed so antiquated, and they wanted no part of it.
  2. SEU hasn’t received any updates since January 2008. That means all features added to RPG in the past 10 years, spanning three major releases of the operating system, will show as syntax errors in SEU.

About ILEditor

ILEditor runs on Windows and was released as open source under the GNU GPL 3.0 license. That means it is free and can be used for both private and commercial purposes. If you like, you can even download the source code and make your own changes. It can read source from source members or IFS files. In addition to editing source, it can compile programs, show you the errors in your programs, work with system objects and display spooled files. It even has an Outline View (like RDi’s) that will show you the variables and routines in your program.

The main web site for ILEditor is: worksofbarry.com/ileditor/.

If you want to see the source code, you’ll find the Github project here.

You do not need to install any software on your IBM i to use ILEditor. Instead, the Windows program uses the standard FTP server that is provided with the IBM i operating system to get object and source information and to run compile commands. An FTPES (FTP over SSL) option is provided if a more secure connection is desired.

Connecting for the First Time

When you start ILEditor, it will present you with a box where you can select the host to connect to. Naturally, the first time you run it there will be no hosts defined, so the box will be empty. You can click “New Host” to define one.

Once you have a host defined, it will be visible as an icon, and double-clicking the icon will begin the connection.

When you set up a new system, there are five fields you must supply, as shown in the screenshot below:

Alias name = You can set this to whatever you wish. ILEditor will display this name when asking you which host to connect to, so pick something that is easy to remember.

Host name / IP address = the DNS name or IP address of the IBM i to connect to.

Username = Your IBM i user profile name.

Password = Your IBM i password – you can leave this blank if you want it to ask you every time you connect.

Use FTPES = This stands for FTP over Explicit SSL. Check this box if your IBM i FTP server has been configured to allow SSL and you’d like the additional security of using an encrypted connection.

The Main IDE Display

Once you’ve connected, you’ll be presented with a screen that shows the “Toolbox” on the left and a welcome screen containing getting started information and developer news, as shown in the screenshot below.

Any of the panels in ILEditor, including these two, can be dragged to different places on the display or closed by clicking the “X” button in the corner of the panel. There is also an icon of a pin that you can click to toggle whether a panel is always open or whether it is hidden when you’re not using it. If you look carefully on the right edge of the window, you’ll see a bar titled “Outline View”. This is an example of a hidden panel. If you click on the panel title, the panel will open. If you click the pin, it will stay open. You can adjust the size of any panel by dragging its border.

When you open source code, it will be placed in tabs in the center of the display (just as the welcome screen is initially.) These can also be resized or moved with the mouse. This makes the UI very flexible and simple to rearrange to best fit your needs.

The Toolbox

Perhaps the best place to start is with the toolbox. Here’s what that panel looks like:

Most of the options in this panel are self-explanatory. I will not explain them all but will point out a few interesting things that I discovered when using ILEditor:

  • The “Library List” is primarily used when compiling a program. This is the library list to find file definitions and other dependencies that your program will need.
  • The “Compile Settings” lets you customize your compile commands. Perhaps you have a custom command you use when compiling. Or perhaps you use the regular IBM commands but want to change some of the options used. In either case, you’ll want to look at the Compile Settings.
  • As you might expect, “Connection Settings” has the host name, whether to use FTPES and other settings that are needed to connect to the host. In addition to that, there are some other useful options hidden away in the connection settings:
    • On the IFS tab, you’ll find a place to configure where your IFS source code is stored and which library it should be compiled into.
    • On the Editor tab, there is a setting to enable the “Outline View”. You’ll want to make sure this is checked, otherwise you’ll be missing out on this feature.
    • On the ILEditor tab, there’s a setting called “Use Dark Mode”. This will change the colors when it displays your source code to use a black background (as opposed to the default white background), which many people, myself included, find easier on the eyes.
  • When you change something in the “Connection Settings” (including the options described above), you will need to disconnect from the server and reconnect so that the new settings take effect.

Opening Source Code from a Member List

ILEditor allows you to open source code from either an IFS file or a traditional source member. You can use the Member Browser or IFS Browser options in the toolbox to browse your IBM i and find and open the source you want.

The Member Browser opens as a blank panel with two text fields at the top. At first, I wasn’t sure exactly what these were for, as there wasn’t any explanation. I guessed that this was where you specify the library (on the left) and the source physical file (on the right) that you want to browse. It turned out that I was correct. If you type the library and file name and click the magnifying glass, it will show you all the members in that file.

I have a lot of source members that I keep in my personal library, and I often get impatient waiting for the member list to load in RDi. I was pleasantly surprised to see that the member browser in ILEditor loads considerably faster.

There is also a “hidden” feature where you can press Ctrl-P to search the list of members you recently listed in the member browser. Just press Ctrl-P and start typing, and it’ll show the members that match the search string. This is a very convenient way to find members.

Once you’ve found the member (in either the regular member browser or the “search recent” dialog), you can double-click on the member name to open it.

Create or Open a Member Without Browsing

In the upper-left of the ILEditor window, there is a File menu that works like the file menus found in most other Windows programs. You can click File/New to create a new member or IFS file or File/Open to open an existing member or IFS file when you know the name and therefore don’t need to browse for it.

The File Menu also offers keyboard shortcuts to save time. You can press Ctrl-O for Open, or Ctrl-N for New to bypass the menu.

One thing that I found a little unusual is that you must specify the source type when you open an existing member. I expected this when creating a new member, since the system doesn’t know what it is. But when opening an existing member, I expected it to default to the source type of the member so that you don’t have to specify it every time. I discovered that if you do not specify the type, it will default to plain text. I spoke to Liam about this, and he assured me that this is something he plans to improve in the future. Thankfully, this is not the case when using the member browser. It only happens when opening the member directly.

Working with IFS Files

The IFS Browser can be used to browse the IFS on your IBM i and find the source code that you’d like to open. It will begin browsing the IFS in the directory that you’ve specified in the IFS tab in your connection settings. Any subdirectories found beneath that starting directory can be expanded as well to see the files inside of it.

Like the member browser, double-clicking on an IFS file will open it in the editor.

The File menu also has options for creating a new IFS file or opening an existing IFS file when you know the exact path name. In that case, you do have to type the entire IFS path. There is no option to browse folders as you’d find in the open dialogs of other Windows software. That didn’t seem like a problem to me. If I wanted to see the folders, I’d use the IFS browser instead.

The Source Editor

I found the editor to be very intuitive, since it works the same as you’d expect from a PC file editor. It provides syntax highlighting and an outline view that make the source code very easy to read. In the screenshot below, I’m using “dark mode”, so you’ll see that my source code has a black background.

Syntax highlighting worked very nicely in free format RPG, CL and C/C++ code, including code that used the embedded SQL preprocessor.

Unfortunately, it did not work in fixed format RPG code. Liam tells me that fixed format RPG is especially difficult to implement because he codes ILEditor’s syntax highlighting using regular expressions, and regular expressions are difficult to make work for position-dependent source. However, he assured me that he does plan to support fixed format RPG code and is working on solving this problem.

I noticed that I could still type fixed format code and make changes to it, and aside from the source not being colored correctly, it worked fine.

The Outline View was a pleasant surprise, because I wasn’t really expecting an editor other than RDi to have one. It does not have as many features as the RDi outline view, but it worked very nicely for what I needed it for. I was also pleasantly surprised that the Outline View worked with CL code.

Compiling Programs

The compile option can be run by using the Compile menu at the top of the screen, the compile icon (shown in the picture below) or by pressing Ctrl-Shift-C.

I discovered that the compile option does not ask for any parameters. Instead, it uses the options that you specified in your connection and compile settings options in the toolbar. So if you want to change one of the default compiler options, you need to change them in the compile settings each time.

There are advantages and disadvantages to this approach. The advantage is that it’s very quick and easy to compile a program. When you’re developing software, you often have to compile it many times, and it’s very nice to be able to skip the dialog and just have it compile. The disadvantage is when you want to do something different in a one-off situation. You have to go into the compile settings to change it, so that’s a little bit of extra work. However, I find that I don’t need to do that very often, so this wasn’t a big deal to me.

When an error occurs during the compile, an error listing will open showing you what went wrong, very similar to what you’d find in RDi. Like RDi, you can click on the error and it will position the editor to the exact line of code where the error was found.

One thing that surprised me about the compile and the error message dialog was that it is considerably faster than RDi. That seems strange to me, since both tools are connecting to the IBM i and running the same IBM compiler for RPG. However, I found that depending on the size of the member, the ILEditor compile was 10-20 seconds faster than the RDi one.

RPG Fixed Format to Free Format Converter

One feature of ILEditor that simply did not work well was the RPG converter. Some of the fixed format code in my program would convert, but other things (including things that should’ve converted easily) did not. Code that spanned multiple lines did not convert at all.

In my opinion, the converter needs a lot of work before it will be useful. I pointed this out to Liam, and he told me that he agrees and has a complete rewrite of the converter on his to-do list.

Other Features

I’d like to mention some of the other features of ILEditor that I did not have time to try out before writing this article. Since I didn’t have time, I can’t review them and give my opinion – but, I wanted to mention them. That way, if you’re looking for these features, you can give them a try yourself and see what you think.

  • Source Diff = compares two sources (members or IFS files) and highlights what is different about them.
  • Spooled File Viewer = Lets you view spooled files that are in an output queue
  • SQL Generator = Generates SQL DDL code from an existing database object
  • Offline mode = lets you download source from the IBM i to store on your PC and work on it while you are not connected (for example, when traveling on a plane or train without good internet access), uploading the results later.

My Conclusion

I was extremely impressed by ILEditor. RDi has more features, such as debugging, refactoring and screen/report design, but I was surprised at just how many features ILEditor has, considering it was written by one man in his free time and costs nothing. I was also pleasantly surprised by the performance of ILEditor, which was consistently faster than RDi while using far less memory.

Unfortunately, the lack of syntax highlighting for fixed format RPG will be a problem for many RPG developers, and I sincerely hope that does not discourage them from at least trying ILEditor.

If a lot of people try it, and some of them donate money or give their time to help with development, this tool could easily become a serious competitor to RDi.