Integrating PowerVC in an IBM i Shop

By Dana Boehler

The speed of business has never been faster. Product release cycles have shrunk to timelines that would have been inconceivable in the past. Some fashion retailers now release new products every two weeks, a pace that historically happened only four to eight times a year, and certain retailers even have product available immediately after it appears on the runway.

The demand for immediate insight into the state of sales numbers, ad campaigns, and other business functions has made the continuous aggregation of data commonplace. And if those factors weren’t pressure enough, the threat of ever-evolving security hazards is generating mountains of updates, code changes, and configuration adjustments — all of which need to be properly vetted before entering a production environment.

All of this activity needs to run on infrastructure that administrators like ourselves must manage, often with fewer coworkers to assist. Thankfully, for those of us running IBM i on IBM Power Systems, IBM has provided a robust cloud management tool that allows us to quickly spin up and spin down systems: PowerVC.

PowerVC allows users to manage existing IBM Power System partitions, create images from those partitions, and deploy new partitions based on those images. More recent versions of PowerVC support IBM i management and deployment (earlier versions did not).
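Because PowerVC is built on OpenStack, its management functions are also reachable programmatically through OpenStack-compatible APIs. The sketch below is a minimal example using Python and the openstacksdk library to connect and list the partitions and images PowerVC knows about; the host name, project, and credentials are placeholders for your own environment, and the exact options available can vary by PowerVC and SDK version.

```python
# A minimal sketch, assuming PowerVC's OpenStack-compatible endpoint is
# reachable at the placeholder URL below and that openstacksdk is installed.
import openstack

conn = openstack.connect(
    auth_url="https://powervc.example.com:5000/v3",  # placeholder PowerVC host
    project_name="ibm-default",                      # placeholder project
    username="pvcadmin",                             # placeholder credentials
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Partitions (OpenStack "servers") that PowerVC is already managing
for server in conn.compute.servers():
    print(f"{server.name}: {server.status}")

# Captured images available as deployment sources
for image in conn.image.images():
    print(f"{image.name}: {image.id}")
```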

Over the past year, I have been using PowerVC to greatly reduce the amount of time it takes to bring a system into the environment. Typically, creating a new system would take several hours of hands-on keyboard work spread across a few days of hurry-up-and-wait time. The first time I deployed a partition from PowerVC, however, I was able to reduce that to about an hour, and after more refinement of my deployment, images, and process, I am now down to under 25 minutes. That's 25 minutes to have a fully deployed, PTF'd system up and running.

The full implications of this may not be readily apparent. Obviously, net new systems can be deployed much more quickly. More importantly, though, new modes of development become easier to support. PowerVC supports self-service system provisioning, which lets teams create their own systems for development, test, and QA purposes and tear them down when they are no longer needed. Because these systems are focused on the task at hand, they do not need the resources a fully utilized environment would require.

There’s more: Templates can be created in PowerVC to give the self-service users different CPU and memory configurations, and additional disk volumes can be requested as well. Post-provisioning scripts are supported for making configuration changes after a deployed system is created. In our environment, we are taking this a step further by integrating PowerVC with Red Hat’s Ansible automation software, which has given us greater flexibility in pre- and post-provisioning task automation.
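As a rough illustration of how templates and extra disk requests surface through the same OpenStack-style interface, the sketch below lists compute templates (exposed as OpenStack flavors) and requests an additional data volume. It assumes a "powervc" entry in clouds.yaml holding the connection details from the earlier sketch; the names and sizes are placeholders.

```python
# A minimal sketch, assuming a "powervc" cloud entry exists in clouds.yaml.
import openstack

conn = openstack.connect(cloud="powervc")

# Compute templates defined in PowerVC appear as OpenStack flavors
for flavor in conn.compute.flavors():
    print(f"{flavor.name}: {flavor.vcpus} vCPU, {flavor.ram} MB memory")

# Request an additional 100 GB data volume for a deployed partition
volume = conn.block_storage.create_volume(name="dev01-data", size=100)
print(f"Created volume {volume.id} ({volume.status})")
```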

In practice, using PowerVC removes many of the barriers to efficient development inherent in traditional system deployment models and permits continuous deployment strategies. Using PowerVC, a developer tasked with fixing a piece of code can spin up a clean test partition with the application and datasets already installed, create the new code fix, spin up a QA environment that has all the scripted tests available for testing the code, and then promote the code to production and delete the partitions that were used for development and testing.
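Sketched below is what that spin-up/tear-down cycle can look like in script form, again through the OpenStack-compatible APIs that PowerVC exposes. The image, template, and network names are placeholders, and the clouds.yaml entry is assumed as before.

```python
# A minimal sketch of an ephemeral test partition: deploy, use, delete.
# Assumes a "powervc" cloud entry in clouds.yaml; all names are placeholders.
import openstack

conn = openstack.connect(cloud="powervc")

image = conn.compute.find_image("ibmi74_base_ptfd")   # captured IBM i image
flavor = conn.compute.find_flavor("small_dev")        # compute template
network = conn.network.find_network("dev_vlan")       # deploy-time network

server = conn.compute.create_server(
    name="devtest01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)         # block until ACTIVE
print(f"{server.name} is {server.status}")

# ... run the development or QA work against devtest01 ...

# Tear the partition down once the code is promoted
conn.compute.delete_server(server)
```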

You do have to make some changes to the environment to support this model. Code needs to be stored in a repository so it can be kept in sync across all the systems involved. The use of VIOS is also required. Note, too, that in this type of environment the administrator's role centers more on image and snapshot maintenance (used for deployment templates) and automation scripting than on provisioning and maintaining individual systems.

For full information on the product and its installation, I recommend visiting the IBM Knowledge Center.

Guest Blogger

Dana Boehler is a Systems Engineer and Security Analyst at Rocket Software, specializing in IBM i.

Use Single Sign-on for Better Password Compliance

Most employees hate changing their passwords. There are too many requirements, the passwords have to be changed too often, or there are simply too many systems, each requiring a unique password, to remember them all. Workarounds range from password-saving cookies to barely modified passwords. But whenever your co-workers follow the letter of the password law instead of its spirit, the result is little in the way of either security or goodwill.

How Can You Achieve Better Security through Password Compliance?

Most people know that passwords improve security, and they also know that more complex passwords are better. But they don't know the specifics of why passwords are so important. Instead of trying to force company-wide behavioral change through new rules and system setups, give people a reason. Even one example or horror story of a corporate data leak is enough, though a general overview of how passwords work is good, too. As a rule, people are more likely to adopt a new policy if there's a reason behind it.

But even a reason might not be persuasive enough for full compliance. Instead, meet employees in the middle with more convenience by implementing single sign-on. It addresses one of the most common complaints about password resets (too many passwords to juggle), and it also curbs people's tendency to create nearly identical passwords just so they're easier to remember.

Single sign-on provides all the security of multiple logins, especially if you link your SaaS applications and databases through an intranet you control, and it can offer even more if you tie the time-out rules together. You can also use the intranet to keep data that flows between programs entirely contained in your systems, without downloads or copied files.

The more you can positively encourage good security practices, the more likely people are to adopt them.

Learn more about single sign-on at the 2018 Fall Conference & Expo. Check out this session from Thom Haze.

Minimize Employee Use of Local Storage

Saving files in local folders, or even on the desktop, is the easy option. Whenever you open a new file or download an attachment, it lands in a local 'Downloads' folder by default, and edited files try to save themselves to 'My Documents.' But relying on local storage on individual devices can slow down your business.

Why Should You Reduce Local (Device-based) Storage?

Central or cloud-based storage is beneficial for multiple reasons: easy security, universal access, and consistent backups are just a few, and the inverse is true of local storage.

Only the Employee and the System Administrator Have Access

Locally stored files are easy for an employee to save and open, but only for that specific employee. No one else has easy access, including managers or co-workers involved in the project; only a network administrator with remote access to the drive can reach the files. Not only is this inconvenient if the employee is out of the office that day, it also provides no protection against long-term loss of access. If the employee leaves the company and the drive is wiped (or the employee was using a personal device), any progress is lost. A hard drive malfunction can likewise wipe out files that have no backup and cannot be repaired.

There Is No Version Control

If you've recently emailed a large group of people, the conversation probably split into a couple of different email threads. That can be tricky to get back on track, and it usually ends with someone missing information they need. This is even more true of in-progress documents. If one employee is making updates to a local file, other parties can't see the changes until the file is manually shared. If two employees are making separate changes, some work will be irreparably lost, or there will be confusion and frustration down the line. But if files are stored in shared, centrally hosted software, where changes are made live and saved continuously (especially if edits are attributed to their authors), there's more collaboration and less overwriting or wasted effort.


3 Steps You Can Take to Improve Your IBM i's Security and Ease of Administration

By Dana Boehler

Securing an expansive platform like IBM i can be an intimidating task, one that often falls to a systems administrator when more specialized help is not available in-house. Deciding which tasks and projects will add value while reducing administrative overhead is also difficult. In this article I have chosen three things you can do in your environment to get started, presented in ascending order of time and effort.

1. Run the ANZDFTPWD Command

Run the ANZDFTPWD command – This command checks the profiles on your system for passwords that are the same as the user profile name and outputs the list to a spooled file. Even on systems with well-controlled *SECADM privileges (the special authority that allows a user to create and administer user profiles), you will find user profiles that have either been created with, or reset to, a password that matches the profile name, which could give an unauthorized user a way to gain access to system resources. Additionally, the command has options to disable or expire any user profiles found to have default passwords, if desired.
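If you would rather drive the check from a workstation than from a 5250 session, the sketch below runs the command remotely through the QSYS2.QCMDEXC SQL procedure over ODBC. The host, driver name, and credentials are placeholders, and the profile used needs sufficient authority (*ALLOBJ and *SECADM) to run ANZDFTPWD.

```python
# A minimal sketch: run ANZDFTPWD remotely via the QSYS2.QCMDEXC SQL procedure.
# Host, driver, and credentials are placeholders for your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};SYSTEM=myibmi;UID=secadmin;PWD=secret"
)
conn.autocommit = True
cur = conn.cursor()

# Report-only run: profiles with default passwords are listed in a spooled file
cur.execute("CALL QSYS2.QCMDEXC('ANZDFTPWD ACTION(*NONE)')")

# To act on what is found, the command also accepts ACTION(*PWDEXP) to expire
# the passwords or ACTION(*DISABLE) to disable the offending profiles, e.g.:
# cur.execute("CALL QSYS2.QCMDEXC('ANZDFTPWD ACTION(*DISABLE)')")
```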

2. Use SQL to Query Security Information from Library QSYS2

In recent updates to the supported IBM i OS versions, IBM made a very powerful set of tools available for querying live system and security data using SQL statements. These let users with the appropriate authority create very specific reports on user profiles, group profiles, system values, audit journal data, authorization lists, PTF information, and many other useful data points. These objects in QSYS2 are views that access the underlying information directly, so the data is current every time a statement is run. One of the best things about creating output this way is that there is no need to build an outfile to query against or to refresh it before re-querying. A detailed list of the information available, and the PTF and OS levels required to use these tools, can be found here.
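As one example of what these services make possible, the sketch below queries the QSYS2.USER_INFO view over ODBC for enabled profiles holding *ALLOBJ special authority. The connection details are placeholders, and the view and column names should be verified against the OS and PTF level you are running.

```python
# A minimal sketch: query live security data from the QSYS2.USER_INFO view.
# Connection details are placeholders; column names follow IBM's documented
# view but should be checked against your OS/PTF level.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};SYSTEM=myibmi;UID=auditor;PWD=secret"
)
cur = conn.cursor()

cur.execute(
    """
    SELECT AUTHORIZATION_NAME, STATUS, SPECIAL_AUTHORITIES
    FROM QSYS2.USER_INFO
    WHERE STATUS = '*ENABLED'
      AND SPECIAL_AUTHORITIES LIKE '%*ALLOBJ%'
    ORDER BY AUTHORIZATION_NAME
    """
)
for name, status, authorities in cur.fetchall():
    print(name, status, authorities)
```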

3. Implement a Role-based Security Scheme

The saying used to be that the IBM i OS "is very secure," but that statement has changed to the more accurate "is very securable." The change in language reflects the reality that these systems are quite open to the world as shipped, but can be among the most secure when deployed with security in mind. For those who are not familiar with role-based authority on IBM i, it is essentially a way of restricting access to system resources using authorities derived from group profiles. Group profiles are created for functions within the organization, and authorities are assigned to those group profiles. When a user profile is created, it is configured with no direct access to objects on the system; instead, group profiles are added to grant the access its job functions require.

Although implementing role-based security may seem like a daunting task, it pays huge dividends in ease of administration once the project is in place. For one thing, it allows the administrator to quickly change security settings for whole groups of users at once when needed, instead of touching each user's profile. It also allows group profiles, rather than individual user profiles, to own objects, which makes it much easier to remove users who create large numbers of objects or objects that are constantly locked. And because authority comes from group profiles, you are far less likely to inadvertently grant a user too much or too little authority when copying a similar user's profile.
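To make the mechanics concrete, here is a small sketch of what standing up one role might look like, driven through QSYS2.QCMDEXC over ODBC. The group profile, library, object, and user names (GRPPAYROLL, PAYLIB/PAYDATA, JSMITH) are purely illustrative placeholders, as are the connection details; adjust the authorities to match your own roles.

```python
# A minimal sketch: build a role as a group profile and attach a user to it,
# using CL commands run through the QSYS2.QCMDEXC SQL procedure over ODBC.
# All profile, library, and object names below are illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};SYSTEM=myibmi;UID=secadmin;PWD=secret"
)
conn.autocommit = True
cur = conn.cursor()

def run_cl(command: str) -> None:
    """Run a CL command string through the QSYS2.QCMDEXC SQL procedure."""
    cur.execute("CALL QSYS2.QCMDEXC(?)", command)

# 1. A group profile for the role; PASSWORD(*NONE) so nobody signs on as it
run_cl("CRTUSRPRF USRPRF(GRPPAYROLL) PASSWORD(*NONE) TEXT('Payroll role')")

# 2. Grant the role (not individual users) authority to the application objects
run_cl("GRTOBJAUT OBJ(PAYLIB/PAYDATA) OBJTYPE(*FILE) USER(GRPPAYROLL) AUT(*CHANGE)")

# 3. Attach a user to the role; OWNER(*GRPPRF) makes the group own the objects
#    this user creates, which simplifies cleanup when the user leaves
run_cl("CHGUSRPRF USRPRF(JSMITH) GRPPRF(GRPPAYROLL) OWNER(*GRPPRF)")
```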

These are just a few of the things you can do to get started securing your IBM i. In future posts, I intend to delve into more depth, especially regarding role-based security.

Guest Blogger

Dana Boehler is a Senior Systems Engineer at Rocket Software.