Cognitive Computing Overview

The field of cognitive computing continues to grow rapidly, best represented by IBM’s Watson platform. Cognitive computing combines elements of artificial intelligence (AI), machine learning and signal processing, and it generally requires a large amount of processing power and software with learning capabilities. There are several important considerations to keep in mind about this technology.


Cognitive computing is used for a variety of purposes. One use that is spreading widely is natural language chatbots and virtual assistants. These learn natural ways to respond and find the best possible answer when humans ask them questions. Another important use is cyber threat detection: AI software learns rapidly evolving hacking techniques and proactively takes steps to defend the network without any manual intervention. There are many other applications in medicine, finance, law and other fields.

New Solutions and Products

Cognitive solutions continue to evolve and transform. Companies like IBM, Amazon, Apple, HP, Google, Microsoft and others are creating powerful new features. The most common use is natural language processing, which has evolved into products like Amazon’s Echo, Microsoft’s Cortana, Apple’s Siri, Google’s Home and others. These devices and their software draw on a deep database that is constantly evolving based on input from tens of millions of users. Over time, they become more sophisticated and useful at answering questions, providing services and organizing information.

What’s Next

This technology will continue to make significant advances. Right now, scientists are using cognitive computing to try to predict groundbreaking medical formulations, which will hopefully radically reduce the time and money required to create a new drug to cure disease. Physicists are also using this technology to compress the huge amounts of data gathered from observing the universe. They hope the AI can yield insights into the origins of galaxies, the composition of dark matter and many other questions.

The Internet of Things and New Information Sources


The Internet of Things will only become more relevant in the near future. It is a phenomenon that is reshaping both what information is gathered and how that information moves.

Moving Beyond Traditional Information Networks

People are accustomed to the idea that they must look for information in specific places, and that information is found only within databases or through the Internet.

Many users will only associate a small number of objects and services with information collection and retrieval. In the era of the Internet of Things, this will be very different.

Broad Information Gathering

Today, seemingly random objects are equipped with the sorts of embedded sensors that can make them devices capable of gathering information. Of course, what’s really bringing the Internet of Things to life is the fact that all of these physical devices can share information through conventional wireless networks.

Micro-cameras, lights, pacemakers, thermostats, billboards, and appliances are now capable of being part of the Internet of Things. If these devices have the appropriate sensors, they can share all of the information that they have absorbed partly by being in the right location at the right time.

Volume of Information

Since so many objects are part of the Internet of Things, data gathering is possible on a level that many experts could never have imagined. Objects with their own sensors can more or less gather information independently and with a consistency that would be impractical otherwise.

This means that experts will ultimately have much more information to analyze and to understand. The fact that it is now possible for them to gather data from a distance and from multiple sources truly makes all the difference in the world.

RPG and YAJL, as Well as DATA-INTO News

This month, I’ll give you a quick update about RPG’s new DATA-INTO opcode and then focus on the basics of how you can host your own custom REST web service API written in RPG with the help of the standard IBM HTTP Server (powered by Apache) and the YAJL JSON library.


IBM has posted documentation online for the new DATA-INTO opcode and promises that PTFs will be available on March 19, 2018. That means that by the time this blog entry goes live, it’ll be available for you to install and try.

Unfortunately, I’m writing this before the PTFs are available, so I haven’t been able to try it yet. I hope to do so in time for next month’s blog.

Providing a Web Service with RPG and YAJL

I’ve been creating quite a few JSON-based REST APIs using the YAJL toolkit lately, and with DATA-INTO on the horizon, I thought it’d be a good time to give you an example. If you’re not familiar with YAJL, it is a free tool for reading and writing JSON documents. Since YAJL is written in C, I was able to compile it as a native ILE C service program and create an RPG front-end for it that makes it both easy to use and extremely efficient.

Of course, IBM provides us with the wonderful Integrated Web Services (IWS) tool that can do both REST and SOAP APIs using both JSON and XML. The advantage that IWS has over YAJL is that IWS does most of the work for you, simplifying things tremendously. On the other hand, YAJL is faster and lets you deal with much more complex documents. For the work that I do, I usually find YAJL works better.


Before you can use the techniques that I describe in this post, you’ll need to have the following items installed:

  • IBM HTTP Server (powered by Apache)
    • This is available on the OS installation media as licensed program option 57xx-DG1 – you’ll want to see the IBM Knowledge Center for documentation about how to install it and its prerequisites
  • YAJL (Yet Another JSON Library)
  • ILE RPG compiler at version 6.1 or newer

Initial Setup

I recommend that you create a new library in which to put your programs that implement REST APIs. It’s certainly possible to put them in an existing library, but using a separate one helps with security. If you set up the HTTP server to only access the new library, you won’t need to worry about someone trying to use your server to run programs that they shouldn’t.

In my example, I will create a library named YAJLREST.


You’ll also need to set up the IBM HTTP server to point to your new library. This is best done using the HTTP Administration option in the IBM Navigator for i.

  1. If it’s not already started, start the HTTP administration server by typing: STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
  2. Open IBM Navigator for i in your web browser at http://your-system:2001
  3. Sign in with a user id/password that has authority to change the configuration
  4. Expand: IBM i Management (if not already expanded)
  5. Click “Internet Configurations”
  6. Click “IBM Web Administration for i”
  7. Sign in again (sigh – I hope IBM changes that so signing in twice isn’t needed)
  8. Click on the “HTTP Servers” tab
  9. Click “Create HTTP Server”
    • NOTE: do not use Create Web Services server, that is for IWS only
  10. Give your server a name and a description
    • The name can be any valid IBM i object name – I’ll use YAJLSERVER
  11. Click Next
  12. Accept the default server root by clicking “Next”
  13. Accept the default document root by clicking “Next”
  14. Keep the default of “All IP Addresses”, but change the port number to one that you’re not using for anything else
    • In my example, I will use 12345
  15. Click “Next”
  16. For the “access log”, I typically turn it off unless I want to audit the use of my server
    • To do this, click “no” and then “Next”
  17. When it asks how long to keep the logs (meaning the error logs, since we just turned off the access logs) I take the default of 7 days
  18. Click “Next”.
  19. It will then show a summary screen – Click “Finish”

You have now created a basic HTTP server using the IBM wizard, and the browser will take you to the configuration page for your new server. Click “Edit Configuration File” (on the left, towards the bottom) to make changes to the configuration.

The basic HTTP server provides access to download HTML, images, etc. from the IFS. Since that won’t be needed in a web service, I recommend deleting the following section of configuration (simply highlight the section and press the delete key):

<Directory /www/yajlrest/htdocs>
   Require all granted
</Directory>

To enable access to the web services in your library, add the following configuration directives to the end of the file:

ScriptAliasMatch /rest/([a-z0-9]+)/.* /qsys.lib/yajlrest.lib/$1.pgm
SetEnv QIBM_CGI_LIBRARY_LIST "YAJL;MYDATA;MYSRV;YAJLREST"
<Directory /qsys.lib/yajlrest.lib>
   Require all granted
</Directory>

The ScriptAliasMatch maps a URL to a program call. In this case, a URL beginning with /rest/ will call a program in the YAJLREST library. The part in parentheses says that the program name must be made from the letters a-z or the digits 0-9, and must be at least one character long. This is a regular expression and can be replaced with a different one if you have particular needs for the program name. In any case, the part of the URL that matches the part in parentheses will be used to replace the $1 in the program name. This way, you can call any program in the YAJLREST library. The directive “Require all granted” allows everyone access to the YAJLREST library and is appropriate for an API that is available to the public.
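If regular expressions are new to you, this small Python sketch (my own illustration, not code from the HTTP server) shows how the pattern in the ScriptAliasMatch directive picks the program name out of a URL:

```python
import re

# The same pattern used in the ScriptAliasMatch directive above.
PATTERN = re.compile(r'/rest/([a-z0-9]+)/.*')

def url_to_program(path):
    """Return the *PGM path the request would resolve to, or None if no match."""
    m = PATTERN.match(path)
    if m is None:
        return None
    # $1 in the Apache directive corresponds to group(1) here.
    return '/qsys.lib/yajlrest.lib/{}.pgm'.format(m.group(1))
```

For example, url_to_program('/rest/custdetail/1500') maps to /qsys.lib/yajlrest.lib/custdetail.pgm, while a path outside /rest/ maps to nothing at all.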

The QIBM_CGI_LIBRARY_LIST environment variable controls the library list that your program will run with. In this case, it is adding the YAJL, MYDATA, MYSRV and YAJLREST libraries to the library list. MYDATA and MYSRV are meant as placeholders for your own libraries. YAJL is the default library that the YAJL tool will be installed into, and YAJLREST is where your program will be. Make sure you replace these with the appropriate libraries for your environment.

What if you don’t want your API to be available to the public? One way is to ask the IBM HTTP server to require a user id and password, as follows:

ScriptAliasMatch /rest/([a-z0-9]+)/.* /qsys.lib/yajlrest.lib/$1.pgm
<Directory /qsys.lib/yajlrest.lib>
   Require valid-user
   AuthType basic
   AuthName "REST APIs"
   PasswdFile %%SYSTEM%%
   UserID %%CLIENT%%
</Directory>

Some of these options are the same as the previous example. What’s different is that it no longer grants access to everyone but instead requires a valid user. Which users are valid is based on the system’s password file (that is, your user profiles). When your RPG program runs, it will run under the user id supplied by the client (the user id and password that the server checked against the user profiles). The phrase “REST APIs” will be shown in the sign-on prompt to tell the user what he or she is signing on to.

If you use this sign-on option, I recommend also setting up SSL to prevent people’s passwords from being sent over the Internet in plain text.

There are many variations on the way users are authorized and who is allowed to access your setup. This is just one common example. To learn more about the different options, and to learn how to configure SSL, I recommend reading more in the IBM Knowledge Center. If you get really stuck, e-mail me at

Once your configuration is set up, click the “start” button to start the server. The start button is near the top of the screen, colored green and looks similar to the play button you’d find on a tape or CD player. You now have an HTTP server that will launch your RPG programs. The next step is to write the web service itself using RPG and YAJL.

The Basics of REST

When I design a REST web service (also known as REST API), I try to think of the URL as being a unique identifier for the business object that I’m working with. For example, when working with an invoice, the URL might represent a specific invoice by ending with the invoice number. Or, likewise, when working with customer information, the URL might represent a unique customer by ending with their customer number.

Once I’ve decided upon what the URL identifies, I think of what types of things I might want to do with the data. The REST paradigm assumes that you will use some (or all) of the following HTTP methods:

GET = Used to retrieve data identified by the URL

PUT = Used to set data identified by the URL in an idempotent way

POST = Used to set data identified by the URL in a non-idempotent way

DELETE = Remove the data identified by the URL

The term “idempotent” means that multiple calls will result in the same thing. For example, setting X=1 is idempotent: I can set X=1 once or 10 times, and the result will still be 1. On the other hand, if X starts at 0 and I coded X=X+1, running it once would leave X at 1, but running it 10 times would leave X at 10. Since multiple calls do not result in the same value, X=X+1 would be considered non-idempotent.
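The distinction can be shown with a toy Python sketch (the names are mine, not from the article): set_x behaves like PUT, increment_x behaves like POST.

```python
def set_x(state):
    # Idempotent: the outcome is the same no matter how often it runs (like PUT).
    state['x'] = 1

def increment_x(state):
    # Non-idempotent: the outcome depends on how many times it ran (like POST).
    state['x'] = state.get('x', 0) + 1

put_like = {}
for _ in range(10):
    set_x(put_like)        # ten calls, x is still 1

post_like = {}
for _ in range(10):
    increment_x(post_like) # ten calls, x is now 10
```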

Once I’ve thought about which of these HTTP methods should be supported by my service, I decide how the data will be passed to it. I do that by figuring out a JSON format for the data that is sent to my API as input, as well as another JSON document that’s returned with the output.

Let’s take a look at an example:

The Customer Details Example

To create a simple example of a REST API that’s easy to understand, I will use customer details. The idea is that a URL will not only tell the HTTP server which program to call, but it will also identify a unique customer by its customer number. With that in mind, the URL will look like this:

http://our-system:12345/rest/custdetail/1500
Please note:

  • If using SSL, the “http:” would be replaced with “https:”
  • The 1500 at the end is an example of a customer number
  • When using features that don’t require an existing customer (such as adding a new one), the customer number should not be added
    • In that case the URL would be http://our-system:12345/rest/custdetail/
  • Since our URL contains “custdetail” after the /rest/ part of the URL, the ScriptAliasMatch we configured will effectively do the same thing as CALL PGM(YAJLREST/CUSTDETAIL)

In my example, I want the API to be able to retrieve a list of customer information, retrieve the details of a specific customer, update a customer, create a new customer and delete old customers. For that reason, I will use the following HTTP methods:

  • GET = If the URL represents a unique customer, GET will return the details for that customer
    • If it does not reference a specific customer, it will instead return a full list of the customers available
  • PUT = Set customer information in an idempotent way
    • This will be used with an existing customer to set details such as its address – it cannot be used to add new customers since that would be non-idempotent.
  • POST = Add a new customer
    • This creates a new customer – since multiple calls to the URL would result in multiple customer records being created, this is non-idempotent.
  • DELETE = Remove customer details

One easy way to understand these is to think of them the same way you would think of database operations. GET is like SELECT/FETCH/READ, PUT is like UPDATE, POST is like INSERT/WRITE and DELETE is like DELETE. This isn’t a perfect analogy since it’s possible for an update database operation to be non-idempotent, but aside from that detail, they are very similar.

Since we’ve now determined what the URL represents, and what the methods will do, the other important idea is to determine the format of the data that will be sent or received.  For this example, I chose this format:

{
   "success": true,
   "errorMsg": "Error message goes here when success=false",
   "data": {
      "custno": 496,
      "name": "Acme Foods",
      "address": {
         "street": "123 Main Street",
         "city": "Boca Raton",
         "state": "FL",
         "postal": "12345-6789"
      }
   }
}
When a GET operation is done with a URL that identifies a unique customer, the above JSON document will be returned with the customer details (or an error message if there was an error.)

When using GET to list all customers, the same format will be used except that the “data” element will be converted into an array so that multiple customers can be returned at once.

When using the POST or PUT method, this document will be sent from the caller to represent the new values that the customer details should be set to. The POST and PUT operations will also return a document in the same format to show the caller what the customer details look like after the changes have been made.

When using the DELETE method, the row will be deleted. However, we will still use this document to return what the customer data was before the DELETE and also as a way to send errors, if any were found.
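To make the envelope concrete, here is a sketch of it in Python using the standard json module (make_response is my own helper name; the customer values are the illustrative ones from the example document):

```python
import json

def make_response(success, error_msg, data=None):
    """Build the success/errorMsg/data envelope described above."""
    return {'success': success, 'errorMsg': error_msg, 'data': data}

reply = make_response(True, '', {
    'custno': 496,
    'name': 'Acme Foods',
    'address': {
        'street': '123 Main Street',
        'city': 'Boca Raton',
        'state': 'FL',
        'postal': '12345-6789',
    },
})

# For the "list all customers" case, data becomes an array of the same shape.
listing = make_response(True, '', [reply['data']])

wire = json.dumps(reply)      # the text that would travel over HTTP
parsed = json.loads(wire)     # what the caller would see after parsing
```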

Coding the Example in RPG

Now that I’ve decided what the service will do, it’s time to code it!

The first thing my RPG program needs to know is whether GET, PUT, POST or DELETE was used. It can retrieve that by getting the REQUEST_METHOD environment variable. The IBM HTTP server will always set that variable to let us know which HTTP method was used. The RPG code to retrieve it looks like this:

   dcl-c UPPER const('ABCDEFGHIJKLMNOPQRSTUVWXYZ');
   dcl-c lower const('abcdefghijklmnopqrstuvwxyz');

   dcl-s env pointer;
   dcl-s method varchar(10);

   dcl-pr getenv pointer extproc(*dclcase);
      var pointer value options(*string);
   end-pr;

   env = getenv('REQUEST_METHOD');
   if env <> *null;
      method = %xlate(lower: UPPER: %str(env));
   endif;

The getenv() routine returns a pointer to a C-style string, so I use the %STR built-in function to convert it to an RPG varchar variable. I’m also using the %XLATE built-in function to convert from lowercase to uppercase to ensure that the data will always be uppercase when I use it later in the code.
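For readers more comfortable in other languages, the same idea in Python looks like this (the environment value is set inside the snippet only so it is self-contained; in a real CGI program the HTTP server sets it):

```python
import os

# Simulate what the IBM HTTP server would do before calling our program.
os.environ['REQUEST_METHOD'] = 'get'

# Fetch the variable and normalize to uppercase, as %XLATE does in the RPG code,
# defaulting to an empty string when the variable is absent.
method = os.environ.get('REQUEST_METHOD', '').upper()
```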

The next thing it will need is to get the customer number out of the URL (if any was supplied). The HTTP server provides the URL in an environment variable named REQUEST_URI that can be retrieved with the following code:

   dcl-s url varchar(1000);

   env = getenv('REQUEST_URI');
   if env <> *null;
      url = %xlate(UPPER: lower: %str(env));
   endif;

The result is a variable named “url” that I’ve converted to all lowercase. From that URL, I can retrieve the customer number by looking for it after the string “/rest/custdetail/”. Since I know “/rest/custdetail/” will always be in the URL (unless something went wrong), I can just scan for it and use that position to find the customer number. If there is no customer number, I’ll simply set my internal customer id to 0, and code later in the program will use that to determine whether a customer number was provided.

   dcl-c REQUIRED_PART const('/rest/custdetail/');
   dcl-s pos int(10);
   dcl-s custpart varchar(50);
   dcl-s custid packed(5: 0);

   monitor;
      pos = %scan(REQUIRED_PART: url) + %len(REQUIRED_PART);
      custpart = %subst(url: pos);
      custid = %int(custpart);
   on-error;
      custid = 0;
   endmon;
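The same extraction can be sketched in Python (customer_id is my own helper name): everything after /rest/custdetail/ is treated as the customer number, falling back to 0 when the number is missing or not numeric, as the article describes.

```python
REQUIRED_PART = '/rest/custdetail/'

def customer_id(url):
    """Return the customer number found after /rest/custdetail/, or 0."""
    pos = url.find(REQUIRED_PART)
    if pos < 0:
        return 0                       # required part missing entirely
    custpart = url[pos + len(REQUIRED_PART):]
    try:
        return int(custpart)
    except ValueError:
        return 0                       # empty or non-numeric trailing part
```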

Now that I know the method and customer number, I can choose what to do with them. The code to handle GET is different from the code to handle PUT, for example, so I will select which section of code to run using a SELECT/WHEN group.

   select;
   when method = 'GET' and custid = 0;
      // code to list all customers
   when method = 'GET';
      // code to retrieve one customer
   when method = 'PUT';
      // code to update a customer (idempotent)
   when method = 'POST';
      // code to write a new customer (non-idempotent)
   when method = 'DELETE';
      // code to delete a customer
   endsl;

I will not provide all the code for each of these options in the blog post, because most of it is just standard RPG code that reads and writes data in the database. If you want to see the program in its entirety, I will provide a link at the end of the article that you can use to download all of the source code.

Working with JSON Using YAJL

The other part of the RPG code that I’d like to explain in this article is the code that works with JSON data. Although the DATA-INTO opcode may make this simpler in the future, as I write this I haven’t had a chance to try it out yet. What I can show you is the “old fashioned” way of using YAJL to read and write JSON data in an RPG program.

Although there are other ways to read JSON, I recommend using the “tree” style approach. I’ve found this to be the easiest, and in my RPG tools for YAJL, I provide a routine called yajl_stdin_load_tree that gets the data from the HTTP server and loads it into a YAJL tree in one subprocedure call.

   dcl-s docNode like(yajl_val);
   dcl-s errMsg varchar(500);

   docNode = yajl_stdin_load_tree(*on: errMsg);
   if errMsg <> '';
      // JSON was invalid, return an error message
   endif;

Now the JSON data has been loaded into a “tree-like” structure inside YAJL. The docNode variable is a pointer to the “document node”, which is a fancy way of saying that it points to the { and } characters (the JSON object element) that surround the entire document. I can use the yajl_object_find procedure to locate a particular field inside that object and return a pointer to it. For example:

   dcl-s node like(yajl_val);

   node = yajl_object_find(docNode: 'errorMsg');
   if node = *null;
      cust.errorMsg = '';
   else;
      cust.errorMsg = yajl_get_string(node);
   endif;

In this example, I retrieve a pointer to the value of the “errorMsg” field that is inside the JSON object. If “errorMsg” was not found, the pointer will be set to *NULL, in which case I know there wasn’t an error message sent. If it was sent, I can use the yajl_get_string subprocedure to get an RPG character string from the pointer.

In this case, the errorMsg field will be a character string, but other JSON fields might be numeric or boolean. I can retrieve these by calling other YAJL routines, such as yajl_get_number, yajl_is_true, and yajl_is_false in place of the yajl_get_string in the preceding example.

When a subfield has been set to an object or array, there are several different options to process it. I’ve already shown an example of yajl_object_find to process a single field inside of an object, but in addition to that, there are the following routines:

  • yajl_object_loop = loops through all of the fields in an object, one at a time
  • yajl_array_loop = loops through all of the elements in an array, one at a time
  • yajl_array_elem = get a particular array element by its index

Since this article is already very long, I will not provide examples of these in the post. Instead I will refer you to the link at the end of the article where you can download the complete example.

Once you’ve read your JSON data into your RPG program, you’ll want to ask YAJL to free up the memory that it was using. Remember, YAJL loaded the entire document into a tree structure, and that is taking up storage on your system. It’s very easy to free up the storage by calling yajl_tree_free and passing the original document node.


YAJL will also be used to generate the JSON document that is sent back. For example, when you ask for the details of a customer, it uses YAJL to generate the response. The code in RPG looks like this:


   yajl_genOpen(*on);
   yajl_beginObj();

   yajl_addBool('success': cust.success);
   yajl_addChar('errorMsg': cust.errorMsg);

   if cust.success = *on;
      yajl_beginObj('data');

      yajl_addNum('custno': %char(cust.custno));
      yajl_addChar('name': %trim(;

      yajl_beginObj('address');
      yajl_addChar('street': %trim(cust.street));
      yajl_addChar('city':   %trim(;
      yajl_addChar('state':  %trim(cust.state));
      yajl_addChar('postal': %trim(cust.postal));
      yajl_endObj();

      yajl_endObj();
   endif;

   yajl_endObj();

   if cust.success;
      yajl_writeStdout(200: errMsg);
   else;
      yajl_writeStdout(500: errMsg);
   endif;

   yajl_genClose();


Prior to running this routine, the RPG program has loaded all the customer information into the “cust” data structure. The preceding routine uses that database data together with YAJL’s generator routines to create a JSON document.

The yajl_genOpen procedure starts the YAJL generator. It accepts one parameter, an indicator that determines whether the generated document is “pretty” or not. When the indicator is on, it will format the data with line feeds and tabs so that it’s easy for a human being to read. When the indicator is off, it will use the smallest number of bytes possible by removing line feeds, tabs and extra spacing so that the entire JSON document is one big line of text. The “not pretty” version is a little more efficient for the computer to process, but since it is harder to read, I typically pass *ON while testing my program and change it to *OFF for production use.
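If you’d like to see the effect of the pretty/compact choice without running the RPG program, Python’s json module offers a close analogue: indent=2 behaves like passing *ON, while tight separators behave like *OFF.

```python
import json

doc = {'success': True, 'data': {'custno': 496}}

pretty = json.dumps(doc, indent=2)                 # human-readable, with line feeds
compact = json.dumps(doc, separators=(',', ':'))   # one line, fewest bytes
```

Both strings parse back to exactly the same document; only the spacing differs.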

When the JSON document calls for a new object (identified by the { and } characters), you can create it by calling yajl_beginObj. This will output the { character as well as make any subsequent fields be subfields of the new object. When you’re done creating the object, you can use yajl_endObj to output the } character and end the object.

There is a similar set of routines named yajl_beginArray and yajl_endArray that create a JSON array (vs. an object) that I did not use in this example but are useful when an array is called for.

Adding a field to an object or array is done by calling the yajl_addNum, yajl_addChar and yajl_addBool routines for numeric, character and boolean JSON fields, respectively. These routines not only add the fields, but they take care of escaping any special characters for you so that they conform to the JSON standard.
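Python’s json module performs the same escaping, which makes it easy to see what the YAJL “add” routines are doing for you behind the scenes:

```python
import json

# A double quote inside the value is escaped automatically in the output,
# so the generated document remains valid JSON.
escaped = json.dumps({'name': 'Acme "Foods"'})
```

Parsing the escaped text gives back the original value, quotes and all.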

A really good way to understand the generator is to compare the expected JSON document (as I described in the section titled “The Customer Details Example” above) to the statements that generate it. You’ll see that each time a new object is needed, it calls yajl_beginObj, and each time a field is added to an object, it calls the appropriate “add” routine for the data type. These statements all correspond exactly to the JSON data that is output.

Once I’ve called the YAJL routines to add the data, I’ll have a JSON document, but it will be stored internally inside YAJL’s memory. To write it out to the HTTP server (which, in turn, will send it on to the program that is consuming our REST API), I use the yajl_writeStdout routine. This routine lets me provide an HTTP status code and an error message as a parameter. I recommend using a status of 200 to indicate that everything ran correctly or 500 to indicate that an error occurred.

Finally, once the JSON document has been sent, I call yajl_genClose to free up the memory that was used by the JSON document that YAJL generated.


YAJL is a powerful way to write JSON-based REST APIs. Although some of the examples may seem a little confusing at first, once you’ve successfully written your first one, you’ll get the hang of it quickly. Then it’ll be no problem to write additional ones.

You can learn more about YAJL and download the code samples that I mention in this post from my website at:

If you’ll be at COMMON’s PowerUp18 conference, be sure to check out my session, Providing Web Services on IBM i, where I will demonstrate these techniques as well as other ways to provide your own web services in RPG.

Transamerica – How One Company Is Capitalizing on Big Data Analytics

Companies that want continued growth understand that a key component of their success is the ability to personalize each customer’s experience rather than taking a one-size-fits-all approach. Not only is the personal approach ultimately more efficient for organizations, it also helps increase customer satisfaction, which in turn leads to more growth.

One company, Transamerica, decided to harness big data concepts in order to provide the best possible personal service to its 27 million customers. Transamerica pulled in data from over 40 sources, including social media, third-party data, customer voice response systems and all of its investment, retirement and insurance data sources. By using big data analytics and machine learning concepts, Transamerica was able to identify new data patterns and insights in order to quickly and efficiently develop, test and deploy its predictive models. Now building models takes hours instead of days, allowing for fewer processing cycles and thereby reduced infrastructure demands, while at the same time improving the personal experience for its customers.

So what are some of the tools that Transamerica used to accomplish its goals? Transamerica takes advantage of Hadoop, part of the open-source Apache project. By using a Hadoop-based data lake along with a host of other tools, Transamerica found it was able to store vast amounts of data in its distributed computing environment. Of course, security risks are always a factor when dealing with large stores of potentially personal data, so Transamerica’s entire platform was designed to maximize appropriate use of data and adherence to legal and regulatory obligations, while minimizing security risks.

Want to know more about how Transamerica and its customers win big with big data analytics? Click here.

Cloud Technologies and Handling Ransomware

Cloud computing is one of the best technologies to have in the workplace. Not only can you store your data quickly and efficiently, it’s also easier for you to access that data. With that said, when it comes to your business’s security, especially against the malicious tool known as ransomware, why are cloud services so important?

Cloud = Virtual Storage

One reason is that cloud computing allows you to store your data virtually over the Internet, which makes it much harder to touch in the event of a disaster. Let’s say a ransomware attack happened on your device and it affected the data on your hard drive. Despite this, none of your virtual data would be affected, especially since this isn’t what most hackers are banking on. However, since ransomware locks your computer, you wouldn’t be able to access any of your virtual files, right? As a matter of fact, you can. Cloud computing not only keeps your files safe in the event of a disaster, but your data is also accessible from any device with an Internet connection. Whether it’s another computer in the workplace or even your mobile phone, the sky’s the limit to where you can access your personal data.

For more information about cloud computing, COMMON offers educational opportunities throughout the year. Stay in touch to see when the next cloud-related sessions become available.

IT Education – Preparing for a Career in Data Science


Do you love mathematics? Do phrases like “data warehousing” and “v-lookups” bring out the inner nerd in you? If any of this sounds familiar, you might want to steer toward a career in Data Science.

According to Patrick Circelli, a senior recruiter for the IT recruiting firm Mondo, “Data Science is all about mathematics, so having that type of degree — mathematics, information science, computer science, etc. — is especially key for these roles. Hiring managers really love that.” Circelli goes on to describe the must-haves for anyone preparing a resume for a career as a data scientist. His list includes:

  1. A degree in Information Science, Computer Science or Mathematics
  2. Microsoft Excel, specifically the use of pivot tables and v-lookups, and knowledge of SQL queries and stored procedures
  3. Programming skills in any of the following languages: C++, Java, Python, R, or SAS
  4. Concepts such as predictive analysis, visualization and pattern recognition, i.e. understanding how data operates, and skills that could come from data visualization tools like Tableau
  5. NoSQL database environments like MongoDB, CouchDB or HBase
  6. Data warehousing

Although there are many areas in IT that require data science skills, thanks to relentless cyber attacks the growth rate in security data science specifically is booming at 26%, with the security analytics market set to reach $8 billion by the year 2023. Anyone who can create a resume covering Circelli’s recommendations, along with a desire to focus pointedly on data security to combat hackers and cyber attacks, can probably write their own ticket in the tech world for decades to come.

IBM Watson For Oncology Going Live in a U.S. Community Hospital

In a first for the U.S., IBM’s Watson For Oncology (WFO) is essentially joining the clinical staff at an American community hospital. After being trained at the Memorial Sloan Kettering Cancer Center, and being tested in hospitals in several parts of the world, Watson will assist doctors at the 327-bed Jupiter Medical Center of Jupiter, Florida, in developing personalized treatment plans for cancer patients.

Why Watson?

What Watson brings to the table is its ability to quickly sift through reams of data from medical journals, textbooks, and clinical trials in order to provide doctors with rankings of the most appropriate treatment options for a particular patient. Identifying the proper treatment regimen for cancer patients has always been difficult. Now, with rapid advancements in cancer research and clinical practice, the amount of data available to doctors is far outstripping their ability to keep up with current best practices.

WFO can lift much of what is essentially an information processing task off the shoulders of physicians. By combining information from the medical literature with the patient’s own records and physicians’ notes, Watson can provide a ranked list of personalized treatment options. And if patient records don’t provide all the information it needs for its analysis, Watson will even prompt the physician for more data.
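
Watson's internals are proprietary, but the core idea of scoring and ranking candidate options is easy to illustrate. The sketch below is a toy example only: the regimen names, evidence categories, and equal weights are invented for illustration and bear no relation to how WFO actually weighs evidence.

```python
# Toy illustration: rank hypothetical treatment options by a made-up
# score combining literature support and patient-specific fit
options = {
    "Regimen A": {"trial_support": 0.9, "patient_fit": 0.7},
    "Regimen B": {"trial_support": 0.6, "patient_fit": 0.9},
    "Regimen C": {"trial_support": 0.4, "patient_fit": 0.5},
}

def score(evidence):
    # Equal weighting of the two evidence sources (an arbitrary choice)
    return 0.5 * evidence["trial_support"] + 0.5 * evidence["patient_fit"]

# Highest-scoring option first, mirroring a ranked recommendation list
ranked = sorted(options, key=lambda name: score(options[name]), reverse=True)
print(ranked)  # ['Regimen A', 'Regimen B', 'Regimen C']
```

The real system's value lies in assembling those evidence scores from millions of pages of literature and patient records, which is exactly the information processing burden it lifts from physicians.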

Humans Still Required

Of course, WFO is not intended to replace or supersede human physicians in any way. Dr. Abraham Schwarzberg, chief of oncology at Jupiter, thinks of Watson as providing a “second opinion” in the examination room. Doctors can access Watson’s recommendations on a tablet device while the examination of the patient is in progress. “We want a tool that interacts with physicians on the front end as they are prospectively going into making decisions,” says Dr. Schwarzberg.


In a study of 638 breast cancer cases conducted at a hospital in Bengaluru, India, WFO’s treatment recommendations achieved an overall 90 percent rate of agreement with those of a human tumor board. Still, IBM acknowledges that it’s too early to claim that Watson will actually improve outcomes for cancer patients. But with the vastly improved ability to personalize treatment options for individual patients that Watson provides, there’s every reason for optimism. As Nancy Fabozzi, a health analyst at Frost & Sullivan puts it, “Watson for Oncology is fundamentally reshaping how oncologists derive insights that enable the best possible decision making and highest quality patient care.”

IT in Manufacturing: Are You Making Good Use of Your Data?

Recently, The Manufacturer came out with an article on what it means for the manufacturing industry to embrace the digital revolution.

One of the key points is that success depends in large part on how you make use of the data you collect. The article mentions a report showing that roughly 99% of business data gets disregarded or tossed out before any analysis can be performed.

IT in manufacturing should involve helping your company make optimal use of data.

The following are some of the benefits:

  • Important feedback on how your machinery and computer systems are operating – whether or not they’re:
    • Efficient and productive
    • Performing consistently within desired parameters
    • Free from signs of impending malfunctions or unauthorized activity
  • Insights into how you can streamline various operations and processes:
    • Saving you money
    • Connecting different parts of your company seamlessly
    • Reaching your customers in a timely and effective way
  • A stronger basis on which to plan for the future, including:
    • Anticipating what you’ll need down the road
    • Embracing new developments in technology
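
The monitoring benefits above can start very simply. As a minimal sketch (the sensor readings and the two-standard-deviation threshold are illustrative assumptions, not a recommended production rule), flagging measurements that stray far from a machine's normal operating range is a few lines of standard-library Python:

```python
from statistics import mean, stdev

# Hypothetical hourly temperature readings from one machine
readings = [71.2, 70.8, 71.5, 70.9, 71.1, 78.6, 71.0, 70.7]

# Flag readings more than 2 standard deviations from the mean,
# a crude early sign of drift or an impending malfunction
mu, sigma = mean(readings), stdev(readings)
outliers = [r for r in readings if abs(r - mu) > 2 * sigma]
print(outliers)  # [78.6]
```

A real plant would apply smarter rules and stream the results to a dashboard, but even this level of analysis beats discarding the data unexamined.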

When it comes to making good use of your data, consider the following:

  • The data you need to collect vs. what you can allow yourself to discard
  • How to collect the data accurately
  • Where to store the data securely and how to keep it organized and well-managed
  • The programs to use for analyzing the data, presenting it comprehensibly, and extracting important insights from it

Data is the foundation on which your manufacturing enterprise rests. Letting it slip by without any meaningful analyses puts you at a disadvantage relative to competitors and causes you to miss out on opportunities to refine your operations and further your business objectives.


How Predictive Analytics Can Help Businesses Secure Funding and Boost Sales

There are several key factors that contribute to business model success. When businesses are looking for investor financing or when professionals seek to salvage a failing business, they look to see if these factors are in place and working well together.

Once business owners understand what these factors are, they can strengthen any weak areas they find in their business.

Zbig Skiba discussed successful business modeling and the book, Business Model Generation, in a LinkedIn article.

The book outlined nine key elements of a good business model:

  • Knowledge of your customer segment, a “distinct target market”: people with similar desires and needs and the ability to pay for your product or service.
  • Your unique value proposition. What does your single product or service or bundle of products and services offer your customer segment?
  • Having efficient channels. Channels are the way you communicate with your customers. There are sales channels, delivery channels, follow-up channels etc.
  • Customer relationships. What is your experience and interaction with your clients? Are they satisfied with your business?
  • Revenue Streams, money coming in.
  • Key Activities, things you must do to conduct business every day. These might include manufacturing and design, etc.
  • Key Resources, your staff, financial resources, physical and intellectual property etc.
  • Key Partnerships, your vendor relationships and partnerships that make your business better, more affordable and more efficient.
  • Your Cost Structure, the money you have to spend to run your business, including payroll, loan payments, raw materials, delivery costs, licensing fees, rent, etc.

Many business owners and managers have yet to learn that comprehensive data analysis or predictive analytics can help strengthen weak areas in their business models.


  1. Predictive analytics organizes big data. This helps businesses clearly see what their target market needs, wants and can afford to pay for.
  2. Once business owners and managers understand their target market at this level, their value proposition becomes clearer.
  3. Businesses can then start using communication and delivery channels that are convenient to their best prospects and therefore the most effective.
  4. Customers feel heard when businesses meet their needs and provide the products and services they desire. Satisfied customers turn into long-term clients and brand advocates.
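
Step 1, organizing data into meaningful customer groups, is the foundation the other steps build on. Purely as a toy sketch (the customer names, spend figures, and segment cutoffs are all made up), here is the kind of segmentation that precedes any predictive model:

```python
# Hypothetical customer records: (name, annual_spend)
customers = [("Ann", 1200), ("Bob", 300), ("Cho", 2500), ("Dee", 450)]

def segment(spend):
    # Arbitrary spend cutoffs chosen for illustration
    if spend >= 1000:
        return "premium"
    if spend >= 400:
        return "standard"
    return "budget"

# Group customers by segment so offers and channels can be
# matched to what each group needs and can afford
segments = {}
for name, spend in customers:
    segments.setdefault(segment(spend), []).append(name)
print(segments)
```

With segments like these in hand, a business can sharpen its value proposition for each group and choose the channels those customers actually use.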

Predictive analytics can also boost sales by helping businesses:

  • Identify their best revenue streams
  • Understand what activities they should continue and which ones to stop
  • Know which employees are right for each job
  • Know which vendors to partner with to satisfy their target market better
  • Budget more efficiently and stop wasting money on expenses they don’t need