From the Vault – The Computing of Business, October 2014
Did you know that in the wide world of open source software there is an application for analyzing seismic data? If I had only known that a few weeks ago, I could have thrown a portable seismograph into my carry-on for a recent trip to California.
Since I was rudely awakened during my stay by what FEMA now calls a major earthquake, I could easily have done a quick data reduction and submitted the results to the open source Global Earthquake Model. But alas, during the event I was thinking more about what might happen to the roof than about contributing to science. At 40 miles from the epicenter it was certainly a unique experience, and the building held together nicely—check that one off the bucket list.
So open source is everywhere. Good developers are developing good, creative software and distributing it for all to use under a number of different licenses.
During a recent university classroom experience, Phil McCullough, a fellow COMMON Director, noted that the entire IT curriculum centered on open source. Open source operating systems, open source databases, open source development tools, open source applications: the entire gamut. The message was loud and clear: open source will be a big part of our computing future.
But software, by its very nature, contains defects. In many cases those defects are not what we traditionally consider a problem. For example, as the programmable point-of-sale industry was developing a couple of decades ago, I cannot imagine that anyone considered leaving a credit card number unencrypted in RAM to be a problem. It turns out it was.
Recently my employer sold some software we had developed to a very large company. Parts of that software contain open source components, such as frameworks and other building blocks that helped us develop a very complex set of code. Before the deal could close, the buyer required substantial documentation of every open source component and its version. Additionally, the entire base of object code was scanned for any known vulnerabilities. All went well and the deal was completed.
Open source operating systems like Linux, and large open source applications like SugarCRM, have the benefit of substantial support organizations. Dedicated people watching for problems are an advantage many smaller projects do not, and cannot, have. The likelihood is very real that the bad guys—and there are an awful lot of them—will stumble onto something they can exploit in almost any open source project. If you run some or all of a smaller, targeted code base, it then becomes simply a matter of a bad guy (they are voracious sharers of exploit information) finding your system and deploying one of the myriad ways of injecting malware onto it.
Of course there needs to be something worthwhile to steal. Credit card numbers are the currency du jour, though certain kinds of pictures bring a bigger bang. Regardless, any information about your company or your customers has value to someone.
A number of scanning solutions, such as OpenLogic and OpenVAS, can attempt to locate problematic software. But using such tools seems to me a bit reactive: the veritable cat may have already left its bag.
A more proactive approach is to know where in your software, and on your systems, the open source stuff (and anything else with known vulnerabilities) lives. Trust me, the stuff is there. Complete, accurate application inventories are more important than ever. An inventory, coupled with appropriate monitoring of the threat landscape, will keep you ahead of the bad guys. Yes, this is one more thing to do with your constantly shrinking resource base. But staying ahead of the bad guys sure beats the alternative.
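As a minimal sketch of what pairing an inventory with threat monitoring might look like, consider the following. The application names, component names, versions, and "known vulnerable" entries here are all hypothetical examples invented for illustration, not real products or real advisories:

```python
# Hypothetical inventory: each deployed application and the open source
# components (with versions) it bundles.
inventory = {
    "order-entry": {"webframework": "2.3.1", "imagelib": "1.0.4"},
    "billing":     {"webframework": "1.9.0", "pdftool": "0.7.2"},
}

# Hypothetical threat feed: (component, version) pairs with published
# vulnerabilities, gathered from whatever advisory sources you monitor.
known_vulnerable = {
    ("webframework", "1.9.0"),
    ("imagelib", "0.9.9"),
}

def find_exposures(inventory, known_vulnerable):
    """Return (application, component, version) tuples that match the threat feed."""
    exposures = []
    for app, components in sorted(inventory.items()):
        for component, version in sorted(components.items()):
            if (component, version) in known_vulnerable:
                exposures.append((app, component, version))
    return exposures

for app, component, version in find_exposures(inventory, known_vulnerable):
    print(f"{app}: {component} {version} has a known vulnerability")
```

The point is not the dozen lines of code; it is that the check is only possible if the inventory exists and is accurate. Without it, the same advisory lands on your desk with no way to know whether it applies to you.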
For many the 2014 South Napa earthquake was a disaster. For me it was an interesting experience.
For some the open source landscape will be a disaster. Reading about those problems should be the only experience you want.
About the Author: Randy Dufault, CCBCP
Randy is the Director of Solution Development for Genus Technologies, a Midwestern consultancy dealing primarily with enterprise content management systems. His experience with content management dates back 25 years, to when he helped develop what ultimately became IBM's Content Manager for iSeries. He has also developed and integrated a number of advanced technologies, including document creation, character recognition, records management, and workflow management. Randy is a member of the COMMON North America Board of Directors and was active in the development of COMMON's Certification program.
Read Randy’s Computing of Business column in COMMON.CONNECT.