Posted by: Greg Ness | January 16, 2008

A Perspective On Oracle’s Security Paradox

Years ago databases commonly operated within brick walls and were accessed by small teams of people wearing smocks.  In high school I used to deliver database updates from a bank data center to its branches scattered across the county. The bank “WAN” in those days was a beat-up pickup truck driven by a high school kid blasting Steppenwolf’s “Magic Carpet Ride” between branches while swapping out computer printouts in vinyl pouches.  

In the formative years of database software development, network security wasn’t on the radar.  Security was a combination of background checks, ID badges and entryway scanners.  When the internet came along, everything changed, especially when it came to application delivery and security.  Security against the abstract possibility of a remote threat was not designed in when I was swapping pouches for the bank, because no one cared.

As messengers in pickup trucks were replaced by electrons traveling between wider communities of users, database software was increasingly exposed to new and unanticipated levels of access. Databases were already architected and installed in thousands of data centers when steady streams of vulnerabilities became publicly known.

Database vendors responded responsibly by issuing code changes (patches) in an effort to “retrofit” the vast, installed base of databases for new security and delivery realities.  Yet these code changes were problematic because of how the databases were already deployed and because of their critical role in 24/7 business operations.

I used “retrofit” because in California we know what’s involved when bridges and homes are retrofitted for earthquakes as we learn more about faults and construction methods and enhance building codes.  I think it’s an apt metaphor.  New knowledge and new realities cause us to rethink the design of critical infrastructure.

Like retrofits, security patch updates (designed to fix software vulnerabilities in servers) aren’t easy to install, and they bring with them significant system availability risk and operational expense.  Databases are also frequently interconnected with other systems that can likewise be adversely impacted by database software code changes.  Because these systems usually reside on core servers, when they go down business suffers.  The retrofit metaphor is still apt.

Those responsible for databases (database administrators, or DBAs) are usually not security experts, nor do they consider themselves responsible for security.  Within the enterprise, database and security expertise are spread across a fuzzy blend of teams with varying levels of budget, knowledge and responsibility.  That’s not an environment conducive to decisive, proactive problem solving.

To complicate decisions further, vendors can deliver patches that address more than security issues.  As a result, committees emerge inside IT teams to decide what needs to be done now and what needs to be done someday.

That kind of dynamic unfortunately often drives security decisions.  Until the roof leaks, it works.  Over time, and across vast theaters of change, enterprise security departments have been conditioned to respond to problems AFTER they occur.  So across thousands of enterprises, security vulnerabilities are tolerated until there is a successful attack.

AS THE FROG BOILS

In the meantime, pressures and risks continue to build on security teams as they watch the webification of the enterprise, with budgets driven by perceived risk that is based on a history becoming less relevant every day.  As access increases and databases grow more vulnerable, vendors continue to announce vulnerabilities to the world as they’re discovered.

Add to this mix of history and risk and responsibility the ambivalent security business case for software vendors. Most are encouraged (by the market and the profit incentive) to migrate users to newer versions of software versus investing substantial efforts in “long-lifing” older versions.  After all, maintenance and update costs for older versions of software can be an incentive to buy newer (and more expensive) versions.   

That also helps the vendors recoup the cost of vulnerability research that wasn’t initially part of the business model, but was demanded by customers adopting the enterprise web.  After all, these demands didn’t exist when the initial PO was placed.  

The result of this confluence of factors: a software security patch paradox that perpetuates growing security and operational challenges for existing databases (and other mission-critical servers):

 “JANUARY 14, 2008 | Oracle (NSDQ: ORCL) on Tuesday is scheduled to issue 21 patches for its database, applications, and related products, a move that reflects a four-year old patching process. But a software executive who’s been visiting Oracle user groups says only a third of Oracle database administrators adopt the patches.”  – Charles Babcock, InformationWeek, January 14, 2008

While this recent InformationWeek article (like this post’s intro) is about Oracle vulnerabilities and the security patch problem, the security paradox is endemic to all database and software vendors.  We weren’t at all surprised by the news.  We’ve heard it firsthand from customers over and over again, across vertical industries, operating systems and applications.

The most vulnerable victim of this paradox has been data center and server security, because system availability impacts all users immediately (unlike desktop PCs). It is easier for operations teams to fall behind on higher-risk security patch updates than to manage the risk of core systems crashing for all users of a database or application.

Pain and visibility attract more enterprise resources than the perception of risk among committee members with mixed agendas.  That is the reality of why security patch updates aren’t kept current for many kinds of server-based software.

These “security paradox gaps” are also relevant for embedded systems in manufacturing and health care, as well as for legacy software deployments (like older Windows versions) running underneath mission-critical (and sometimes life-critical) applications created by third parties.

The third-party software paradox has an extra dimension of complexity because vulnerabilities can exist in the operating system, the application or even the integration between them.  A post-patch server reboot, for example, can kill a server and bring down a critical appliance or application while it is supporting thousands of users, customers, partners and/or suppliers.

THE RESULT: ENTREPRENEURS OF ALL STRIPES

The security paradox has fueled a thriving, entrepreneurial cybercrime community and new categories of network and application security solutions specifically designed to mitigate the risk and expense of the gap between vulnerability announcement and server patching.

Yet the status quo is still stuck in a reactive mode, bracing for the next attack.   

I think netsec pros and vendors need to think differently about data center security.  The times have changed since the pimply-faced hacker (perhaps also listening to Steppenwolf) started playing the desktop fame game. 

As I’ve discussed here previously, I’m concerned about the increased sophistication of mutating attacks and the growing interest of cybercriminals in vulnerable data centers BEHIND the increasingly porous perimeter.  We’ve seen a rise in mutating attacks that are especially difficult for today’s perimeter appliances to detect.  From a web services standpoint we’re also seeing the rise of cross-site scripting and SQL injection attacks that are evading traditional security solutions.
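To make the injection mechanics concrete, here is a minimal, hypothetical Python sketch (the table, data and queries are invented for illustration) showing why a string-built query is attackable at the application layer while a parameterized query treats the same input as inert data:

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL text, so the
# attacker's quote characters rewrite the query and it returns every row.
vulnerable = f"SELECT name, role FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())   # [('alice', 'admin'), ('bob', 'user')]

# Safer: a parameterized query keeps the input as data, not SQL,
# so the same payload matches nothing.
safe = "SELECT name, role FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())   # []
```

Note that nothing in the vulnerable query looks malformed to a device that only inspects packets; the attack is perfectly legal SQL, which is part of why it sails past traditional defenses.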

A security status quo that has worked for years is now crumbling under cybercriminal pressures, according to one widely quoted expert last year at RSA. We need to rethink the perimeter and how we approach security.  We need to look at server defense as being both a strategic opportunity and a critical imperative. 

Servers don’t wander the web clicking and downloading code from unknown sources.  That’s an advantage when it comes to security.  They could be a real perimeter, or area of enforcement and control.   Servers also contain most of the core information assets of the enterprise.  They are the eventual targets, even though desktops are more frequently compromised. 

Desktops are easier to patch because they have less devastating service consequences and their operating systems are more homogeneous. Instead of dedicated security appliances for every application, operating system or type of database, or devices that scan traffic for millions of suspicious activities, we need to drive toward protocol-fluent technologies that can protect wide arrays of servers, operating systems, applications, databases and even virtualized environments.

The traditional perimeter may not be the best point of defense, even though superficially it seems the perfect place to start.  It’s too busy and cannot control enterprise user actions, a substantial weakness.  As I’ve blogged, it’s a zone of convenience.

Some general-purpose netsec appliances (which also protect desktops, network gear, etc.) have made notable strides in protocol-fluent protection for some operating systems; yet others are still, at their core, signature/exploit pattern recognition and traffic-blocking devices that suffer ongoing accuracy and availability problems when their functionality is (fully) turned on.
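As a hypothetical illustration of that difference (the signature and normalization rules below are invented, not any vendor’s actual logic), here is a small Python sketch of how a trivially mutated payload slips past a literal signature match but is caught once the traffic is normalized the way the target protocol’s parser would read it:

```python
import re

# Naive signature device: scans raw traffic for a literal pattern.
SIGNATURE = re.compile(r"UNION\s+SELECT", re.IGNORECASE)

def signature_match(payload: str) -> bool:
    return bool(SIGNATURE.search(payload))

def normalize_sql(payload: str) -> str:
    # Protocol-fluent step: strip inline comments and collapse
    # whitespace, roughly as the database parser itself would.
    no_comments = re.sub(r"/\*.*?\*/", " ", payload)
    return re.sub(r"\s+", " ", no_comments)

# A mutated injection attempt: a comment splits the keywords.
mutated = "1' UNION/**/SELECT password FROM users--"

print(signature_match(mutated))                  # False: the mutation evades the signature
print(signature_match(normalize_sql(mutated)))   # True: normalization restores the attack shape
```

The point is not these particular regexes, which a determined attacker would also evade, but the architectural one: inspection that understands the protocol keeps working as attacks mutate, while literal pattern matching decays with every new variant.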

I think that is one of the key reasons today why the netsec business is so services-focused.  Widely deployed solutions are simply not accurate enough to protect servers and databases while maintaining availability. 

The netsec community must address the paradox today.  Software vendors and their customers need a workable solution, and neither is in a position to solve the security paradox single-handedly.  We need to start thinking about security from the inside out versus the outside in.

Until then, reports like this recent InformationWeek article will continue to be disturbing reminders of a dying status quo, and of what could have been possible if only we had planned and acted properly.

Disclosure: I am VP Marketing at Blue Lane Technologies.

