Posted by: Greg Ness | December 12, 2007

Security 3.0 and the Perimeter Myth


Over the last few weeks I’ve been talking to analysts and security pros about virtualization, security and the evolution of netsec to virtsec. Last week I was in Los Angeles on a virtualization panel at the InformationWeek Virtualization Summit and then in NYC on a MISTI panel on virtsec.

As a result of several of those discussions, I’ve come to the conclusion that many organizations’ networks really don’t have a perimeter, at least in the classic defensive sense. The idea of a strategic point of defense that protects everything inside it has become a legacy myth, an anachronism from the early days of netsec and fame-seeking hackers.


The “perimeter myth” exists because perimeter security devices that sit between desktops and the Internet face insurmountable and still-growing technological and operational challenges. They simply cannot be architected to be powerful enough to accurately and efficiently inspect ALL traffic for ANY possible exploit or traffic anomaly without a substantial re-architecting of their core hardware and software. As networks become ever more connected to the Internet, these devices require ongoing hardware upgrades merely to maintain basic protection, much less to deal with ever more sophisticated attacks.

Many netsec solutions are still, at their core, exploit signature-matching appliances with add-ons for anomaly detection and some limited protocol/application intelligence. Their core architectures were built on tremendous exploit expertise but very limited software knowledge. They did an excellent job of protecting networks in simpler days; but the shift toward user convenience and Internet access for a wider range of applications has sent these older devices on some long, strange product-roadmap trips filled with potholes.

Being protocol- and application-ignorant in the world of netsec is the equivalent of trying to secure a bustling city without speaking any of its languages. You can inspect all traffic in a shallow, time-consuming way, but are you really making the most of every inspection? Are you gathering real suspects or droves of innocents? Are you slowing down traffic for no real reason?
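To make the “droves of innocents” point concrete, here is a minimal sketch (my own illustration, not any vendor’s engine) of why shallow signature matching over-alarms: a raw byte-pattern match fires on benign traffic that merely mentions the pattern, while even modest protocol fluency, knowing which field could actually be dangerous, clears the innocent bystander.

```python
# Illustrative only: a toy "signature" and two HTTP-ish messages.
SIGNATURE = b"cmd.exe"

def shallow_match(packet: bytes) -> bool:
    """Signature-only inspection: flag any packet containing the byte pattern."""
    return SIGNATURE in packet

def protocol_aware_match(packet: bytes) -> bool:
    """Toy protocol fluency: only flag the pattern in the request line,
    where it could actually be acted on, not in inert body text."""
    header, _, body = packet.partition(b"\r\n\r\n")
    request_line = header.split(b"\r\n")[0]
    return SIGNATURE in request_line

attack = b"GET /scripts/..%255c../winnt/system32/cmd.exe HTTP/1.0\r\nHost: x\r\n\r\n"
benign = b"POST /forum HTTP/1.0\r\nHost: x\r\n\r\nHow do I open cmd.exe on my own PC?"

# Shallow matching flags both packets; protocol awareness clears the innocent one.
assert shallow_match(attack) and shallow_match(benign)
assert protocol_aware_match(attack) and not protocol_aware_match(benign)
```

The real appliances of the day were of course far more sophisticated than this toy, but the structural trade-off is the same: without protocol context, every match is a suspect.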

Because of this “suspicion gridlock,” some savvy netsec vendors have shifted their value proposition from appliance revenue to the monetization of noise management, i.e. services revenue generated by false alarms and event management. A netsec vendor can sell an appliance every two to three years, then supplement hardware sales with recurring “service” revenue to manage the mess and keep service-oriented VARs happy. While that is a smart business move, it is ultimately a shell game for customers. It scatters the total cost of ownership across multiple buckets and shifts scrutiny away from whether the perimeter signature and tuning efforts are really working. It also doesn’t encourage vendors to solve the real problems: accuracy and latency.


The idea of a perimeter is more than a geometric concept when it comes to security; in military terms it is really the outer edge of a zone of control that enables adequate resupply, protection, coordinated communications and movement. Through WWI, warfare was often fought along relatively static front lines (perimeters) that shifted back and forth and reflected relative control of the field of battle.

From a security standpoint, if you think about the perimeter as a zone of control, then it shouldn’t be placed between endpoints (PCs) and the Internet. Endpoints, with their robust interactivity, really make up a zone of access or convenience, not a zone of control. Yet on a typical network diagram, endpoints have robust Internet access from BEHIND the perimeter, setting up the strategic head fake that has steered billions in security budgets away from any semblance of a real perimeter and driven an explosion in security-as-service revenue.


Military history is filled with decisive battles defined by mistaken perimeters, surprise flank attacks (where a front line was compromised) and faulty intelligence about an enemy’s location or capability to reach inside a perimeter and disrupt command, communication and control. Large armies can be routed by smaller foes through surprise flank attacks, as at the Battle of Chancellorsville.

For this reason, the perimeter as many security teams understand it is a myth, a vendor’s playground with unrealistic product requirements. As long as users have free access to the Internet, security pros will not be able to secure the perimeter to anything approaching a military standard. And data centers will remain more vulnerable than they need to be. The perimeter myth is a head-fake tax.


This troubling shift from zone of control to zone of convenience has also forced perimeter security strategists and vendors into a kind of arms race between ever more powerful hardware and ever more mobile and innovative invaders. We know who eventually wins these kinds of wars. In the meantime, enterprises are spending more on ongoing netsec hardware upgrades and on the operational support required to manage ever-increasing false alarms and to track, identify and block sophisticated (often mutating) attacks that are hard to identify without understanding application protocols.

The real problem with the perimeter myth, therefore, isn’t the operation and management of existing solutions (which historically served their purpose in protecting entire networks from teeny-bopper attacks); it’s the amount of time, strategy and resources that could have been directed at establishing a real perimeter at a point that is much easier to control. That would also lead to the development of real requirements that match up with critical, opportunistic and manageable points of specialized, application- and protocol-fluent enforcement.

For lack of a better term, I would like to call this inside-out security strategy serversec (for server and database security), or Security 3.0. The serversec idea is to secure what you can efficiently secure by leveraging the point of highest impact and enforcement and creating a REAL perimeter. Start with a trusted zone around servers and databases, which don’t proactively engage with unknown, untrusted sites or open downloads. As they say in the war movies: secure the perimeter.
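One reason a server-side perimeter is tractable where a desktop-side one is not: server traffic patterns are narrow enough to enumerate. A default-deny policy with an explicit allow-list becomes feasible. The sketch below is my own hypothetical illustration of that idea (the zone names and flows are invented), not a description of any shipping product.

```python
# A minimal sketch of a "real perimeter" around servers: default-deny,
# with an explicit allow-list of the few flows a server legitimately needs.
# Zone names, ports and flows here are hypothetical illustrations.

ALLOWED_FLOWS = {
    ("web-tier", "db-tier", 3306),   # web servers may query the database
    ("web-tier", "internet", 443),   # outbound HTTPS to known update hosts
}

def zone_permits(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly allow-listed."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

# Because servers never proactively browse untrusted sites, the allow-list
# stays short, and everything else is denied by default.
assert zone_permits("web-tier", "db-tier", 3306)
assert not zone_permits("db-tier", "internet", 80)  # denied by default
```

Try specifying an equivalent allow-list for a floor full of desktops with “robust Internet access” and the asymmetry of the two zones becomes obvious.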

I’m not advocating removing netsec appliances or letting all of your PCs get infected by viruses or botnets; I’m merely suggesting that the perimeter myth be exposed for what it is, and that security take a step back and look at defense more strategically and more opportunistically. Take full advantage of points of leverage so that there can be a real perimeter, not a point of waste, weakness and false alarms. Yes, I’m still talking layers of defense.

Serversec would require new thinking on the part of netsec vendors. In some cases existing appliances would be rendered obsolete for server protection because they lack the application and protocol awareness necessary to inspect and correct traffic without impacting server availability. Keep the protocol-ignorant gear away from the servers. Yet there’s another rub…


In Los Angeles I saw the Sun Constellation network diagram slide, which showed dozens of blade servers connected to a high-density switch in a hub-and-spoke pattern. There were no network appliances in the diagram besides the switch, and no cables running from blade to blade. The benefits: massive flexibility, power savings and an obvious reduction in required cabling. Cisco has a VFrame server fabric initiative, and Brocade has similarly announced its entry into the high-density virtualization switch business.

I think the switch and server vendors see it coming: virtualization is pulling the data center market in the direction of highly responsive processor fabrics.  Those fabrics pose significant enforcement and habitat challenges for netsec hardware solutions.   

It took me back to my Weird Scenes column from a few weeks back. In each server fabric scenario the network terminates on the server itself, and the established vantage points of traditional netsec appliances disappear, just at the time they need new functionality and new intelligence to move inside, close to the servers.

From a technology vendor marketing standpoint that kind of challenge is fraught with risks and tradeoffs. New investment in new technology is required because a disruptive technology is turning the security world inside out while, at the same time, the old habitats start disappearing. Yet re-architecture is painful, time-consuming and risky.


In the short term, the netsec hardware vendors MUST announce a virtsec product in 2008. Being late to the party will cost them substantial vision and revenue-growth points. As I’ve commented before, these 2008 virtsec announcements will likely be vaporware, given the substantial difficulty of moving from signature-processing (usually ASIC-based) architectures to massive hypervisor footprints. Maybe these products will be broken into multiple parts to lessen the load on individual servers and avoid massive processing burdens. Maybe they’ll find a creative way to exploit the hypervisor layer from afar? Either way, they are at a computational disadvantage until they understand the nature and weaknesses of the applications they are defending.


Armed forces around the world have learned the importance of language skills. For netsec to get to the next level and establish a real perimeter close to servers and databases, vendors will have to become fluent in data center protocols and applications. They’ll need to extricate threats while allowing legitimate traffic to flow unimpeded, which is often called inline correction. The days of brute-force signature matching and suspicious noise are coming to an end.
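The inline-correction idea can be sketched in a few lines. This is my own toy illustration of the concept under an invented protocol (the field name and length cap are assumptions, not any real spec): rather than dropping the flow or raising an alarm, a protocol-fluent device rewrites just the unsafe part and lets the session continue.

```python
# Hedged sketch of "inline correction" on a toy protocol: scrub the
# non-conforming part of a message instead of blocking the connection.

MAX_FIELD_LEN = 64  # pretend the protocol spec caps this field's length

def correct_inline(field: bytes) -> bytes:
    """Truncate an over-long field to its spec-legal length (over-long
    fields being a classic buffer-overflow vector) while passing
    conforming traffic through untouched."""
    return field[:MAX_FIELD_LEN] if len(field) > MAX_FIELD_LEN else field

normal = b"user=alice"
overflow = b"user=" + b"A" * 500  # oversized field, typical overflow attempt

assert correct_inline(normal) == normal                 # zero disruption
assert len(correct_inline(overflow)) == MAX_FIELD_LEN   # threat extricated inline
```

The point of the sketch is the asymmetry with signature matching: correction requires knowing what the protocol *should* look like, not just what known exploits look like, which is exactly the fluency argued for above.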

They’ll also need to continue steering enterprises away from gear shortcomings and toward noise-mitigation investments. After all, a fabric of blade processors introduces substantial traffic-pulse demands; plenty of security events can be generated by servers moving and morphing across IP addresses, going offline and online, and so on. In a world where mouse clicks can become security events, you can expect netsec noise pollution to reach dramatic new heights. And those who are the last to get it will pay dearly for relief.

Anyway, many thanks to those I had a chance to speak with last week in New York and Los Angeles, and to the analyst who suggested Security 3.0 as the next big thing for netsec. You can also catch my recent interview or my ongoing blog.

Disclosure: I’m the VP Marketing for Blue Lane Technologies, a winner of the 2007 InfoWorld Technology of the Year for security, Best of Interop 2007 in security and the AO 100 Top Private Company award for 2006 and 2007. Blue Lane is also a 2007 Best of VMworld Finalist in data protection. I’ve been a marketing executive at Juniper Networks, Redline Networks, IntruVert Networks and ShoreTel. I’ve been an Always On blogger/columnist since 2004. My recently launched personal blog is: .  These are all my opinions, and do not represent the opinions of employers, spouses, kids, etc.

