Network security has been around almost as long as networks themselves, and it is easy to trace the various elements of network security back to the networking risks they were designed to mitigate. Over the past 30 to 35 years, the expansion of networking, especially the increased reliance on the Internet both as an avenue for commerce and as the corporate backbone, has created an entire industry that did not exist before. Network security as we know it today is largely built on the assumption that an organization is connecting an existing, closed network to the public Internet for commerce, data sharing, and communications while maintaining a secure posture and protecting its internal networks, data, and communications.
The overall goal of network security has been to protect the safe, trusted internal network from dangerous and unknown external actors. This approach has always been problematic, since the bad guys do not always operate from the outside; nevertheless, much of this industry and the security “solutions” currently on the market can be traced back to this overarching goal. Today, however, the traditional model of network security is being challenged as new technologies become commonplace. These technologies connect the traditional network to other networks (as well as the Internet) using different communication methods such as wireless, cellular/mobile, Bluetooth, near field communication (NFC), and even satellite links. Let’s take a brief look at the evolution of networking and network security, consider the impact of the changing networking landscape, discuss what needs to change in an organization’s security strategy, and look at some tools that can help the organization accomplish its goals.
A Brief History of Networking
The first computers, used by the government, research universities, and eventually commercial businesses, were called mainframes. They were designed to perform mathematical calculations and other data processing much faster than humans could. The first mainframes were extremely large, consisting of banks of vacuum tubes that consumed a great deal of power and generated a lot of heat, which made them costly to operate and maintain. The ability to interact with these computers was generally limited to a set of operators who had access to terminals or data entry points physically connected to the systems. Eventually, more terminals were added so that more users could take advantage of the computing capabilities, and these terminals started to be “remotely” located in other parts of the buildings that housed the mainframes.
Technological advances came rapidly in both storage capacity and processing speed, led by the invention of the transistor and the integrated circuit and, later, the microprocessor. These advances enabled computers to become smaller while greatly increasing their storage and processing capabilities. When personal computers became available, companies started to purchase them for their employees. But these machines were not connected in any way, so data sharing and communication among users were limited or non-existent. There was strong demand to link personal computers together to enable fast and easy data sharing and communication among users.
The basic function of a network is to enable access to and sharing of data quickly and with as little user involvement as possible. The ability to do all this stems largely from connecting the component parts together with wires and cables, then connecting workstations to distributed servers and fixed mainframes using networking devices and communications software. The acceptance of the network for conducting internal business led to an ever-increasing demand for faster, seamless, and transparent communications for all users. These initial networks were primarily wired networks.
They connected all workstations and servers together using devices such as hubs, routers, and switches. These systems were generally all in one location, and the network they were attached to became known as the local area network (LAN). Companies with multiple locations would contract with telecommunications companies to connect their sites using private line circuits, creating a wide area network (WAN). Then the Internet came along, and companies, beyond trying to come up with an eCommerce strategy, figured out that they could communicate more cost-effectively and efficiently over a single, publicly available connection to an Internet Service Provider (ISP) than over multiple private line circuits, so a migration to Internet-based networking began.
Basic Network Security
It didn’t take long to figure out that when companies started to connect their internal, trusted networks to the external, untrusted Internet, there was a need for some basic security protections. Network architectures were developed that created layers of protection between the innermost, most sensitive parts of the corporate network – typically where the mainframes or databases resided – and the untrusted, external Internet. Networking devices were modified to control the flow of traffic into the corporate network, leading to the invention of the ‘firewall’.
The network firewall created a barrier between networks and acted as a “traffic cop,” controlling communications between the different networks based on a specified rule list. Early firewalls operated on one of two principles: “explicit allow,” where only the specific communications listed in the rules are permitted through the firewall and everything else is blocked, or “explicit deny,” where every form of communication is allowed except for those communications explicitly listed as forbidden.
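To make the distinction concrete, here is a minimal sketch of how the same traffic can be treated under the two policies. The addresses, ports, and rule lists are hypothetical illustrations only, not taken from the article or any real firewall product.

```python
# A minimal sketch (hypothetical, not from the article) contrasting the two
# early firewall policies described above. Rules are keyed on
# (destination IP, destination port).

ALLOW_LIST = {("203.0.113.10", 443),  # inbound HTTPS to the public web server
              ("203.0.113.25", 25)}   # inbound SMTP to the mail relay

DENY_LIST = {("10.0.0.5", 23)}        # block telnet to a legacy internal host

def explicit_allow(dst_ip: str, dst_port: int) -> bool:
    """Default deny: traffic passes only if a rule explicitly permits it."""
    return (dst_ip, dst_port) in ALLOW_LIST

def explicit_deny(dst_ip: str, dst_port: int) -> bool:
    """Default allow: traffic passes unless a rule explicitly forbids it."""
    return (dst_ip, dst_port) not in DENY_LIST

if __name__ == "__main__":
    # The same packet can receive opposite treatment under the two policies.
    print(explicit_allow("10.0.0.9", 8080))  # False: not on the allow list
    print(explicit_deny("10.0.0.9", 8080))   # True:  not on the deny list
    print(explicit_deny("10.0.0.5", 23))     # False: explicitly forbidden
```

The “explicit allow” (default deny) posture is the more conservative of the two, which is why it became the prevailing practice for Internet-facing firewalls.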
It is interesting to note that the basic architecture for network security is based on old military strategies of creating layers of protection around the most valuable resources. The model resembles a castle or fortress, with the most sensitive internal structures shielded by various layers of perimeter security such as walls, ramparts, bulwarks, parapets, trenches, and moats. You can see in Figure 3 that the primary purpose of these layers of defense is to keep the internal “systems” protected from all external forces.
Where this model breaks down, and what has been the greatest challenge of network and Internet security strategies, is the demand for communications and data sharing into and out of the secure internal structures by trusted users and customers while keeping the bad guys and attackers out. You can imagine that attacks against the fortress model have always been directed at the lowest walls and at the doors, windows, and gates. There is a reason that some forms of malware are referred to as “Trojans” – named for the ancient Trojan Horse that was used to penetrate the fortress city of Troy by disguising itself as a legitimate and harmless gift.
To allow communications into and out of the internal network while connected to the Internet, a three-tiered architecture was developed, based on the classic layered approach found in fortresses.
In a three-tier architecture, an untrusted source (on the Internet) establishes communications with an initial system (a web server) that can talk to the external source but can also communicate with a somewhat more trusted internal system (the application server), which accepts communications only from the web server and not from any other untrusted, external source.
These intermediary systems were generally isolated on a separate network segment that came to be known as the DMZ (de-militarized zone), by analogy to actual DMZs in the real world (separating East and West Germany, or North and South Korea). The intermediary application servers, in turn, were the only systems permitted to talk to the internal data repository, where all the sensitive and critical information is “securely” stored so that only authorized, legitimate access from the application server is allowed.
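As a rough illustration only (the zone names, ports, and flows below are hypothetical assumptions, not taken from the article), the segmentation behind this three-tier design can be thought of as a small default-deny policy between zones, where each tier may only initiate connections to the next tier inward and nothing may skip a tier.

```python
# A sketch of three-tier / DMZ segmentation rules: only explicitly listed
# zone-to-zone flows are permitted; everything else is denied.

ALLOWED_FLOWS = {
    ("internet", "dmz"): {443},    # outside users reach the web server over HTTPS
    ("dmz", "app"):      {8443},   # the web server calls the application tier
    ("app", "database"): {5432},   # the application tier queries the data store
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default deny between zones; only explicitly listed flows pass."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

if __name__ == "__main__":
    print(flow_permitted("internet", "dmz", 443))        # True:  public web traffic
    print(flow_permitted("internet", "database", 5432))  # False: no direct path to the data
    print(flow_permitted("dmz", "database", 5432))       # False: the web tier cannot skip the app tier
```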
Unfortunately, the bad guys have always seemed to find ways to defeat these perimeter defenses through various and evolving methods. Advances in technology and the proliferation of applications and communication channels have only compounded the problem. It seems that with every advance in technology or communications, the bad guys discover new ways to exploit it and compromise all types of corporate networks. Today’s attackers have also figured out how to monetize their efforts, which gives them even more incentive to find new exploits, compromise companies’ networks, and steal valuable data.
The evolution of the Internet gave rise to a host of network security challenges and corresponding strategies to address them. In my next post, I will discuss how technological innovation is forcing organizations to adopt a novel approach.
About the Author:
Jeff Man is a respected Information Security expert, adviser, and evangelist. He has over 33 years of experience working in all aspects of computer, network, and information security, including risk management, vulnerability analysis, compliance assessment, forensic analysis and penetration testing. He has held security research, management and product development roles with NSA, the DoD and private-sector enterprises and was part of the first penetration testing "red team" at NSA. For the past twenty years, he has been a pen tester, security architect, consultant, QSA, and PCI SME, providing consulting and advisory services to many of the nation's best known brands.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.