Chair: Karl Levitt, University of California, Davis, USA
Josh Haines, MIT Lincoln Laboratory, USA
Phil Porras, SRI International, USA
Jeff Rowe, University of California, Davis, USA
Stuart Staniford, Silicon Defense, USA
Johannes Ullrich, The SANS Institute, USA
Intrusion detection is a technique employed to catch and report attacks as they occur. It is needed given that vulnerabilities in operating systems, network protocols, applications, and configurations leave systems open to attacks.
Intrusion detection systems rely on data sources, which can be drawn from operating system audit logs, “sniffed” network packets, and logs from components such as routers and applications. The principal approaches to intrusion detection are misuse detection, in which the intrusion detection system detects activity that matches known attack signatures, and anomaly detection, in which the intrusion detection system identifies activity that is inconsistent with what is expected. The expectation can be derived from previously observed activity or can be captured in a specification, giving rise to the concept of specification-based intrusion detection.
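To make the contrast concrete, the following Python sketch pairs a toy signature matcher with a simple statistical anomaly detector; the event fields, signature strings, and threshold are hypothetical illustrations, not the design of any particular system.

# Illustrative sketch (assumptions, not any product's design): a toy contrast
# between misuse detection (matching known attack signatures) and anomaly
# detection (deviation from a baseline of previously observed activity).
import statistics
from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    payload: str
    bytes_per_sec: float

# Misuse detection: flag events whose payload matches a known attack signature.
SIGNATURES = ["/bin/sh", "cmd.exe", "%252e%252e"]   # hypothetical patterns

def misuse_detect(event: Event) -> bool:
    return any(sig in event.payload for sig in SIGNATURES)

# Anomaly detection: flag events that deviate strongly from a simple
# mean/standard-deviation model of previously observed traffic rates.
class AnomalyDetector:
    def __init__(self, training_rates, threshold=3.0):
        self.mean = statistics.mean(training_rates)
        self.std = statistics.pstdev(training_rates) or 1.0
        self.threshold = threshold

    def is_anomalous(self, event: Event) -> bool:
        z = abs(event.bytes_per_sec - self.mean) / self.std
        return z > self.threshold

baseline = AnomalyDetector([10.0, 12.0, 9.5, 11.0])
evt = Event("203.0.113.7", "GET /index.html", bytes_per_sec=250.0)
print(misuse_detect(evt), baseline.is_anomalous(evt))   # False True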
Research on intrusion detection started in the early 1980s, has continued through several major DARPA (and other government) programs, and has led to a number of products and numerous deployments. Intrusion detection is particularly appealing as an approach to enhancing the security of existing systems, because it is faster to configure an intrusion detection system than to issue and install a patch when a new vulnerability is identified. Furthermore, anomaly-based detection is perhaps the only approach capable of detecting unknown attacks.
This panel discusses the evaluation of current intrusion detection systems and suggests possible future directions, addressing such issues as:
What current IDSs can and cannot do
Johannes Ullrich, The SANS Institute
Modern intrusion detection systems are able to detect attacks early and in some cases can be used to disrupt attacks in progress. False positives are a major concern that delays wider deployment of reactive intrusion detection systems. While modern IDSs have become better at recognizing and defeating popular IDS evasion techniques, the fundamental problem of tuning an intrusion detection system for optimum performance still requires a skilled analyst and a thorough understanding of the protected assets. An IDS cannot autonomously ascertain the impact of a packet stream on the network. However, some attempts have been made to combine IDSs with vulnerability scanners so that the IDS becomes aware of the network it protects.
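The scanner-integration idea can be sketched roughly as follows; the alert and scan-result formats are hypothetical, and the point is only that knowledge of which hosts actually expose a flaw lets the IDS de-prioritize alerts against hosts that are not vulnerable.

# Hedged sketch of combining IDS alerts with vulnerability-scan results.
# The data formats below are assumptions for illustration only.

# Map of host -> set of vulnerability IDs reported by a scanner (assumed data).
scan_results = {
    "10.0.0.5": {"CVE-2001-0500"},   # vulnerable IIS host
    "10.0.0.9": set(),               # patched host
}

alerts = [
    {"target": "10.0.0.5", "exploits": "CVE-2001-0500", "sig": "IIS .ida overflow"},
    {"target": "10.0.0.9", "exploits": "CVE-2001-0500", "sig": "IIS .ida overflow"},
]

def prioritize(alert):
    vulns = scan_results.get(alert["target"], set())
    # High priority only if the targeted host is known to expose the flaw.
    return "high" if alert["exploits"] in vulns else "low"

for a in alerts:
    print(a["sig"], a["target"], prioritize(a))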
Another thing an IDS cannot do by itself is look beyond the traffic stream it monitors. Correlating multiple sensors and integrating them into a layered network defense requires the skilled use of other tools.
Evaluation of Intrusion Detection Systems
Joshua Haines, MIT Lincoln Laboratory
IDS performance evaluation is central to IDS research and development. Several recent efforts at Lincoln Laboratory have evaluated intrusion detection system performance using a variety of techniques. All have pointed out areas where further research is necessary and have advanced the field in many ways; however, none has provided IDS developers with the data and tools necessary to create truly “next-generation” intrusion detection algorithms and tools. For example, we provided testbed-generated data that could be authoritatively labeled and that has been shared among researchers to start many research programs. One study used real-world data with selected attacks intermixed, so that the background data was very realistic, but the lack of labeling limited false-alarm measurement.
Better datasets are necessary to further research and to enable better calculation of metrics in future evaluations. Datasets need to contain many more examples of both attack and background traffic than have previously been available. They need to be gathered collaboratively by a wide variety of researchers and stored centrally, so that they represent a wide variety of network and system configurations and can be updated periodically without undue effort by any one entity. Datasets will also need to take on new forms, such as specifications and tools for creating attack and background traffic in one's own environment, so that IDS developers (and their systems) can explore new and different inputs. Finally, metrics for IDS performance are a research topic in their own right; they will need to be expanded to measure and compare how much an IDS improves the security of a given network configuration, rather than simply tallying attack and false-alarm rates.
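As a minimal illustration of the metric tallying mentioned above, the following sketch computes detection and false-alarm rates from a labeled event set; the labeling scheme and the rule for matching alerts to events are simplified assumptions.

# Toy evaluation sketch: tally detection and false-alarm rates from labels.
def evaluate(events, alerts):
    """events: list of (event_id, is_attack); alerts: set of event_ids flagged by the IDS."""
    attacks = {eid for eid, is_attack in events if is_attack}
    benign = {eid for eid, is_attack in events if not is_attack}
    detected = attacks & alerts
    false_alarms = benign & alerts
    detection_rate = len(detected) / len(attacks) if attacks else 0.0
    false_alarm_rate = len(false_alarms) / len(benign) if benign else 0.0
    return detection_rate, false_alarm_rate

events = [(1, True), (2, False), (3, True), (4, False), (5, False)]
alerts = {1, 4}   # the IDS flagged event 1 (an attack) and event 4 (benign)
print(evaluate(events, alerts))   # (0.5, 0.333...)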
Intrusion Report Correlation
Phillip A. Porras, SRI International
Among the most visible areas of active research in the IDS community is the development of technologies to manage and interpret security-relevant alert streams produced by an ever-increasing number of INFOSEC devices. Over recent years, the growing number of security enforcement services, access logs, intrusion detection systems, authentication servers, vulnerability scanners, and various operating system and application logs have given administrators a potential wealth of information for gaining insight into security-relevant activities occurring within their systems. The motivation for INFOSEC alarm correlation is straightforward: as we continue to incorporate and distribute advanced security services into our networks, we need the ability to understand the various forms of hostile and fault-related activity that our security services observe as they help to preserve the operational requirements of our systems. Today, in the absence of significant fieldable technology for security-incident correlation, there remain several challenges in providing effective security management for mission-critical network environments.
Among the challenges of INFOSEC alert correlation research is bringing into practice analytical techniques that address alert inundation, support true-positive isolation and alert prioritization, identify known sequences of alerts that pertain to a complex attack scenario, and discover meaningful trends or commonalities across alert streams that are not discernible through the isolated inspection of individual INFOSEC alert logs.
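One such technique, aggregating alerts that share attributes across sources, can be sketched roughly as follows; the alert schema and ranking rule are illustrative assumptions, not those of any fielded correlation engine.

# Sketch of one correlation step: group alerts from multiple INFOSEC sources
# that share a source/target pair, so the analyst sees one consolidated
# incident rather than many raw alerts. The alert format is hypothetical.
from collections import defaultdict

raw_alerts = [
    {"sensor": "nids-1",  "src": "198.51.100.2", "dst": "10.0.0.5", "sig": "portscan"},
    {"sensor": "nids-2",  "src": "198.51.100.2", "dst": "10.0.0.5", "sig": "IIS .ida overflow"},
    {"sensor": "hostlog", "src": "198.51.100.2", "dst": "10.0.0.5", "sig": "new admin account"},
    {"sensor": "nids-1",  "src": "203.0.113.9",  "dst": "10.0.0.7", "sig": "portscan"},
]

incidents = defaultdict(list)
for alert in raw_alerts:
    incidents[(alert["src"], alert["dst"])].append(alert)

# Rank incidents by how many distinct signatures contributed; multi-stage
# activity observed by several sources is prioritized for the analyst.
for key, group in sorted(incidents.items(),
                         key=lambda kv: -len({a["sig"] for a in kv[1]})):
    print(key, len(group), "alerts,", len({a["sensor"] for a in group}), "sensors")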
New techniques in detection and response
Jeff Rowe, UC Davis
Typically, intrusion detection techniques use signatures of known malicious behavior, or classify statistically anomalous behavior as malicious. The signature technique is limited when faced with new, unknown attacks. Statistical anomaly-based methods can handle new, unknown attacks but also falsely classify new non-malicious behavior as attacks. Newer specification-based approaches, based on the correct security behavior of a system, detect previously unseen attack instances without misclassifying new non-malicious behavior. With this approach, formal methods might even be usefully applied to a real-time IDS to verify that desirable security properties are enforced.
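A minimal sketch of the specification-based idea follows, assuming a hypothetical policy that states which files a privileged program may write; operations outside the specification are flagged without any training data, while new but permitted behavior is not misclassified.

# Toy specification-based check: the policy below is an assumed example,
# not a real system specification.
SPEC = {
    # program -> file prefixes it may legitimately write (assumed policy)
    "ftpd": ("/var/ftp/", "/var/log/xferlog"),
}

def check(program: str, path: str) -> str:
    allowed = SPEC.get(program, ())
    return "ok" if path.startswith(allowed) else "VIOLATION"

print(check("ftpd", "/var/ftp/pub/upload.txt"))   # ok: new but permitted behavior
print(check("ftpd", "/etc/passwd"))               # VIOLATION: outside the spec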
The worm threat
Stuart Staniford, Silicon Defense
The nature of the worm threat (as distinct from viruses) will be discussed, and various worm-spread strategies will be illustrated with examples from worm incidents in the last year or two. Scanning worms such as Code Red will be addressed, as well as the possibility of flash worms (very rapid worms with entirely prescripted spread maps) and topological worms, which rely on information found on infected machines to locate others.
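For illustration only, the following toy simulation of a random-scanning worm (with an arbitrary address space, vulnerable population, and scan rate, none drawn from the panel) shows the roughly logistic growth that makes such worms dangerous once a few hosts are infected.

# Toy random-scanning worm simulation; all parameters are arbitrary assumptions.
import random

ADDRESS_SPACE = 100_000   # hypothetical address space
VULNERABLE = set(random.sample(range(ADDRESS_SPACE), 1_000))
SCANS_PER_TICK = 10       # probes each infected host sends per time step

infected = {next(iter(VULNERABLE))}   # a single initially infected host
for tick in range(30):
    new = set()
    for _ in range(len(infected) * SCANS_PER_TICK):
        target = random.randrange(ADDRESS_SPACE)   # scan a random address
        if target in VULNERABLE and target not in infected:
            new.add(target)
    infected |= new
    print(f"t={tick:2d} infected={len(infected)}")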
The focus will then turn to work on detecting and stopping worms. The first example is GrIDS, a system developed at UC Davis in the mid-1990s that detected worms and other large-scale misbehavior by correlating connections that had something in common and occurred sufficiently close together in time. Such connections were assembled into graphs, and large graphs were taken as indications of worms. More recently, Silicon Defense has been working on techniques to prevent the spread of worms on networks.
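A rough sketch of the GrIDS-style aggregation idea appears below; the connection format, time window, and size threshold are simplified assumptions rather than the original GrIDS rule set.

# Sketch: build a graph from connections that occur close together in time
# and flag unusually large connected components as possible worm activity.
from collections import defaultdict

# (timestamp, src_host, dst_host) tuples observed on the network (assumed data)
connections = [
    (0, "A", "B"), (2, "B", "C"), (3, "B", "D"), (4, "C", "E"), (5, "D", "F"),
    (50, "X", "Y"),   # a later, unrelated connection falls outside the window
]

WINDOW = 10          # seconds: connections this close in time are aggregated
SIZE_THRESHOLD = 4   # component size (hosts) above which an alarm is raised

def components_in_window(start):
    adj = defaultdict(set)
    for t, src, dst in connections:
        if start <= t < start + WINDOW:
            adj[src].add(dst)
            adj[dst].add(src)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                      # depth-first traversal of the component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

for comp in components_in_window(0):
    if len(comp) > SIZE_THRESHOLD:
        print("possible worm: hosts", sorted(comp))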
Approaches for stopping scanning and flash worms, along with initial ideas on topological worms, will be presented. Finally, wormholes and honeyfarms will be noted, especially ideas for capturing and automatically characterizing worms early in their spread.