Keynotes
Distinguished Practitioner
Users do the darndest things: True stories from the CyLab Usable Privacy and Security Laboratory
Lorrie Faith Cranor, Carnegie Mellon University, USA
How can we make security and privacy software more usable? The first step is to study our users. Ideally, we would watch them interacting with security or privacy software in situations where they face actual risk. But everyday computer users don't sit around fiddling with security software, and subjecting users to actual security attacks raises ethical and legal concerns. Thus, it can be difficult to observe users interacting with security and privacy software in their natural habitat. At the CyLab Usable Privacy and Security Laboratory, we've conducted a wide variety of studies aimed at understanding how users think about security and privacy and how they interact with security and privacy software. In this talk I'll give a behind-the-scenes tour of some of the techniques we've used to study users both in the laboratory and in the wild. I'll discuss the trials and tribulations of designing and carrying out security and privacy user studies, and highlight some of our surprising observations. Find out what privacy-sensitive items you can actually get study participants to purchase, how you can observe users' responses to a man-in-the-middle attack without actually conducting such an attack, why it's hard to get people to use high-tech cell phones even when you give them away, and what's actually in that box behind the couch in my office.
Lorrie Faith Cranor is an Associate Professor in the School of Computer Science and the Department of Engineering and Public Policy at Carnegie Mellon University. She is director of the CMU Usable Privacy and Security Laboratory (CUPS). She has authored over 80 research papers on online privacy, phishing and semantic attacks, spam, electronic voting, anonymous publishing, usable access control, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability (O'Reilly 2005) and founded the Symposium On Usable Privacy and Security (SOUPS). She also chaired the Platform for Privacy Preferences Project (P3P) Specification Working Group at the W3C and authored the book Web Privacy with P3P (O'Reilly 2002). She has served on a number of boards, including the Electronic Frontier Foundation Board of Directors, and on the editorial boards of several journals. In 2003 she was named one of the top 100 innovators aged 35 or younger by Technology Review magazine. She was previously a researcher at AT&T Labs-Research and taught in the Stern School of Business at New York University.
Invited Essayist
The Good, The Bad, And The Ugly: Stepping on the Security Scale
Mary Ann Davidson, Oracle, USA
Metrics are both fashionable and timely: many regulations that affect cybersecurity rely upon metrics – albeit of the checklist variety in many cases – to ascertain compliance. However, there are far more effective uses of security metrics than external compliance exercises. The most effective use of security metrics is simply to manage better, which may include:
- Make a business case for needed change
- Focus scarce resources on the most pressing problems (with the biggest payoff for resolution)
- Help spot problems -- or successes -- early
- Address "outside" concerns or criticisms fairly and objectively
This paper explores the qualities of good security metrics and their application in security vulnerability handling as well as a software assurance program.
Mary Ann Davidson is the Chief Security Officer at Oracle, responsible for Oracle product security, as well as security evaluations, assessments, and incident handling. She represents Oracle on the Board of Directors of the Information Technology Information Sharing and Analysis Center, is a member of the Global Chief Security Officer Council, and serves on the editorial advisory board of SC Magazine. She was recently named one of Information Security magazine's top five "Women of Vision" and is a 2004 Fed100 award recipient from Federal Computer Week. She has served on the Defense Science Board and was recently named to the Center for Strategic and International Studies commission on cybersecurity. Ms. Davidson has a BS in mechanical engineering from the University of Virginia and an MBA from the Wharton School of the University of Pennsylvania. She has also served as a commissioned officer in the U.S. Navy Civil Engineer Corps, where she was awarded the Navy Achievement Medal.
Classic Paper 1
Reflections on UNIX Vulnerabilities
Matt Bishop, UC Davis, USA
The UNIX operating system was developed in a friendly, collaborative environment without any particular predefined objectives. As it entered less friendly environments, expanded its functionality, and became the basis for commercial, infrastructure, and home systems, vulnerabilities in the system affected its robustness and security. This paper presents a brief history of UNIX vulnerabilities, beginning with a report written in 1981-1983, but never published. It examines how the nature of vulnerabilities has (and has not) changed since then, and presents some thoughts on the future of vulnerabilities in the UNIX operating system and its variants and other UNIX-like systems.
Matt Bishop received his Ph.D. in computer science from Purdue University, where he specialized in computer security, in 1984. He is on the faculty at the Department of Computer Science at the University of California at Davis. His main research area is the analysis of vulnerabilities in computer systems, including modeling them, building tools to detect vulnerabilities, and ameliorating or eliminating them. He is active in information assurance education, and is a charter member of the Colloquium on Information Systems Security Education. He has been active in the area of UNIX security since 1979, and has presented tutorials at SANS, USENIX, and other conferences. His textbook, Computer Security: Art and Science, was published in December 2002 by Addison-Wesley Professional. He also teaches software engineering, machine architecture, operating systems, programming, and (of course) computer security.
Classic Paper 2
Java Security: A Ten Year Retrospective
Li Gong, Mozilla Online Ltd., China
The first edition of Java (both the language and the platform), released in 1995, contained an all-or-nothing security access model. A mid-1997 IEEE Micro paper laid out a vision for the future of Java security, which notably included a model for fine-grained access control, a crypto architecture, and a number of other security mechanisms. The first implementation of these features was officially released in late 1998 as part of the JDK 1.2 platform. Ten years on, the original vision of Java security was largely realized and the overall architecture had in fact been carried over to both the enterprise Java and mobile Java platforms. This paper reflects on lessons -- technical and otherwise -- learned in the process of designing and implementing the Java security architecture and in the aftermath of its release into the real world.
Li Gong is Chairman and CEO of Mozilla Online Ltd., the Beijing-based subsidiary of Mozilla Corporation. Immediately prior to that, he was General Manager of MSN China at Microsoft. He started his career as a researcher, specializing in computer security. He served as both Program Chair and General Conference Chair for ACM CCS, IEEE Security & Privacy, and IEEE CSFW. He was Associate Editor of ACM TISSEC and Associate Editor-in-Chief of IEEE Internet Computing. He held visiting positions at Cornell and Stanford, and was a Guest Chair Professor at Tsinghua University, Beijing. Li obtained his BS and MS at Tsinghua University and a PhD at the University of Cambridge. In 1996 he left SRI to join the newly formed JavaSoft division at Sun, where he was a Distinguished Engineer and Chief Java Security Architect and designed the Java security architecture and product lines in use today across all major Java platforms. Li holds 14 issued US patents, has written three books (including "Java 2 Platform Security"), and received the 1994 Leonard G. Abraham Award given by the IEEE Communications Society for "the most significant contribution to technical literature in the field of interest of the IEEE."
Lunch Speaker
Risk Futures: Who (or What) May Be Eating Your Lunch?
Peter Neumann, SRI, USA
This presentation reflects on many types of risks in the use of computer-related systems. It considers possible alternatives for the future, suggests some remedial approaches, and draws some broad conclusions from the past 25 years. Although various common-sense approaches may not seem novel, they are still urgently relevant.
Peter G. Neumann has doctorates from Harvard and Darmstadt. After 10 years at Bell Labs in Murray Hill, New Jersey, in the 1960s, during which he was heavily involved in the Multics development jointly with MIT and Honeywell, he joined SRI's Computer Science Lab, where he has been since September 1971. He is concerned with computer systems and networks, trustworthiness/dependability, high assurance, security, reliability, survivability, safety, and many risks-related issues such as voting-system integrity, crypto applications and policies, health care, social implications, and human needs -- especially privacy. He moderates the ACM Risks Forum, edits CACM's monthly Inside Risks column, chairs the ACM Committee on Computers and Public Policy, and chairs the National Committee for Voting Integrity (http://www.votingintegrity.org). He created ACM SIGSOFT's Software Engineering Notes in 1976, was its editor for 19 years, and still contributes the RISKS section. He is on the editorial board of IEEE Security and Privacy. He has participated in four studies for the National Academies of Science: Multilevel Data Management Security (1982), Computers at Risk (1991), Cryptography's Role in Securing the Information Society (1996), and Improving Cybersecurity for the 21st Century: Rationalizing the Agenda (2007). His 1995 book, Computer-Related Risks, is still timely. He is a Fellow of the ACM, IEEE, and AAAS, and is also an SRI Fellow. He received the National Computer System Security Award in 2002 and the ACM SIGSAC Outstanding Contributions Award in 2005. He is a member of the U.S. Government Accountability Office Executive Council on Information Management and Technology, and the California Office of Privacy Protection advisory council. He co-founded People For Internet Responsibility (PFIR, http://www.PFIR.org). He has taught courses at Darmstadt, Stanford, U.C. Berkeley, and the University of Maryland.