
Trusting in computer systems


Type

Thesis

Authors

Harbison, William Samuel 

Abstract

We need to be able to reason about large systems, and not just about their components. For this we would like to have conceptual tools that will help us to understand the behaviour of these systems and to make sense of other, possibly conflicting, views. In this dissertation we have sought to indicate the need for a new methodology that will allow us to better identify and understand those areas of possible conflict or lack of knowledge, and we have looked for ways to improve the design of computer-based systems in a practical manner that can be readily understood and applied. In particular, we have taken the concept of trust and examined how it can help us understand some of the basic security aspects of a system. We have paid particular attention to the nature and type of assumptions that are made both within and between computer systems when they seek to communicate with each other. The work contained in this dissertation has been motivated by a belief that the design and implementation of many computer-based systems in operation today do not meet the needs of users and operators, and by a strong desire to identify ways in which the design and engineering of such systems can be improved.

We note that many assumptions are made on a de facto basis, and that they are frequently neither acknowledged nor even recognised for what they are. We show that an incomplete understanding of what is being assumed, relied upon and trusted can lead to an inadequate understanding of the true vulnerabilities of a system. We examine various trust aspects of systems and introduce a definition of trust that we believe can contribute to a greater understanding of system weaknesses. We propose that a system be examined in a manner that analyses the conditions under which it was designed to perform, examines the circumstances under which it has been implemented, and then compares the two. We believe such an approach to be essential since, in our experience, we have (sadly) seldom found the two situations to be the same. It is unfortunately all too common to find a design intended for one context being inappropriately implemented in another. We therefore propose that anyone planning the design of a system, or of part of a system, should look at it from the point of view of each of the participants - including all of the components, the users and the implementers - to see what each is relying on, and to make sure that these assumptions are compatible.

We look at this problem from the perspective of what is being trusted in a system, or what a system is being trusted for. We start from approaches developed in a (military) security context and now in widespread use in commercial distributed systems, and demonstrate how the inappropriate application of this concept of trust can lead to unanticipated risks to the system. We show how the usual use of trust as a system property can restrict the ability to reason about the security properties of a system, and we introduce a new notion of trust that we show to be more fruitful for the analysis of the risk characteristics of systems. In particular, we show how, in contrast, our approach can be applied to the analysis of subsystems and system components. We propose that trust be considered a "relative" concept, in contrast to the more usual usage, and that it is not the result of knowledge but a substitute for it.

We show that although these concepts arose in a security domain, they are equally applicable to the analysis of assumptions and risk throughout a system and its components. In contrast to the standard use of trust as a property of a system, our notion of trust applies only within the context of a specific viewpoint from which risks are judged. We argue that it is only after the introduction of a specific context from which trust is to be judged that we can understand many of the intrinsic vulnerabilities of a distributed system. We have introduced the concept that there is more than one viewpoint from which to describe the behaviour of a system, and therefore the trust relationships that pertain. The utility of this concept lies in its ability to enable the nature of the risks associated with a specific participant to be measured, whether or not these risks are explicitly recognised and accepted by that participant.

We propose a distinction between trust and trustworthiness, and demonstrate that most current uses of the term trust are more appropriately viewed as statements of trustworthiness. In particular, we propose that trust is more properly understood and used as a substitute for knowledge, rather than as the result of knowledge - the traditional "Orange Book" [DOD85] view, in which something is trusted if it exists within the security boundary of the system and can violate the security policy of the system.
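
The dissertation is conceptual rather than code-based, but the comparison the abstract proposes - between the conditions a system was designed for and the circumstances under which it is actually implemented, judged from each participant's viewpoint - can be illustrated with a minimal sketch. The Python below is a hypothetical illustration under our own assumptions; the names (TrustAssumption, unmet_assumptions) and the example data are ours, not the author's. Its only point is that trust is recorded relative to a viewpoint rather than as a global property of the system.

```python
# Hypothetical sketch (not from the thesis): trust modelled as relative to a
# viewpoint, with design-time assumptions compared against the circumstances
# of an actual implementation to surface unacknowledged reliance.
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustAssumption:
    """One participant's reliance on a component for a specific property."""
    viewpoint: str      # the participant from whose point of view trust is judged
    relies_on: str      # the component being relied upon
    for_property: str   # what it is being trusted for (e.g. "confidentiality")


def unmet_assumptions(design: set, implementation: set) -> set:
    """Return design-time assumptions that the implemented system does not honour.

    `design` holds the assumptions the system was designed under;
    `implementation` holds those that actually hold in the deployed context.
    Anything in the difference is being trusted without being trustworthy.
    """
    return design - implementation


# Example: the user's viewpoint versus what the deployment actually provides.
design = {
    TrustAssumption("user", "network", "confidentiality"),
    TrustAssumption("user", "server", "integrity"),
}
implementation = {
    TrustAssumption("user", "server", "integrity"),
}

for gap in unmet_assumptions(design, implementation):
    print(f"{gap.viewpoint} relies on {gap.relies_on} for {gap.for_property}, "
          "but the implementation does not provide it")
```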

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

Digitisation of this thesis was sponsored by Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.