General thoughts on computer security

The following is a document I wrote in mid 2008. I haven’t edited it, although I was tempted to since parts of it sound a bit strange now. Rather than improve on it, I’ll just clarify what it was attempting to say: The connections between devices and/or agents have a finite set of security properties which determine the options available to secure the assets involved. These properties can be simply classified, providing a way to systematically explore the space of security configurations. I felt that the ideas in the document were a good summary of some facts about computer security.


The culmination of my independent studies in computer security involved the development of a system of detailed original models and axioms that describe every aspect of the security state of digital systems, built by generalizing the disparate case-by-case knowledge I had compiled. I had not encountered similar methods or structures anywhere else, because the vast majority of IT security knowledge exists in applied form for various technical domains. Indeed, when work on this system concluded, I no longer needed to continue learning to increase my understanding of computer security. While the concepts used apply to security in all domains, the models are optimized for computerized resources. A summary of it follows.

The fundamental principles of security are authentication and authorization. An asset is secure to the degree its owner is willing to make it if these properties are properly configured to function at that degree (or higher) and understood by the owner to function at that degree (or lower). Security is needed when access to the asset is enabled. In the context of digital security, in which the asset is a digital resource (hardware or software), access is generally arbitrary usage, reading, and/or writing.

If one is to determine the optimal expenditure of funds on security, one must first find the optimal system of security, on a per-asset or per-asset-group basis. The asset involved will have one or more points of access (security/trust chains, or simply chains). Each chain will have one or more links, with the distinction between links depending on the granularity of the analysis. The conceptual chain will “connect” the potential user to the asset in question (with no regard for physical circumstances). Each link of the chain involves an authentication/authorization process of which there are three primary orders.
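Before turning to those orders, here is a minimal sketch of the chain vocabulary in Python. All of the names are my own illustration for this post, not part of the original document:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """One authentication/authorization step along a point of access."""
    description: str  # e.g. "OS file ACL", "network firewall"

@dataclass
class Chain:
    """A single point of access: the ordered links that conceptually
    connect a potential user to the asset."""
    links: list[Link] = field(default_factory=list)

@dataclass
class Asset:
    """The thing being secured, with one or more chains leading to it."""
    name: str
    chains: list[Chain] = field(default_factory=list)
```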

The first order is preventive, and akin to the wall and gate around a medieval castle. This order provides access control at the site of the access, in full context of the asset's execution environment. This is also the order at work in file-access Access Control Lists (ACLs) in an operating system.

The second order is pre-preventive, and akin to a security check before boarding an airplane. This order is applied by an intermediate system before the request arrives at the destination, but strives to serve the same purpose as first order security. It may be cheaper to implement, especially if one specialized device can serve many others. The principal drawback is that the intermediary must be aware of the full system state of the destination device (the asset) in order to rule on the safety of the incoming request. This is often not the case in practice, but the benefit in economy may outweigh the cost. The other main drawback is that even if enough is known about the asset at the time of access to make a ruling, there may not be enough to "sanitize" the incoming request or to provide an effective, safe alternative to the requested access; the intermediary may instead simply drop it, which in some circumstances is unintended behavior.

The third order, also called policy, involves a time delay, the need for evidence, the tracking of identity, the requirement for effective deterrence, a willing judiciary, and a cost-effective punishment system. The costs involved with this method in the digital world dictate that it tends to serve primarily as a backup, although there have been questions about the merits of its existence at all.
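As a hypothetical illustration of the difference between the first two orders (the permission check, the signature, and the error handling below are all invented for this sketch):

```python
import os
import stat

def first_order_read(path: str, requesting_uid: int) -> bytes:
    """First order: the check happens at the asset itself, with full
    context (here, the OS's own permission metadata for the file)."""
    info = os.stat(path)
    if info.st_uid != requesting_uid and not (info.st_mode & stat.S_IROTH):
        raise PermissionError("ACL denies access at the asset")
    with open(path, "rb") as fh:
        return fh.read()

def second_order_filter(request_body: str) -> str:
    """Second order: an intermediary rules on the request before it
    reaches the destination. Lacking the destination's full state, it
    can only match crude signatures, and can only drop what it cannot
    sanitize."""
    if "<script" in request_body.lower():
        raise ValueError("request dropped by intermediary")
    return request_body
```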

Each link in the security/trust chain will either provide access control at one of the three orders, involve trusting an uncontrolled third party to determine the security of the access, or do neither. In practical modeling, links of the final kind would likely be included only to illustrate failures of security.
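Folding that classification into the earlier sketch, each link can be labelled with one of five kinds (again, the naming is mine):

```python
from enum import Enum, auto

class LinkKind(Enum):
    FIRST_ORDER = auto()   # preventive: control at the asset itself
    SECOND_ORDER = auto()  # pre-preventive: control at an intermediary
    THIRD_ORDER = auto()   # policy: deterrence and punishment after the fact
    TRUST = auto()         # rely on an uncontrolled third party
    NONE = auto()          # no control at all; models a failure of security
```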

All security systems in use in digital environments can be described using this method. A web of chains can illustrate trust relationships between end devices and the role each plays in the security chain of others. A potential attacker is presumed to use whatever he perceives to be the "weakest" chain, or sequence of connected chains, that leads to the asset.

Note that what is often derided as "security through obscurity" is often quite secure indeed. For example, a file on an Internet-connected Web server that is meant to be accessible only by a specific individual could be protected in a number of ways; two of these are storing the file in a password-protected folder, or putting the file in a hard-to-guess directory. The latter is considered "security through obscurity", yet accessing it requires almost exactly the same security procedure as the former (apart from the traffic patterns created, the types and permissions of log files stored, and so on, none of which are necessarily risks in a given situation). Ultimately, every chain from the user to the asset needs only one strong link, though combinations of the different kinds are used in different domains. Possible examples of the system in practice are numerous; a few follow, with thorough background.
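The "weakest chain" reasoning can be sketched as a small calculation. Suppose each link is assigned an estimated cost to defeat (the numbers below are invented): an attacker must defeat every link in a chain, so a chain costs as much as its strongest link, and the asset is only as secure as its cheapest chain.

```python
def chain_cost(link_costs: list[float]) -> float:
    # The attacker must defeat every link, so the hardest link sets the
    # price of the whole chain: one strong link is enough.
    return max(link_costs)

def asset_cost(chains: list[list[float]]) -> float:
    # The attacker is presumed to pick whichever chain looks cheapest.
    return min(chain_cost(c) for c in chains)

# Hypothetical numbers for the web-server file example above:
password_folder = [70.0]     # defeat the password check
obscure_directory = [65.0]   # guess the hard-to-guess path
print(asset_cost([password_folder, obscure_directory]))  # -> 65.0
```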

An example of a simplistic security chain model, illustrating the typical protection used against XSS (cross-site scripting) attacks, involves three entities: the server hosting the vulnerable Web page, the attacker, and the victim user. In a persistent XSS attack, the chain begins with a second-order link between the attacker and the server. The server inspects the input sent, to determine whether it would violate the user's presumed (and usually generalized) security policy. In the case of JavaScript malware, the server checks that none of the input could be executed (in the [likely] context of the eventual Web page it will be rendered in) and either encodes it to prevent this from happening, or cancels the request if the check fails. Note that this process may have to be looped until the result is safe, to prevent certain encoding bugs (a sketch follows below).

The user then requests the Web page containing the attacker's input (the reason for the visit may be the result of another chain's function). Generally, the chain link here is one of trust between the user and the server: the user trusts that the server (an uncontrolled third party) will not serve anything dangerous. If dynamic content does get served and attempts to connect to a remote host other than the original server (subject to a few loopholes and exceptions), the connection will be blocked; this is the Same Origin Policy (SOP). It removes functionality (and with it the need for security chains controlling this kind of execution access), even though such access could be legitimate. However, the check comparing the server and the remote host is done using domain names. This causes problems because the SOP (showing second-order characteristics) presumes it knows how requests are resolved and sent on the user's network, a presumption with known weaknesses (DNS rebinding, for example). The user's understanding of his security state is fundamentally to blame in this case. Note that the integrity of the network all devices are on, and other aspects of the scenario, represent the need for additional security chains, but are treated as perfectly secure in this example.
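The looped encoding check mentioned above might look like the following sketch, assuming HTML output. A real filter would be context-sensitive; this only shows the fixed-point idea:

```python
import html

def normalize_then_encode(text: str, max_rounds: int = 10) -> str:
    """Repeatedly decode until the input stops changing (defeating
    double-encoding tricks), then HTML-encode the result once."""
    for _ in range(max_rounds):
        decoded = html.unescape(text)
        if decoded == text:
            break
        text = decoded
    else:
        # Never stabilized: the safest ruling is to cancel the request.
        raise ValueError("input did not stabilize; dropping request")
    return html.escape(text)
```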

The final link in the most common scenario for persistent XSS defense can be the user disallowing JavaScript altogether on a per-domain basis. Essentially similar to the SOP, this overrides the trust in the domain from the previous link, and in effect closes access (removing the need for a security chain altogether, at the cost of functionality) to JavaScript-enabled Web sites not on the domain-allow list. A non-persistent XSS attack would involve the user trusting or not trusting the attacker's sent link based on the information available (a trust link), or using security of some other kind. Note that links containing JavaScript in the query string or post body may be legitimate. The link here would be one of either possible first-order security (if the script affects only client-side functions) or trust (if the script accesses the server, where the user cannot monitor). Note also that the protections described here are as fine-grained as is generally regarded to be feasible. A first-order security link on the client side alone, combined with trust links based on the destination domain of remote-connecting scripts, would enable flexible cross-domain access. The feasibility of this has been doubted (by Mark Russinovich, for one) but by no means disproved.
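A sketch of the per-domain allow list (the domains are placeholders; real implementations, such as browser script-blocking extensions, also handle subdomains and wildcard rules):

```python
ALLOWED_SCRIPT_DOMAINS = {"example.com", "trusted.example.org"}

def scripts_allowed(page_domain: str) -> bool:
    # Trust from the previous link is overridden: JavaScript runs only
    # for domains the user has explicitly allowed; everywhere else the
    # functionality (and the need for a chain) is simply removed.
    return page_domain in ALLOWED_SCRIPT_DOMAINS

assert not scripts_allowed("attacker.example.net")
```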

Further examples could be included (SQL injection, encrypted/steganographed traffic in a workplace).

* * *