You are a computer administrator for a large manufacturing company. In the middle of a production run, all the mainframes on a crucial network grind to a halt. Production is delayed, costing your company millions of dollars. Upon investigating, you find that a virus was released into the network through a specific account. When you confront the owner of the account, he claims he neither wrote nor released the virus, but he admits that he has distributed his password to “friends” who need ready access to his data files.
Is he liable for the loss suffered by your company? In whole or in part? And if in part, for how much? These and related questions are the subject of computer law. The answers may vary depending on the state in which the crime was committed and the judge who presides at the trial. Computer security law is a new field, and the legal establishment has yet to reach broad agreement on many key issues. Advances in computer security law have been impeded by the reluctance of lawyers and judges to grapple with the technical side of computer security issues[1].
This problem could be mitigated by involving technical computer security professionals in the development of computer security law and public policy. This paper is meant to help bridge the gap between the technical and legal computer security communities. The principal objective of computer security is to protect and assure the confidentiality, integrity, and availability of automated information systems and the data they contain. Each of these terms has a precise meaning grounded in basic technical ideas about the flow of information in automated information systems.
There is a broad, top-level consensus regarding the meaning of most technical computer security concepts. This is partly because of government involvement in proposing, coordinating, and publishing the definitions of basic terms[2]. The meanings of the terms used in government directives and regulations are generally made to be consistent with past usage. This is not to say that there is no disagreement over the definitions in the technical community. Rather, the range of such disagreement is much narrower than in the legal community.
For example, there is presently no legal consensus on exactly what constitutes a computer[3]. The term used to establish the scope of computer security is “automated information system,” often abbreviated “AIS.” An AIS is an assembly of electronic equipment, hardware, software, and firmware configured to collect, create, communicate, disseminate, process, store, and control data or information. This includes numerous items beyond the central processing unit and associated random access memory, such as input/output devices (keyboards, printers, etc.). Every AIS is used by subjects to act on objects.
A subject is any active entity that causes information to flow among passive entities called objects. For example, a subject could be a person typing commands that transfer information from a keyboard (an object) to memory (another object), or a process running on the central processing unit that sends information from a file (an object) to a printer (another object).
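As an illustrative sketch only (the class and method names here are invented for this example, not drawn from the paper or any standard), the subject/object relationship can be modeled as an active entity moving information between passive containers:

```python
# Minimal sketch of the subject/object model: a subject is an active
# entity that causes information to flow among passive objects.

class AISObject:
    """A passive container of information (keyboard buffer, memory, file)."""
    def __init__(self, name, data=""):
        self.name = name
        self.data = data

class Subject:
    """An active entity (a person or process) that moves information."""
    def __init__(self, name):
        self.name = name

    def transfer(self, source, destination):
        # Information flows from one object to another via the subject.
        destination.data += source.data

keyboard = AISObject("keyboard", "print report")
memory = AISObject("memory")
user = Subject("operator")
user.transfer(keyboard, memory)
print(memory.data)  # the typed command now resides in memory
```

The point of the sketch is that the objects themselves are inert; only the subject initiates the flow, which is why security mechanisms focus on regulating what subjects may do.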
Confidentiality is roughly equivalent to privacy. If a subject circumvents confidentiality measures designed to prevent its access to an object, the object is said to be “compromised.” Confidentiality is the most advanced area of computer security because the U.S. Department of Defense has invested heavily for many years in finding ways to maintain the confidentiality of classified data in AISs[4]. This investment has produced the Department of Defense Trusted Computer System Evaluation Criteria[5], alternatively called the Orange Book after the color of its cover. The Orange Book is perhaps the single most authoritative document about protecting the confidentiality of data in classified AISs.
Integrity measures are meant to protect data from unauthorized modification. The integrity of an object can be assessed by comparing its current state to its original or intended state. An object which has been modified by a subject without proper authorization is said to be “corrupted.” Technology for ensuring integrity has lagged behind that for confidentiality[4]. This is because the integrity problem has until recently been addressed by restricting access to AISs to trustworthy subjects. Today, the integrity threat is no longer tractable through access control alone.
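The idea of assessing integrity by comparing an object's current state to its original or intended state can be sketched with a cryptographic checksum. The choice of SHA-256 here is an assumption made for illustration; the paper does not prescribe any particular mechanism:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Record a checksum of the object's state; any change to the
    # data produces a different checksum.
    return hashlib.sha256(data).hexdigest()

# Baseline taken while the object is in its original (intended) state.
original = b"quarterly production totals"
baseline = fingerprint(original)

# Later, a subject modifies the object without authorization...
tampered = b"quarterly production totals (edited)"

# ...and comparison against the baseline reveals the corruption.
print(fingerprint(original) == baseline)   # unmodified object matches
print(fingerprint(tampered) == baseline)   # corrupted object does not
```

A checksum detects corruption after the fact; it does not by itself prevent modification, which is why the text pairs integrity assessment with access control.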
The desire for wide connectivity through networks and the increased use of commercial off-the-shelf software has limited the degree to which most AISs can trust their subjects. Interest in integrity has been accelerating over the past few years, and integrity will likely become as important a priority as confidentiality in the future. Availability means having an AIS and its associated objects accessible and functional when needed by its user community. Attacks against availability are called denial-of-service attacks. For example, a subject may release a virus which absorbs so much processor time that the AIS becomes overloaded.
Availability is by far the least well developed of the three security properties, largely for technical reasons involving the formal verification of AIS designs[4]. Although such verification is not likely to become a practical reality for many years, techniques such as fault tolerance and software reliability are used to mitigate the effects of denial-of-service attacks. The three security properties of confidentiality, integrity, and availability are achieved by labeling the subjects and objects in an AIS and regulating the flow of information between them according to a predetermined set of rules called a security policy.
The security policy specifies which subject labels can access which object labels. For example, suppose you went shopping and had to present your driver's license to pick up some badges assigned to you at the entrance, each listing a brand name. The policy at some stores is that you can buy only the brand names listed on your badges. At the check-out lane, the cashier compares the brand name of each object you want to buy with the names on your badges. If there's a match, she rings it up. But if you choose a brand name that doesn't appear on one of your badges, she puts it back on the shelf.
You could be sneaky and alter a badge, pretend to be your neighbor who has more badges than you, or find a clerk who will turn a blind eye. No doubt the store would employ a host of measures to prevent you from cheating. The same situation exists on secure computer systems. Security measures are employed to prevent illicit tampering with labels, positively identify subjects, and provide assurance that the security measures are doing the job correctly. A comprehensive list of minimal requirements for securing an AIS is presented in the Orange Book[5].
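The badge analogy reduces to a simple label check: a security policy grants a subject access to an object only when the object's label appears among the subject's labels. The function and badge names below are invented for illustration; a real policy would also have to protect the labels themselves from tampering:

```python
# Sketch of a label-based security policy, mirroring the badge analogy:
# the cashier (reference check) compares the object's label against the
# set of labels held by the subject.

def access_allowed(subject_labels, object_label):
    # Access is granted only if the object's label is among
    # the subject's labels.
    return object_label in subject_labels

shopper_badges = {"BrandA", "BrandB"}   # labels assigned to the subject

print(access_allowed(shopper_badges, "BrandA"))  # match: rung up
print(access_allowed(shopper_badges, "BrandC"))  # no match: back on the shelf
```

Note that the check itself is trivial; as the paragraph above observes, the hard part is preventing tampering with labels, authenticating subjects, and gaining assurance that the check is always applied.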