The user is often the weak link in the security of a system. Many security breaches have been caused by weak passwords, unencrypted files left on unprotected computers, and successful social engineering attacks. In a social engineering attack, the user is tricked into either divulging secret information or running malicious code. For example, the Melissa virus and the Love Letter worm each infected thousands of computers when users downloaded and opened files sent in email. Therefore, it is vitally important that your program's user interface enhance security by making it easy for the user to make secure choices and avoid costly mistakes. This article discusses how doing things that are contrary to user expectations can cause a security risk, and gives hints for creating a user interface that minimizes the risk from social engineering attacks.
Secure human interface design is a complex topic affecting operating systems as well as individual programs. This article gives only a few hints and highlights. For an extensive discussion of this topic, see Cranor and Garfinkel, Security and Usability: Designing Secure Systems that People Can Use, O'Reilly, 2005. There is also an interesting weblog on this subject maintained by researchers at the University of California at Berkeley (http://usablesecurity.com/).
This article covers the following topics:

Use Secure Defaults
Meet Users' Expectations for Security
Secure All Interfaces
Validate All Inputs
Place Files in Secure Locations
Make Security Choices Clear
Fight Social Engineering Attacks
Use Secure Defaults

Most users rely on the default settings of a program and assume that the program is secure as installed. If they have to make specific choices and take multiple actions in order to make a program secure, few will do so. Therefore, the default settings for your program should be as secure as possible. If your program launches other programs, for example, it should launch them with the minimum privileges they need to run. The Love Letter worm, a malicious script run when the user opened an email attachment, was able to do great damage because it ran with the user's privileges, which on most affected systems amounted to full administrative rights.
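For instance, a program that must launch a helper tool from a privileged context can drop to an unprivileged user and group before calling exec. The following is a minimal POSIX sketch; the function name, tool path, and ID values are placeholders, not from the original article.

```c
#include <sys/types.h>
#include <unistd.h>

/* Minimal POSIX sketch: launch a helper tool with reduced privileges.
   The function name and the uid/gid values the caller passes in are
   illustrative placeholders. */
static int launch_unprivileged(const char *tool_path, uid_t uid, gid_t gid)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed */
    if (pid == 0) {
        /* Child: drop the group first, then the user. The order
           matters -- once the UID is no longer root, setgid fails. */
        if (setgid(gid) != 0 || setuid(uid) != 0)
            _exit(1);
        /* Verify that root privileges cannot be regained. */
        if (setuid(0) == 0)
            _exit(1);             /* still privileged: refuse to continue */
        execl(tool_path, tool_path, (char *)NULL);
        _exit(1);                 /* exec failed */
    }
    return 0;                     /* parent: child is running unprivileged */
}
```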
There is a common belief that security and convenience are incompatible. With careful design, this does not have to be so. In fact, it is very important that the user not have to sacrifice convenience for security, because many users will choose convenience in that situation. In many cases, a simpler interface is more secure, because the user is less likely to ignore security features and less likely to make mistakes. Whenever possible, you should make security decisions for your users: you know more about security than they do, and if you can't evaluate the evidence to determine which choice is most secure, the chances are your user will not be able to do so either. For a detailed discussion of this issue and a case study, see the article "Firefox and the Worry-Free Web" in Cranor and Garfinkel, Security and Usability: Designing Secure Systems that People Can Use.
Meet Users' Expectations for Security

If your program handles data that the user expects to be kept secret, make sure that you protect that data at all times. That means not only keeping the data in a secure location or encrypting it on the user's computer, but also not handing it off to another program unless you can verify that the other program will protect it, and not transmitting it over an insecure network. If for some reason you cannot keep the data secure, you should make this situation obvious to users and give them the option of canceling the insecure operation. In this regard, note that the absence of an indication that an operation is secure is a poor way to inform the user that the operation is insecure. A common example is the lock icon (usually small and inconspicuous) that web browsers display on pages protected by SSL/TLS or a similar protocol; the user has to notice that this icon is absent (or, in the case of a spoofed web page, in the wrong place) in order to take action. Instead, the program should prominently indicate each web page or operation that is not secure.
The user must be made aware whenever they grant authorization to some entity to act on their behalf or to gain access to their files or data. For example, a program might allow users to share files with other users on remote systems in order to allow collaboration. In this case, sharing should be off by default. If the user turns it on, the interface should make clear the extent to which remote users can read from and write to files on the local system. If turning on sharing for one file also lets remote users read any other file in the same folder, for example, the interface must make this clear before sharing is turned on. In addition, as long as sharing is on, there should be some clear indication that it is on, lest users forget that their files are accessible to others.
Authorization should be revocable: if a user grants authorization to someone, the user generally expects to be able to revoke that authorization later. Whenever possible, your program should not only make this possible, it should make it easy to do. If for some reason it will not be possible to revoke the authorization, you should make that clear before granting the authorization. You should also make it clear that revoking authorization cannot reverse damage already done (unless your program provides a restore capability).
Similarly, any other operation that affects security but that cannot be undone should either not be allowed or the user should be made aware of the situation before they act. For example, if all files are backed up in a central database and can't be deleted by the user, the user should be aware of that fact before they record information that they might want to delete later.
Your program acts as the user's agent, so it must carefully avoid performing operations that the user does not expect or intend. For example, avoid automatically running code if it performs functions that the user has not explicitly authorized.
Secure All Interfaces

Some programs have multiple user interfaces, such as a graphical user interface, a command-line interface, and an interface for remote access. If any of these interfaces requires authentication (a password, for example), then all the interfaces should require it. Furthermore, if you require authentication through a command-line or remote interface, be sure the authentication mechanism is secure; don't transmit passwords in plaintext, for example.
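As an illustration, one way to avoid sending a reusable plaintext password over a remote interface is a challenge-response exchange: the server sends a fresh random challenge, and the client proves knowledge of a shared secret by returning a keyed hash of that challenge. The sketch below is a minimal example assuming Mac OS X's CommonCrypto library; apart from CCHmac, the function names are hypothetical.

```c
#include <stddef.h>
#include <CommonCrypto/CommonDigest.h>
#include <CommonCrypto/CommonHMAC.h>

/* Client side: prove knowledge of a shared secret without ever sending
   the secret itself. Only the challenge and the HMAC cross the wire. */
static void challenge_response(const unsigned char *secret, size_t secret_len,
                               const unsigned char *challenge, size_t challenge_len,
                               unsigned char out[CC_SHA256_DIGEST_LENGTH])
{
    CCHmac(kCCHmacAlgSHA256, secret, secret_len, challenge, challenge_len, out);
}

/* Server side: compare the received response with the expected one in
   constant time, so timing does not reveal how many leading bytes of a
   guess were correct. */
static int responses_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```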
Validate All Inputs

As discussed in “Validating Input,” every time your program accepts input from another entity (a user or another program), there is a potential for that entity to enter inappropriate data. By carefully crafting such input, an attacker can sometimes cause a buffer overflow or cause your program to interpret the data in a way you didn't intend, corrupting data or even passing control to the attacker's code. Wherever the user can enter data, your user interface should make clear, before the user enters it, what sort of data and what quantity of data are acceptable, and your program should check the input to enforce these restrictions. If the user disregards your instructions and enters inappropriate data, tell them exactly what they did wrong and how to correct it, as in the sketch that follows. If some entity other than the user attempts to enter inappropriate data, log the attempt and inform the user of the problem and the steps they can take to address it.
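For instance, a text field that promises to accept a short alphanumeric name should enforce exactly that promise and explain any rejection. This is a minimal sketch in C; the field, length limit, and message wording are illustrative, not from the original article.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <syslog.h>

#define MAX_NAME_LEN 63   /* illustrative limit, stated to the user up front */

/* Hypothetical validator for a user-name field: enforce both the length
   and the character set that the interface promised, and say exactly
   what was wrong when rejecting the input. */
static int validate_user_name(const char *input)
{
    size_t len = strlen(input);
    if (len == 0 || len > MAX_NAME_LEN) {
        fprintf(stderr, "Name must be 1 to %d characters long.\n", MAX_NAME_LEN);
        return 0;
    }
    for (size_t i = 0; i < len; i++) {
        if (!isalnum((unsigned char)input[i]) && input[i] != '_') {
            fprintf(stderr,
                    "Name may contain only letters, digits, and '_'; "
                    "character %zu ('%c') is not allowed.\n",
                    i + 1, input[i]);
            return 0;
        }
    }
    return 1;
}

/* When the bad input comes from another program rather than the user,
   log the attempt as well. */
static void reject_remote_input(const char *source)
{
    syslog(LOG_WARNING, "rejected malformed input from %s", source);
}
```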
Place Files in Secure Locations

Unless you are encrypting all output, the location where you save files has important security implications. For example, see “Time Of Check–Time Of Use” for a discussion of how to keep temporary files secure. Note that FileVault can secure the user's home folder, but not other locations where the user might choose to place files. If files contain information that must be protected, you should restrict the locations where users can save them. If you allow the user to select the location in which to save files, make the security implications of each choice clear; specifically, the user must understand that, depending on where a file is placed, it might be accessible to other applications or even to remote users.
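As a concrete example of the temporary-file case, the following sketch creates a temporary file atomically, so that no attacker can slip a file or symlink in between checking a name and creating it. It assumes a POSIX system; the /tmp path and "myapp" prefix are placeholders.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a temporary file that only the current user can read or write.
   mkstemp() picks an unused name and opens the file in one atomic step,
   avoiding the time-of-check/time-of-use race of testing for a name and
   then creating the file separately. */
static int secure_temp_file(char *path, size_t size)
{
    snprintf(path, size, "/tmp/myapp.XXXXXX");

    mode_t old_mask = umask(S_IRWXG | S_IRWXO);  /* no group/other access */
    int fd = mkstemp(path);                      /* creates and opens atomically */
    umask(old_mask);

    return fd;   /* -1 on failure; the caller unlinks the path when done */
}
```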
Make Security Choices Clear

When giving the user a choice that has security implications, make the potential consequences of each choice clear. The user should never be surprised by the result of an action. Express the choice in terms of consequences and trade-offs, not technical details. For example, a choice of encryption methods should be based on the level of security (expressed in simple terms, such as how long it might take to break the encryption) versus the time and disk space required to encrypt the data, not on the type of algorithm and the length of the key to be used. If there are no practical differences that matter to the user (as when the more secure encryption method is just as efficient as the less secure one), just use the most secure method and don't offer the choice at all.
Be sensitive to the fact that few users are security experts. For example, most users don't know what a digital certificate is, let alone the implications of accepting a certificate with an unknown anchor. Give as much information—in clear, nontechnical terms—as necessary for them to make an informed decision. In some cases, it might be best not to give them the option of changing the default behavior. For example, letting the user permanently add an anchor certificate might not be a good idea if you can't be confident that the user can evaluate the validity of the certificate. (If the user is a security expert, they'll know how to add an anchor certificate to the keychain without the help of your application.)
If you provide security features, make their presence clear to the user. For example, if your mail application lets users see the certificate used to sign a message, but only by double-clicking a small icon, most users will never realize that the feature is available.
In an often-quoted but rarely applied monograph, Jerome Saltzer and Michael Schroeder wrote "It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Also, to the extent that the user's mental image of his protection goals matches the mechanisms he must use, mistakes will be minimized. If he must translate his image of his protection needs into a radically different specification language, he will make errors." (Saltzer and Schroeder, "The Protection of Information in Computer Systems," Proceedings of the IEEE 63:9, 1975.)
For example, you can assume the user understands that the data must be protected from unauthorized access; however, you cannot assume the user has any knowledge of encryption schemes or knows how to evaluate password strength. In this case, your program should ask questions in the user's own terms, such as "Is your computer physically secure, or is it possible that an unauthorized user will have physical access to it?" and "Is your computer connected to a network?" From the user's answers, you can determine how best to protect the data. Do not ask questions such as "Do you want to encrypt your data, and if so, with which encryption scheme?", "How long a key should be used?", or "Do you want to permit SSH access to your computer?", because these questions don't correspond to the user's view of the problem, and the answers to them are therefore likely to be erroneous. In this regard, it is very important to understand the user's perspective: an interface that seems simple or intuitive to a programmer is very rarely simple or intuitive to average users.
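To make this concrete, the sketch below shows one hypothetical way to derive technical settings from answers to user-level questions. The structure and field names are illustrative only; the point is that the program, not the user, translates the answers into algorithms and key sizes.

```c
#include <stdbool.h>

/* Hypothetical mapping from user-level questions to technical settings.
   The questions come from the user's view of the problem. */
typedef struct {
    bool physically_secure;     /* "Could someone else sit at this computer?" */
    bool connected_to_network;  /* "Is this computer connected to a network?" */
} UserAnswers;

typedef struct {
    bool encrypt_on_disk;
    bool encrypt_in_transit;
} SecuritySettings;

static SecuritySettings settings_for(UserAnswers a)
{
    SecuritySettings s;
    s.encrypt_on_disk    = !a.physically_secure;    /* others can reach the disk */
    s.encrypt_in_transit = a.connected_to_network;  /* data may leave the machine */
    return s;
}
```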
To quote Ka-Ping Yee (User Interaction Design for Secure Systems, at http://www.eecs.berkeley.edu/Pubs/TechRpts/2002/CSD-02-1184.pdf):
In order to have a chance of using a system safely in a world of unreliable and sometimes adversarial software, a user needs to have confidence in all of the following statements:
Things don’t become unsafe all by themselves. (Explicit Authorization)
I can know whether things are safe. (Visibility)
I can make things safer. (Revocability)
I don’t choose to make things unsafe. (Path of Least Resistance)
I know what I can do within the system. (Expected Ability)
I can distinguish the things that matter to me. (Appropriate Boundaries)
I can tell the system what I want. (Expressiveness)
I know what I’m telling the system to do. (Clarity)
The system protects me from being fooled. (Identifiability, Trusted Path)
Fight Social Engineering Attacks

Social engineering attacks are particularly difficult to fight. In a social engineering attack, the attacker fools the user into executing attack code or giving up private information. A common form of social engineering attack is referred to as phishing: the creation of an official-looking email or web page that fools the user into thinking they are dealing with an entity with which they are familiar, such as a bank at which they have an account. Typically, the user receives an email informing them that there is something wrong with their account and instructing them to click a link in the email. The link takes them to a web page that spoofs a real one; that is, it includes the icons, wording, and graphical elements that the user is used to seeing on the legitimate page. The user is instructed to enter information such as their Social Security number and password. Having done so, the user has given up enough information to allow the attacker to access the user's account.
Fighting phishing and other social engineering attacks is difficult because the computer's perception of an email or web page is fundamentally different from that of the user. If an email contains a graphic that links to a URL, and the graphic appears to the user to be text with a familiar name (such as Apple.com), then the computer sees a graphic and the URL to which it links, but the user sees a link to Apple. The user cannot easily tell that the graphic is not text and does not link to the location they expect; the computer cannot tell that the graphic contains misleading text.
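One partial countermeasure is for the mail client or browser to compare what a link appears to say with where it actually goes. The following sketch is a rough heuristic, assuming the visible link text and destination URL are available as plain C strings; the function name is hypothetical, and as the next paragraphs discuss, such heuristics are far from complete.

```c
#include <string.h>
#include <strings.h>   /* strncasecmp */

/* Rough heuristic: if the visible text of a link looks like a host name
   (for example, "apple.com") but does not occur in the host part of the
   URL the link actually points to, flag the link as suspicious. This
   catches only the simplest spoofs; it is a sketch, not a defense. */
static int link_is_suspicious(const char *visible_text, const char *href)
{
    size_t tlen = strlen(visible_text);
    if (tlen == 0 || strchr(visible_text, '.') == NULL)
        return 0;                       /* text doesn't look like a host name */

    const char *host = strstr(href, "://");
    host = host ? host + 3 : href;
    size_t host_len = strcspn(host, "/");

    /* Does the visible "host" appear within the real host? */
    for (size_t i = 0; i + tlen <= host_len; i++)
        if (strncasecmp(host + i, visible_text, tlen) == 0)
            return 0;                   /* matches the real destination */
    return 1;                           /* looks like a host, but isn't the real one */
}

/* link_is_suspicious("apple.com", "http://www.apple.com/") returns 0;
   link_is_suspicious("apple.com", "http://evil.example.com/x") returns 1. */
```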
Most programs, upon detecting a problem or discrepancy, display a dialog box informing the user of the problem. Often this approach does not work, however. For one thing, the user might not understand the warning or its implications. For example, if the dialog warns the user that the site to which they are linking has a certificate whose name does not match the name of the site, the user is unlikely to know what to do with that information, and is likely to ignore it. Furthermore, if the program puts up more than a few dialog boxes, the user is likely to ignore all of them.
Some creative techniques have been tried for fighting social engineering attacks, including trying to recognize URLs that are similar to, but not the same as, well-known URLs, using private email channels for communications with customers, and allowing users to see messages only if they come from known, trusted sources. All of these techniques have problems, and the sophistication of social engineering attacks is increasing all the time. For a more in-depth analysis of the problem, more suggested approaches to fighting it, and some case studies, see Cranor and Garfinkel, Security and Usability: Designing Secure Systems that People Can Use.
© 2008 Apple Inc. All Rights Reserved. (Last updated: 2008-05-23)