Developing secure software involves planning, execution, and testing. In the planning phase, you must determine the nature of the threat to your software. During execution, you must avoid using insecure coding methods. In the testing phase, there are tools available to help you find known security vulnerabilities such as buffer overflows. This article provides general information and a checklist to help you get started. The other articles in this document provide advice for avoiding many of these vulnerabilities. There are also many other books available with detailed information that can help with each phase of this process (see “Introduction to Secure Coding Guide” for a reading list).
Risk Assessment and Threat Modeling
Security Development Checklists
Imagine a program that requires authentication, using multiple authentication methods, to perform any operation, runs only long enough to perform that operation, does not share use of the CPU with any other program, and then quits, requiring reauthorization for the next operation. Such a mode of operation would be very secure, and might be appropriate for a program that launches nuclear missiles, but would you want to use a word processor that acted like that? On the other hand, imagine a program that always runs with root privileges and performs any operation you like without ever requiring authorization. Such a program would be easy to use and would cause no problems on a physically secure computer that is not connected to a network, but would be a security risk otherwise. Somewhere in between these two extremes is the balance between security and user convenience that you need to strike in your software. Exactly where your software fits in this continuum depends on the damage that might occur if your program is compromised—the risk—and the types of attacks the software is likely to face—the threat.
This section gives a brief, high-level introduction to the principles of risk assessment and threat modeling. For a much more thorough treatment, see Howard and LeBlanc, Writing Secure Code (second edition), Microsoft Press, 2003, and Anderson, Security Engineering, John Wiley & Sons, 2001.
In order to assess the risk, you should first assume that your program will be attacked. However, the amount of time and money an attacker is likely to be willing to spend on hacking your program depends on factors such as the value of the data your program handles (thousands of credit card numbers or a user's recipe collection?) and how widely your program will be distributed (used by a single, small workgroup or part of a worldwide operating system rollout?). You need to decide what level of risk is acceptable. A loss of data that will cost your company $1000 to rectify doesn't justify a $10,000 development effort to close all security bugs. On the other hand, damage to your company's reputation might be worth far more in the long run than it would cost to design and develop secure code.
Here are some factors to consider when evaluating risk:
What is the worst thing that can happen if your software is successfully attacked? Theft of a user's identity? Allowing an attacker to gain control of a user's computer? Or just enabling a hacker to get an unusually high score in pinball?
How hard is it to mount a successful attack? If exploiting a vulnerability would require installing a trojan on the user's computer that can take advantage of a race condition that occurs only once in 50 times the program starts up, you might decide the level of risk is acceptable. If the exploit can be put into a script and used by script kiddies or automated to spread by botnets, the level of risk is much higher.
How big a target is it? Is the program installed by default on tens of thousands of computers? Does an attack depend on a user selecting an unusual set of options so that few copies of the program are vulnerable?
How many users would be affected? A denial of service attack on a server might affect thousands of users if even one server is attacked. A worm spread by a common email program might infect thousands of computers.
How accessible is the target? Does running the program require local access, or does the program accept requests across a network? Is authentication required to establish a connection, or can anyone send requests to the program?
Next, you have to determine how your program might be attacked. To this end, you need to create a threat model of your software, which is basically a high-level data-flow model. Every input to the program is a potential attack target—if an attacker can cause a buffer overflow, they might be able to run their own code or otherwise compromise the user's data or system. Every point at which the program outputs data, either to the user or to another software module, is a potential attack point. The attacker might be able to gain access to private information stored on the system, or to read and modify the information being passed between modules (a "man in the middle" attack). Any data stored on the system, either permanently (as in a database) or temporarily (as in a global variable) could potentially be compromised.
There are several types of threats to consider, including the following:
Modifying data: either data used internally by the program (such as interprocess messages), data acted on by the program (such as numbers on which the program does a statistical analysis or an audio track that the program filters), or data stored on disk to which the program gives access (directly, as in a database, or indirectly, as when the attacker uses a vulnerability in a program to take control of a computer).
Compromising data: getting access to secrets either directly through the program or indirectly, by taking control of the computer.
Denial of service: either causing an application or server to stop functioning, or making a server so busy that legitimate users can't get access to it.
Executing malicious code, especially with administrator or root access: the attacker might get your application to execute the attacker's code by exploiting a buffer overflow or by code insertion in a URL command, for example. If your application is running with administrative privileges, the attacker's code will be privileged as well. Once an attacker has administrative control of a computer, all other threats are possible as well.
Spoofing: the attacker might be able to guess or obtain a valid username and password and therefore authenticate as an authorized user. Or, a spoofed server might be able to convince a client application that it is a legitimate server and get the client to give it data, or get the user to provide secrets, such as passwords.
As related problems, you should determine how to guarantee the integrity of your software to prevent it being modified by a virus or worm, and how to ensure nonrepudiation—that is, how to make it impossible for a user to deny performing an operation (such as using a specific credit card number).
The governments of the United States, Canada, the United Kingdom, France, Germany, and the Netherlands have worked together to develop a standardized process and set of standards that can be used to evaluate the security of software products. This process and set of standards is called the Common Criteria. As an attempt to systematize security evaluations, the Common Criteria can be helpful in suggesting a large number of potential problems that you can look for. On the other hand, as with any standardization scheme, the Common Criteria cannot anticipate vulnerabilities that haven't been seen before and is less flexible than might be wished. Although opinions of security experts vary as to the value of a Common Criteria evaluation, some government agencies cannot use software that hasn't been through a full Common Criteria evaluation by an accredited laboratory. Mac OS X v10.3.6 received Common Criteria certification in January 2005. For more information about the Common Criteria, including links to download the complete official criteria, see the Common Criteria portal at http://www.commoncriteriaportal.org/ and the website of the Common Criteria Evaluation and Validation Scheme (CCEVS) (http://www.niap-ccevs.org/cc-scheme/).
This section presents a set of security audit checklists that you can use to help reduce the security vulnerabilities of your software. These checklists are designed to be used during software development. If you read this section all the way through before you start coding, you may avoid many security pitfalls that are difficult to correct in a completed program.
Note that these checklists are not exhaustive; you might not have any of the potential vulnerabilities discussed here and still have insecure code. For this reason, it's very important that you have your code reviewed for security problems by an independent reviewer. A security expert would be best, but any competent programmer, if aware of what to look for, might find problems that you, as the author of the code (and therefore too close to the code to be fully objective), might have overlooked. In addition, any time the code is updated or changed in any way, including to fix bugs, it should be checked again for security problems.
This checklist is intended to determine whether your code ever runs with elevated privileges, and if it does, how best to do so safely. Note that it's best to avoid running with elevated privileges if possible; see “Avoiding Elevated Privileges.”
Do any components of your program run with privileges that are different from that of the logged on user (applications) or different from that with which it started (servers)?
Sometimes a program needs elevated privileges to perform a limited number of operations, such as writing files to a privileged directory or opening a privileged port. In most cases, a program can get by without elevated privileges. If an attacker can get your code to launch the attacker's code, the attacker's code runs with the privilege that your code had at the time of launch. If malicious code gains root privileges, it can take complete control of the computer. Because of the risk of having your privileged code hijacked by an attacker, you should avoid elevating privileges if at all possible. If you must run code with elevated privileges, never run your main process in this way. Instead, put the privileged operations in a separate helper process that runs with elevated privileges, and terminate that process as soon as possible. See “Elevating Privileges Safely” and Authorization Services Programming Guide for more information.
Important: If all or most of your code runs with root or other elevated privileges, or if you have complex code that performs multiple operations with elevated privileges, then your program could have a serious security vulnerability. You should seek help in performing a security audit of your code to reduce your risk.
For servers, if you start with elevated privileges and then drop privileges, be sure you use a locally unique user ID for your program. If you use some standard UID such as unknown or nobody, then any other process running with that same UID can interact with your program, either directly through interprocess communication, or indirectly by altering configuration files. In that case, if someone hijacks another server on the same system, they can interfere with your server; or, conversely, if someone hijacks your server, they can use it to interfere with other servers on the system. You can use Open Directory services to obtain a locally unique UID. Note that UIDs from 0 through 500 are reserved for use by the system.
If any of your code does run with an effective user ID different from that of the logged on user, how did it obtain root (or other) privilege?
Each of the following methods can be used to obtain root privilege. Some methods allow you to launch a process with root privilege; others enable you to elevate the privilege of a running process. In each case, you must understand the limitations and security vulnerabilities of the method.
Important: If at all possible, you should avoid running code with elevated privileges altogether. If you absolutely must run with elevated privileges, you should write a helper tool that performs only the privileged operation and then quits; do not run your main application with elevated privileges. Under no circumstances should you run an application that has a graphical user interface (GUI) with elevated privileges. The preferred method to launch a privileged helper tool is with the launchd daemon. For more information on these issues, see “Elevating Privileges Safely.”
launchd
Starting with Mac OS X v10.4, the launchd daemon is used to launch daemons and other programs that must be started automatically, without user intervention. (For systems running versions of the OS earlier than Mac OS X v10.4, you can use the standard BSD routine mach_init for this purpose.) The launchd daemon launches daemons on a per-user basis and can restart daemons after they quit if they are needed. You provide a configuration file that tells launchd the level of privilege with which to launch your routine. In contrast, mach_init runs all such processes with root privileges in a single session; these processes serve all users. In either case, you need to be sure that you do not request higher privilege than you actually need, and you should drop privilege or quit execution as soon as possible. The launchd.plist file includes key-value pairs that you can use to limit the system services—such as memory, number of files, and CPU time—that the daemon can use. You can use the ipfw firewall program to control packets and traffic flow for internet daemons. For more information on launchd, see “launchd,” the manual pages for launchd(8), launchctl(1), and launchd.plist(5), and Getting Started with launchd in http://developer.apple.com/macosx/. For more information on ipfw, see the ipfw(8) manual page. For more information about mach_init, see The Boot Process in System Startup Programming Topics and Root and Login Sessions in Multiple User Environments.
setuid
If an executable is owned by root and its setuid bit is set, the program runs as root regardless of which process launches it. There are two approaches to using setuid to obtain root privileges while minimizing risk: you can launch your program with root privileges, perform whatever privileged operations are necessary immediately, and then permanently drop privileges; or, your program can launch a setuid helper tool that runs only as long as necessary and then quits. If the operation you are performing needs a group privilege or user privilege other than root, you should launch your program or helper tool with that privilege only, not with root privilege, to minimize the damage if the program is hijacked.
It's important to note that, if you are running with both a group ID (GID) and user ID (UID) that are different from those of the user, you have to drop the GID before dropping the UID. Once you've changed the UID, you can no longer change the GID. As with every security-related operation, you must check the return values of your calls to setuid, setgid, and related routines to make sure they succeeded.
For more information about the use of the setuid bit and related routines, see “Elevating Privileges Safely.”
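For example, here is a minimal sketch in C of permanently dropping privileges in the order just described, with every return value checked; the target user and group IDs are illustrative and would normally come from your program's configuration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Permanently drop elevated privileges once the privileged work is done.
   target_uid and target_gid are hypothetical values for illustration. */
static void drop_privileges(uid_t target_uid, gid_t target_gid)
{
    /* Drop the group first; after the UID changes, setgid would fail. */
    if (setgid(target_gid) != 0) {
        perror("setgid");
        exit(EXIT_FAILURE);
    }
    /* Also drop any supplementary groups (setgroups) if the process has them. */
    if (setuid(target_uid) != 0) {
        perror("setuid");
        exit(EXIT_FAILURE);
    }
    /* Verify that root privileges cannot be regained. */
    if (setuid(0) != -1) {
        fprintf(stderr, "failed to drop privileges completely\n");
        exit(EXIT_FAILURE);
    }
}
```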
SystemStarter
When you put an executable in the /Library/StartupItems directory, it is started by the SystemStarter program at boot time. Because SystemStarter runs with root privileges, you can start your program with any level of privilege you wish. Be sure to use the lowest privilege level that you can use to accomplish your task, and to drop privilege as soon as possible. For Mac OS X v10.4 and later, the use of startup items is deprecated; use the launchd daemon instead. For more information on startup items and startup item privileges, see Creating a Startup Item in Multiple User Environments.
AuthorizationExecuteWithPrivileges
The Authorization Services API provides the AuthorizationExecuteWithPrivileges function, which sets the effective user ID (EUID) of your process to root. Although this function can execute any process temporarily with root privileges, it is not recommended except for installers that have to be able to run from CDs and self-repairing setuid tools. See Authorization Services Programming Guide for more information.
xinetd
The xinetd daemon is launched with root privileges at system startup and subsequently launches internet services daemons when they are needed. You use the xinetd.conf configuration file to specify the UID and GID of each daemon started and the port to be used by each service. Starting with Mac OS X v10.4, you should use launchd to perform the services formerly provided by xinetd. See Getting Started with launchd in http://developer.apple.com/macosx/ for information about converting from xinetd to launchd. See the manual pages for xinetd(8) and xinetd.conf(5) for more information about xinetd.
The xinetd configuration file includes attributes that you can use to limit the number of connections, number of servers, and so forth in order to prevent a denial of service attack that uses up all system resources. Unfortunately, this feature makes it relatively easy to launch a denial of service attack against xinetd that shuts off internet services. You need to be aware of this and be prepared to fail gracefully if the system is attacked in this way. If, for example, the resource limits in the xinetd configuration file are reached, you can suspend operations without shutting down. That way, when the attack ends, your application—and the system—will still be up and running. The xinetd configuration file includes options—such as the cps attribute—that you can use to "throttle down" connections under a denial of service attack.
Other
If you are using some other method to obtain elevated privilege for your process, you should switch to one of the methods described here and follow the cautions described in this article and in “Elevating Privileges Safely.”
Does your code execute using sudo?
If authorized to do so in the sudoers file, a user can use sudo to execute a command as root. The sudo command is intended for occasional administrative use by a user sitting at the computer and typing into the Terminal application. Using it in scripts or calling it from code is not secure. For one thing, after executing the sudo command—which requires authenticating by entering a password—there is a five-minute period (by default) during which the sudo command can be executed without further authentication. It's possible for another process to take advantage of this situation to execute a command as root. A more common problem is to fail to realize that there is no encryption or protection of the command being executed. Because sudo is used to execute privileged commands, the command arguments often include user names, passwords, and other information that should be kept secret. A command executed in this way by a script or other code thus exposes confidential data to possible interception and compromise.
Have you separated out the piece of the program that needs access to the privileged facility into a separate program or module?
If your entire program runs with elevated privileges, then not only can any security vulnerabilities in your code be attacked, but any vulnerabilities in all the libraries you link to can be exploited. Separating out the code that needs privileges into a separate process helps limit your program's vulnerability to attack. See Authorization Services Programming Guide for more information.
Approximately how many lines of code need to run with elevated privileges?
If the answer is either "all" or a number that is difficult to compute, then it will be very difficult to perform a security review of your software. If you can't determine how to factor your application to separate out the code that needs privileges, you are strongly encouraged to seek assistance with your project immediately. See the Apple Developer Connection (ADC) page on security at http://developer.apple.com/security/ for resources available from Apple. If you are an ADC member, you are encouraged to ask for help from Apple engineers with factoring your code and doing a security audit. If you are not an ADC member, see the ADC membership page at http://developer.apple.com/membership/.
Does the tool or application that runs as a different EUID/EGID from that of the console user create new files or rewrite existing files?
If you write to a directory to which someone else has access—either a globally writable directory such as /tmp or a directory owned by the user—then there is a possibility that someone will modify or corrupt your files. If your code depends on the contents of a file, you should create a safe directory to which only you have access before creating or writing to the file, if at all possible. If you must use a vulnerable directory, you must make sure that all files are assigned the correct file permissions with the correct owner and group, and you must check for hard and symbolic links before writing to files or directories. If available, use functions that refer to a file by file descriptor rather than pathname.
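As an illustration, the following C sketch creates a file in a shared directory without following symbolic links, refuses to reuse a pre-existing file, and then checks the result by file descriptor; the checks shown are a starting point, not a complete defense:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a file safely in a directory that others can write to, then
   operate on it only through the returned descriptor. */
static int create_private_file(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
    if (fd < 0)
        return -1;                  /* file already exists or is a symlink */

    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode) || st.st_nlink != 1) {
        close(fd);                  /* unexpected file type or a hard link */
        return -1;
    }
    return fd;                      /* all further I/O uses this descriptor */
}
```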
For more information about vulnerabilities associated with writing files, and how to minimize the risks, see “Time Of Check–Time Of Use.”
Does the tool or application that runs as a different EUID/EGID from that of the console user call popen or system?
If you are using routines such as popen or system to send commands to the shell, and you are using input from the user or received over a network to construct the command, you should be aware that these routines do not validate their input. Consequently, a malicious user can pass shell metacharacters—such as an escape sequence or other special characters—in command line arguments. These metacharacters might cause the following text to be interpreted as a new command and executed. See Viega and McGraw, Building Secure Software, Addison Wesley, 2002, and Wheeler, Secure Programming for Linux and Unix HOWTO, available at http://www.dwheeler.com/secure-programs/, for more information on problems with these and similar routines and for secure ways to execute shell commands.
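One common way to avoid the problem is to skip the shell entirely and pass the untrusted value as a single argument-vector element to a known executable; the program path, arguments, and environment below are purely illustrative:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run /usr/bin/grep on a user-supplied pattern without invoking a shell,
   so shell metacharacters in the input have no special meaning. */
static int run_grep(const char *user_pattern, const char *file)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char *const argv[] = { "grep", "--", (char *)user_pattern, (char *)file, NULL };
        char *const envp[] = { "PATH=/usr/bin:/bin", NULL };  /* explicit, minimal environment */
        execve("/usr/bin/grep", argv, envp);
        _exit(127);                 /* exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```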
Does the tool or application that runs as a different EUID/EGID from that of the console user use configuration files, preferences, or environment variables to affect its operation?
In many cases the user can control environment variables and configuration files, as well as preferences. If you are executing a program for the user with elevated privileges, you are giving the user the opportunity to perform operations that they cannot ordinarily do. Therefore, you should avoid using elevated privileges where the user has access to files or environment variables. If you must execute such a program with elevated privileges, then it is imperative that you validate all input, whether it comes directly from the user or through environment variables, configuration files, preferences files, or other files. In the case of environment variables, the effect might not be immediate or obvious; however, the user might be able to modify the behavior of your program or of other programs or system calls. Make sure that any file paths do not contain special elements, such as ../ or ~, which an attacker can use to redirect file operations into a directory under the attacker’s control. In addition, in order to avoid unintended privilege escalation, you must specifically set the privileges, environment variables, and resources available to the running process, rather than assuming that the process has inherited the correct environment.
Does the tool or application that runs as a different EUID/EGID from that of the console user have a graphical user interface (GUI)?
You should never run a GUI application with elevated privileges. Any GUI application links in many libraries over which you have no control and which, due to their size and complexity, are very likely to contain security vulnerabilities. In this case, your application runs in an environment set by the GUI, not by your code. Your code and your user's data can then be compromised by the exploitation of any vulnerabilities in the libraries or environment of the graphical interface.
Some security vulnerabilities are related to reading or writing files. This checklist is intended to help you find any such vulnerabilities in your code.
Does your code write to or create files or directories in a publicly writable place?
If you write temporary files to a publicly writable place (for example, /tmp, /var/tmp, /Library/Caches, or another specific place with this characteristic), an attacker may be able to modify your files before the next time you read them. If your program runs with elevated privileges and requires the contents of a file to operate correctly, you should create a secure directory and write the file to that directory. On Mac OS X, any user can mount a file system. Therefore, any user can create a directory called /tmp, for example, and mount their own volume into that directory. (See the manual page for mount(2) for details.) That volume then appears to be owned by root. For that reason, if you absolutely need to write to a publicly writable directory, you must check to make sure the directory you are writing to is where you think it is.
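For instance, before writing into a supposedly well-known world-writable directory, you might confirm with lstat that the path is a real directory owned by root with the sticky bit set; this is a sketch of the kind of check intended, not an exhaustive one:

```c
#include <sys/stat.h>

/* Return 1 only if path is a root-owned, sticky, non-symlinked directory
   (the properties expected of a system directory such as /tmp). */
static int shared_dir_looks_sane(const char *path)
{
    struct stat st;
    if (lstat(path, &st) != 0)
        return 0;
    if (!S_ISDIR(st.st_mode))        /* a symlink or regular file fails here */
        return 0;
    if (st.st_uid != 0)              /* must be owned by root */
        return 0;
    if ((st.st_mode & S_ISVTX) == 0) /* sticky bit must be set */
        return 0;
    return 1;
}
```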
For more information about vulnerabilities associated with writing files, and how to minimize the risks, see “Time Of Check–Time Of Use.”
Does your code load one or more kernel extensions on demand?
A kernel extension is the ultimate privileged code—it has access to levels of the operating system that cannot be touched by ordinary code, even running as root. You must be extremely careful about why, how, and when you load a kernel extension, to guard against being fooled into loading the wrong one. It's possible to load a root kit if you're not sufficiently careful. (A root kit is malicious code that, by running in the kernel, can not only take control of the system but can cover up all evidence of its own existence.) In general, it's best not to write kernel extensions, and it's seldom necessary (see Coding in the Kernel). However, if you must use a kernel extension, use the facilities built into Mac OS X to load your extension and be sure to load the extension from a separate privileged process. See “Elevating Privileges Safely” to learn more about the safe use of root access. See Kernel Programming Guide for more information on writing and loading kernel extensions. For help on writing device drivers, see I/O Kit Fundamentals.
This checklist is intended to help you find vulnerabilities related to sending and receiving information over a network. If your project does not contain any tool or application that sends or receives information over a network, skip to “Audit Logs” (for servers) or “Integer and Buffer Overflows” for all other products.
Does your program need to create or use a privileged network port?
Port numbers 0 through 1023 are reserved for use by certain services specified by the Internet Assigned Numbers Authority (IANA; see http://www.iana.org/). On many systems including Mac OS X, only processes running as root can bind to these ports. It is not safe, however, to assume that any communications coming over these privileged ports can be trusted. It's possible that an attacker has obtained root access and used it to connect to a privileged port. Furthermore, on some systems, root access is not needed to connect to these ports.
You should also be aware that if you use the SO_REUSEADDR socket option with UDP, it is possible for a local attacker to hijack your port.
Therefore, you should always use port numbers assigned by the IANA, you should always check return codes to make sure you have connected successfully, and you should check that you are connected to the correct port. Also, as always, never trust input data, even if it's coming over a privileged port. Whether data is being read from a file, entered by a user, or received over a network, you must validate all input.
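A hedged sketch of what "check that you are connected to the correct port" can look like on the server side: bind, check the return value, and confirm the bound port with getsockname (the port number here is illustrative):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a TCP socket to the requested port and verify the result. */
static int bind_and_verify(uint16_t port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in bound;
    socklen_t len = sizeof(bound);
    if (getsockname(s, (struct sockaddr *)&bound, &len) != 0 ||
        ntohs(bound.sin_port) != port) {
        close(s);                   /* not on the port we expected */
        return -1;
    }
    return s;
}
```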
What data transport protocol does your program use (TCP, UDP, other)?
Lower-level protocols, such as UDP, are easier to spoof than higher-level protocols, such as TCP. If you’re using TCP, you still need to worry about authenticating both ends of the connection, but there are encryption layers you can add to increase security.
Is authentication required for the clients of the services provided?
If you're providing a free and nonconfidential service, and do not process user input, then authentication is not necessary. On the other hand, if any secret information is being exchanged, the user is allowed to enter data that your program processes, or there is any reason to restrict user access, then you should authenticate every user. Remember that, even if there is no confidential data to be compromised, there are other ways a server can be attacked. For example, a malicious user can still mount a denial of service attack against your server. See the question about handling high volumes of connections, later in this section.
If authentication is required, what protocol is used?
Mac OS X provides a variety of secure network APIs and authorization services, all of which perform authentication. You should always use these services rather than creating your own authentication mechanism. For one thing, authentication is very difficult to do correctly, and dangerous to get wrong. If an attacker breaks your authentication scheme, you could compromise secrets or give the attacker an entry to your system. The only approved authorization mechanism for networked applications is Kerberos; see “User-Server Authentication.” For more information on secure networking, see Secure Transport Reference and CFNetwork Programming Guide.
If authentication is not required, are clients of the provided network service limited to accessing a specific set of functions, so they cannot damage data on the system or view private data?
If you answered no, you should restructure your code to limit the access of clients. Note that UI limitations are not sufficient protection; you should run with restricted privileges and protect against privilege escalation as discussed in “Elevating Privileges Safely.”
If your client application cannot make a connection with the server, does it fail gracefully?
If a server is unavailable, either because of some problem with the network or because the server is under a denial of service attack, your client application should limit the frequency and number of retries and should give the user the opportunity to cancel the operation. Poorly designed clients that retry connections too frequently and too insistently, or that hang while waiting for a connection, can inadvertently contribute to—or cause their own—denial of service.
Is your server designed to handle high volumes of connections?
Your server should be capable of surviving a denial of service attack without crashing or losing data. In addition, you should limit the total amount of processor time, memory, and disk space your server can use, so that a denial of service attack on your server does not result in denial of service to every process on the system. You can use the ipfw firewall program to control packets and traffic flow for internet daemons. For more information on ipfw, see the ipfw(8) manual page. See Wheeler, Secure Programming for Linux and Unix HOWTO, available at http://www.dwheeler.com/secure-programs/, for more advice on dealing with denial of service attacks.
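One way to bound the damage on the server side is to cap your own process's resource consumption with setrlimit; the specific limits below are illustrative and would be tuned to your service:

```c
#include <sys/resource.h>

/* Limit this server process's resource use so that a flood of requests
   cannot exhaust the whole system. */
static int limit_server_resources(void)
{
    struct rlimit lim;

    lim.rlim_cur = lim.rlim_max = 256;                 /* open file descriptors */
    if (setrlimit(RLIMIT_NOFILE, &lim) != 0)
        return -1;

    lim.rlim_cur = lim.rlim_max = 600;                 /* CPU seconds */
    if (setrlimit(RLIMIT_CPU, &lim) != 0)
        return -1;

    lim.rlim_cur = lim.rlim_max = 64UL * 1024 * 1024;  /* data segment, bytes */
    if (setrlimit(RLIMIT_DATA, &lim) != 0)
        return -1;

    return 0;
}
```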
It's very important to audit attempts to connect to a server or to gain authorization to use a secure program. If someone is attempting to attack your program, you should know what they are doing and how they are going about it. Furthermore, if your program is attacked successfully, your audit log is the only way you can determine what happened and how extensive the security breach was. This checklist is intended to help you make sure you have an adequate logging mechanism in place.
Important: Don’t log confidential data, such as passwords, which could then be read later by a malicious user.
Does your server or secure program audit attempts to connect?
If not, you should add auditing to your project.
Note that an attacker can attempt to use the audit log itself to create a denial of service attack; therefore, you should limit the rate of entering audit messages and the total size of the log file. You also need to validate the input to the log itself, so that an attacker can't enter special characters such as the newline character that you might misinterpret when reading the log.
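For example, here is a small C sketch that strips control characters (including newlines) from untrusted text before it reaches the log; rate limiting and size limits would be layered on top of this:

```c
#include <ctype.h>
#include <string.h>
#include <syslog.h>

/* Copy untrusted text into the audit log with control characters replaced,
   so an attacker cannot forge additional log lines or corrupt the file. */
static void log_connection_attempt(const char *untrusted_name)
{
    char clean[128];
    size_t i;
    for (i = 0; i + 1 < sizeof(clean) && untrusted_name[i] != '\0'; i++)
        clean[i] = isprint((unsigned char)untrusted_name[i]) ? untrusted_name[i] : '?';
    clean[i] = '\0';
    syslog(LOG_AUTH | LOG_NOTICE, "connection attempt from \"%s\"", clean);
}
```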
See Wheeler, Secure Programming for Linux and Unix HOWTO for some advice on audit logs.
Does your project make use of the libbsm auditing library?
The libbsm auditing library is part of the TrustedBSD project, which in turn is a set of trusted extensions to the FreeBSD operating system. Apple has contributed to this project and has incorporated the audit library into the Darwin kernel of the Mac OS X operating system. (This library is not available in iPhone OS.) You can use the libbsm auditing library to implement auditing of your program for login and authorization attempts. This library gives you a lot of control over which events are audited and how to handle denial of service attacks. The libbsm project is located at http://www.opensource.apple.com/darwinsource/Current/bsm/. For documentation of the BSM service, see the "Auditing Topics" chapter in Sun Microsystems' System Administration Guide: Security Services, located at http://docs.sun.com/app/docs/doc/806-4078/6jd6cjs67?a=view.
If you answered no to the previous question, indicate how your project creates auditing information:
syslog
Prior to the availability of the libbsm auditing library, the standard C library function syslog was most commonly used to write data to a log file. If you are using syslog, consider switching to libbsm, which gives you more options for dealing with denial of service attacks. If you want to stay with syslog, be sure your auditing code is resistant to denial of service attacks, as discussed in the first question of this checklist.
Custom log file
If you have implemented your own custom logging service, consider switching to libbsm to avoid inadvertently creating a security vulnerability. In addition, if you use libbsm, your code will be more easily maintainable and will benefit from future enhancements to the libbsm code.
If you stick with your own custom logging service, you must make certain that it is resistant to denial of service attacks (see the first question of this checklist) and that an attacker can't tamper with the contents of the log file. Because your log file must either be encrypted or protected with access controls to prevent tampering, you must also provide tools for reading and processing your log file. Be sure your logging code is audited for security vulnerabilities.
If any private or secret information is passed between a server and a client, both ends of the connection should be authenticated. This checklist is intended to help you determine whether your server's authentication mechanism is safe and adequate. If you are not writing a server, skip to “Integer and Buffer Overflows.”
Does your server store or validate user passwords?
It's a very bad idea to store or validate passwords yourself, as it's very hard to do so securely, and Mac OS X provides secure facilities for just that purpose. On a user computer, you can use the keychain to store passwords and Authorization Services to validate them (see Keychain Services Programming Guide and Authorization Services Programming Guide). On Mac OS X Server, you can use Open Directory (see Open Directory Programming Guide). On an iPhone OS device, you can use the keychain. iPhone OS devices authenticate the application that is attempting to obtain a keychain item rather than asking the user for a password. The best way to keep passwords secure in iPhone backup data is to store the passwords in the keychain, because they are encrypted in the keychain and remain encrypted in the backup.
Do users ever have to provide a password over a network connection in the clear?
You should never assume that an unencrypted network connection is secure. Information on an unencrypted network can be intercepted by any individual or organization between the client and the server. Even an intranet, which does not go outside of your company, is not secure. A large percentage of cyber crime is committed by company insiders, who can be assumed to have access to a network inside a firewall (see “The Security Landscape”). Mac OS X provides APIs for secure network connections; see Secure Transport Reference and CFNetwork Programming Guide for details.
Is an existing system security component used to create, modify, delete, and validate user passwords?
You should never manipulate or validate passwords yourself. You can use Authorization Services to create, modify, delete, and validate user passwords on a client computer and Open Directory to store passwords and authenticate users on a network.
If the client is providing authentication information, has the server already provided its own authentication credential for verification (as an anti-spoofing measure)?
Although server authentication is optional in the SSL/TLS protocols, you should always do it. Otherwise, an attacker might spoof your server, injuring your users and damaging your reputation in the process.
Does your server enforce any of the following policies?
Password strength—you can evaluate the strength of a proposed password
Password expiration
Disabled accounts
Expired accounts
Changing password—you can require that the client application support the ability to change passwords
Lost password (such as a system that triggers the user's memory or a series of questions designed to authenticate the user without a password)—make sure your authentication method is not so insecure that an attacker doesn't even bother to try a password, and be careful not to leak information, such as the correct length of the password, the email address to which the recovered password is sent, or whether the user ID is valid
Limitations on characters used in a password
Limitations on password length
Minimum password length settable by the system administrator
The more of these policies you enforce, the more secure your server will be. Rather than creating your own password database—which is difficult to do securely—you should use the Apple Password Server. See Open Directory Programming Guide for more information about the Password Server, Directory Service Framework Reference for a list of Directory Services functions, and the manual pages for pwpolicy(8), passwd(1), passwd(5), and getpwent(3) at http://developer.apple.com/documentation/Darwin/Reference/ManPages/index.html for tools to access the password database and set password policies.
Does your product ever reissue to a third-party application a password given to it by a client?
In order to reissue a password, you first have to cache the password, which is bad security practice. Furthermore, when you reissue a password, you can put that password into an inappropriate security context. For example, suppose your program is a web server, and you use SSL to communicate with clients. If you take a client's password and use it to log into a database server to do something on the client's behalf, there's no way to guarantee that the database server keeps the password secure and does not pass it on to another server in the clear. Therefore, even though the password was in a secure context when it was being sent to the web server over SSL, when the web server reissues it, it's in an insecure context. If you want to spare your client the trouble of logging in separately to each server, use some kind of forwardable authentication, such as Kerberos. For more information on Apple's implementation of Kerberos, see http://developer.apple.com/darwin/projects/kerberos/.
Does your product support Kerberos?
Kerberos is the only authorization service available over a network for Mac OS X servers, and it offers single-sign-on capabilities. If you are writing a server to run on Mac OS X, you should support Kerberos. When you do:
Be sure you're using the latest version (v5).
Use a service-specific principal, not a host principal. Each service that uses Kerberos should have its own principal so that compromise of one key does not compromise more than one service. If you use a host principal, anyone who has your host key can spoof login by anybody on the system.
Does your product support authentication methods other than Kerberos?
The only alternative to Kerberos for authentication is SSL/TLS, which does not support authorization.
Does your product support unauthenticated (guest) access?
If you allow guest access, be sure that guests are restricted in what they can do, and that your user interface makes clear to the system administrator what guests can do. Guest access should be off by default. It's best if the administrator can disable guest access.
Does your product use Open Directory for all authentication?
Open Directory is the directory server provided by Mac OS X for secure storage of passwords and user authentication. It is important that you use this service and not try to implement your own, as secure directory servers are difficult to implement and an entire directory's passwords can be compromised if it's done wrong. See Open Directory Programming Guide for more information.
Does your product scrub (zero) user passwords from memory after validation?
Passwords must be kept in memory for the minimum amount of time possible and should be written over, not just released, when no longer needed. It is possible to read data out of memory even if the operating system has no pointers to it.
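A minimal sketch of scrubbing a password buffer follows; a plain memset can be optimized away by some compilers, so this version writes through a volatile pointer (memset_s, where available, serves the same purpose):

```c
#include <stddef.h>

/* Overwrite sensitive data so it does not linger in memory. */
static void scrub_memory(char *buf, size_t len)
{
    volatile char *p = buf;
    while (len--)
        *p++ = 0;
}
```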
As discussed in “Avoiding Buffer Overflows,” buffer overflows are a major source of security vulnerabilities. This checklist is intended to help you identify and correct buffer overflows in your program.
Do you use signed or unsigned values when calculating memory object offsets or sizes?
Signed values make it easier for an attacker to cause a buffer overflow, creating a security vulnerability, especially if your application accepts signed values from user input or other outside sources. Be aware that data structures referenced in parameters might contain signed values. See “Integer Overflow” for details.
Do you check for integer overflows (or underflows, in the case of signed integers) when calculating memory object offsets or sizes?
You should use unsigned integers and must always check for integer overflows when calculating memory offsets or sizes. Integer overflows can often be exploited to corrupt memory and even to execute an attacker's own code. See “Integer Overflow.”
Does your program ever allocate space using code similar to size = value + N?
If yes, you must check to make sure that the total space allocated does not exceed the space available and that value + N does not exceed the maximum size for an integer. You must also check for underflows if you are using signed integers. See “Calculating Buffer Sizes.”
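For example, the checks can be written directly against the size calculation before any allocation takes place (N and the helper names are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate value + N bytes only if the addition cannot wrap around. */
static void *alloc_with_header(size_t value)
{
    const size_t N = 16;                       /* hypothetical header size */
    if (value > SIZE_MAX - N)
        return NULL;                           /* value + N would overflow */
    return malloc(value + N);
}

/* Allocate count * size bytes only if the multiplication cannot wrap around. */
static void *alloc_array(size_t count, size_t size)
{
    if (size != 0 && count > SIZE_MAX / size)
        return NULL;                           /* count * size would overflow */
    return malloc(count * size);
}
```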
Do you use any of the following string-handling functions: strcat, strcpy, strncat, strncpy, sprintf, vsprintf, gets?
These functions have no built-in checks for string length, and can lead to buffer overflows. For alternatives, see “String Handling.”
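As an illustration of bounded alternatives (strlcpy and strlcat are available in the BSD C library on Mac OS X; snprintf is standard C):

```c
#include <stdio.h>
#include <string.h>

/* Build a string without the unbounded copies of strcpy/strcat/sprintf. */
static void build_greeting(char *out, size_t out_size, const char *name)
{
    strlcpy(out, "Hello, ", out_size);     /* always NUL-terminates */
    strlcat(out, name, out_size);          /* truncates rather than overflowing */
}

/* Portable equivalent using snprintf. */
static void build_greeting_portable(char *out, size_t out_size, const char *name)
{
    snprintf(out, out_size, "Hello, %s", name);
}
```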
This checklist is intended to help you determine whether your program has any vulnerabilities related to use of encryption, cryptographic algorithms, or random number generation.
Does your application need to use good random numbers?
Do not attempt to generate your own random numbers. You can obtain high-quality random numbers from the Randomization Services programming interface in iPhone OS or from /dev/random in Mac OS X (see the manual page for random(4)). For a C function that returns random numbers, see the header file random.h in the Apple CSP module, which is part of Apple's implementation of the CDSA framework (available at http://developer.apple.com/darwin/projects/security/). Note that rand(3) does not return good random numbers and should not be used.
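For example, on Mac OS X you can read the random bytes you need directly from /dev/random:

```c
#include <fcntl.h>
#include <unistd.h>

/* Fill buf with len cryptographic-quality random bytes from /dev/random.
   Returns 0 on success, -1 on failure. */
static int get_random_bytes(void *buf, size_t len)
{
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0)
        return -1;

    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, (char *)buf + got, len - got);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        got += (size_t)n;
    }
    close(fd);
    return 0;
}
```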
Does your program use TLS/SSL (by using OpenSSL or https through CFNetwork) or Secure Transport?
If you are not using TLS/SSL, what are you using?
Is this method a standard?
If yes, what is the standards body?
You should use an accepted standard protocol for secure networking. The only way to ensure that your messages are as secure as possible is to use the most recent version of a standard secure networking protocol, such as TLS. A standard has had peer review and so is more likely to be secure. For secure networking protocols available on Mac OS X, see http://developer.apple.com/referencelibrary/Networking/idxSecurity-title.html.
Does your program use any other cryptographic algorithms?
If yes, be sure you use existing optimized functions. It is very difficult to implement a secure cryptographic algorithm, and good, secure cryptographic functions are readily available. In iPhone OS, use the cryptographic functions in Certificate, Key, and Trust Services (Certificate, Key, and Trust Services Reference). For Mac OS X, see Apple's implementation of the CDSA framework (available at http://developer.apple.com/darwin/projects/security/) for functions you can use to add cryptographic capabilities to your program.
If you want to use cryptography to ensure that a message or document has not been corrupted rather than to keep it secret, you can use digital signatures instead. In iPhone OS, use the digital signature functions in Certificate, Key, and Trust Services. For Mac OS X, see the CDSA framework for functions to create and evaluate digital signatures.
If you want to encrypt small amounts of data such as passwords and private keys, you can use Keychain Services. Note that the keychain is designed to protect a user's secrets from others. Because the user has access to all secrets in the keychain, it is not useful for protecting a vendor's secrets from the user. Also, because in Mac OS X the user can create multiple keychains and can specify which one is the default keychain, you should not make any assumptions about which keychain to write to or to search. The default keychain may or may not be the login keychain. Always use the default keychain unless you have a specific reason to do otherwise. See Keychain Services Programming Guide for more information.
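For instance, on Mac OS X a small secret can be added to the user's default keychain through Keychain Services; the service and account names below are illustrative:

```c
#include <Security/Security.h>
#include <string.h>

/* Store a password in the default keychain (a sketch; real code would also
   handle errSecDuplicateItem and update the existing item). */
static OSStatus store_secret(const char *password)
{
    const char *service = "com.example.myapp";   /* hypothetical service name */
    const char *account = "database";            /* hypothetical account name */

    return SecKeychainAddGenericPassword(NULL,   /* default keychain */
                                         (UInt32)strlen(service), service,
                                         (UInt32)strlen(account), account,
                                         (UInt32)strlen(password), password,
                                         NULL);  /* item reference not needed */
}
```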
In iPhone OS, secrets in the keychain are automatically encrypted and decrypted by the system without requesting a password from the user. In an iPhone backup, the keychain data is kept in its encrypted state, and cannot be decrypted on the backup computer.
Many security vulnerabilities are caused by problems with how programs are installed or code modules are loaded. This checklist is intended to help you find any such problems in your project.
Does your program install components in /Library/StartupItems or /System/Library/Extensions?
Code installed into these directories runs with root permissions. Therefore, it is very important that such programs be carefully audited for security vulnerabilities (as discussed in this checklist) and that they have their permissions set correctly. For information on proper permissions for startup items, see Creating a Startup Item. For information on permissions for extensions, see Kernel Extension Programming Topics. Starting with Mac OS X v10.4, you should not use startup items; use the launchd daemon instead. See Getting Started with launchd in http://developer.apple.com/macosx/.
Does your application use a custom install script?
If yes, be sure your script follows the guidelines in this checklist for secure scripts. For example, don't write temporary files to globally writable directories, don't execute with higher privileges than necessary, and don't execute with elevated privileges any longer than necessary. In general, your script should execute with the same privileges the user has normally, and should do its work in the user's directory on behalf of the user. Make sure your installed program does not have permissions that are more lax than it needs. (For example, don’t give everyone read/write permission if only the owner needs such permission.) Set your installer's file code creation mask (umask) to restrict access to the files it creates (see “Secure File Operations”). Check return codes, and if anything is wrong, log the problem and report the problem to the user through the user interface. For advice on writing installation code that needs to perform privileged operations, see Authorization Services Programming Guide.
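As a sketch of the file-permission advice above, an install helper can set a restrictive umask before creating files and check every call; the path and final mode are illustrative:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Create an installed file with deliberately chosen permissions,
   reporting any failure instead of silently continuing. */
static int install_config_file(const char *path)
{
    umask(077);                              /* new files: owner access only */

    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("fopen");
        return -1;
    }
    /* ... write the file's contents here ... */
    if (fclose(f) != 0) {
        perror("fclose");
        return -1;
    }
    if (chmod(path, 0644) != 0) {            /* explicit final permissions */
        perror("chmod");
        return -1;
    }
    return 0;
}
```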
Does your program load any plug-ins or does it link against a library that does?
If yes, the places from which the plug-ins can be loaded should be restricted to secure directories. If your application loads plug-ins from directories that are not restricted, then an attacker might be able to trick the user into downloading malicious code, which your application might then load and execute. Be aware that the dynamic link editor (dyld) might link in plug-ins, depending on the environment in which your code is running. If your code uses loadable bundles (CFBundle or NSBundle), then it is dynamically loading code and potentially could load bundles written by a malicious hacker. See Code Loading Programming Topics for Cocoa for more information about dynamically loaded code.
If your program includes or uses any command-line tools, you have to look for security vulnerabilities specific to the use of such tools. This checklist is intended to help you find and correct such vulnerabilities.
Does your program include command-line tools?
If so, you must keep in mind that your process environment is visible to other users (see the ps(1) manual page). You must be careful not to pass sensitive information in an insecure manner. Also, remember that anyone can execute a tool—it is not only executable through your program. Because all command-line arguments, including the program name (argv[0]), are under the control of the user, you should not use the command line to execute any program without validating every parameter, including the name.
Do the command-line tools require authentication information?
If yes, how is this done?
Command-line argument
It is not safe to enter a password on the command line, as it is visible to others and a brute force attack can be automated through a script.
Pipe or standard in
A password is safe while being passed through a pipe; however, you must be careful that the process sending the password obtains and stores it in a safe manner.
Environment variable
Environment variables can be read by other processes and are not secure.
Shared memory
Named and globally-shared memory segments may be read by other processes.
Temporary file
Temporary files are safe only if kept in a directory to which only your program has access. See “Data, Configuration, and Temporary Files,” earlier in this article, for more information on temporary files.
This checklist is intended to help you avoid denial of service attacks related to the use of hash functions. Use this section for kernel and IOKit code only.
Does your code use a hash function to improve performance?
Hash tables are often used to improve search performance. However, when there are hash collisions (where two items in the list have the same hash result), a much slower search must be used to resolve the conflict. If it is possible for a user to deliberately generate different requests that have the same hash result, by making many such requests an attacker can mount a denial of service attack.
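One common mitigation is to mix a per-table random seed (taken from a good random source when the table is created) into the hash so that an attacker cannot precompute colliding keys; this is a generic sketch, not a specific kernel API:

```c
#include <stddef.h>
#include <stdint.h>

/* FNV-1a-style hash salted with a per-table random seed. */
static uint32_t seeded_hash(const char *key, size_t len, uint32_t seed)
{
    uint32_t h = 2166136261u ^ seed;
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)key[i];
        h *= 16777619u;
    }
    return h;
}
```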
This checklist is intended to help you determine whether your code has the correct level of privileges. This check is especially important for kernel-level code, because kernel-level code has access to levels of the operating system that cannot be touched by ordinary code, even running as root.
Important: Kernel-level code poses special security risks. Do not write kernel-level code unless absolutely necessary. See Coding in the Kernel for more information on this subject.
Does your code ever check the user ID (EUID or EGID) of the process?
To make sure that an attacker hasn't somehow substituted their own kernel extension for yours, you should check to make sure that the module is executing with the correct effective user ID (EUID) and effective group ID (EGID). Also check the process's current set of supplementary groups to make sure it does not include any groups that shouldn't have access.
Does your code ever check the existence of any specific Mach ports?
Kernel-level code can work directly with the Mach component. A Mach port is an endpoint of a communication channel between a client who requests a service and a server that provides the service. Mach ports are unidirectional; a reply to a service request must use a second port. If you are using Mach ports for communication between processes, you should check to make sure you are contacting the correct process. Because Mach bootstrap ports can be inherited, it is important for servers and clients to authenticate each other. You can use audit trailers for this purpose.
If you answered yes to either 1 or 2, do you create audit records for these checks?
You should create an audit record for each security-related check your program performs. See “Audit Logs,” earlier in this article, for more information on audit records.
This checklist is intended to help you determine whether your kernel-level code has any vulnerabilities related to memory use. You can skip this checklist if you are not writing kernel-level code.
Important: If your code copies uninitialized memory into a buffer that you return to a user, then you could be leaking privileged information. Coding in the kernel poses special security risks and is seldom necessary. See Coding in the Kernel for alternatives to writing kernel-level code.
Does your code ever copy data to or from user space?
If yes, you should check the bounds of the data with unsigned arithmetic—just as you check all bounds (see “Integer and Buffer Overflows,” earlier in this article)—to avoid buffer overflows.
You should also check for and handle misaligned buffers.
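The bounds check itself should be written so that it cannot overflow; for example, with unsigned arithmetic (a user-space-style sketch of the idea):

```c
#include <stdbool.h>
#include <stddef.h>

/* Validate an untrusted (offset, length) pair against a buffer size.
   Because the arithmetic is unsigned and the subtraction happens only after
   the first comparison, the check itself cannot wrap around. */
static bool range_is_valid(size_t buf_size, size_t offset, size_t length)
{
    return offset <= buf_size && length <= buf_size - offset;
}
```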
Does your code limit the memory resources a user may request?
If your code does not limit the memory resources a user may request, then a malicious user can mount a denial of service attack by requesting more memory than is available in the system.
When copying data from kernel to user space, is any padding added?
If you add padding to align the bytes in user space, you should zero the padding to make sure you are not adding spurious (or even malicious) data to the user-space buffer.
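A simple way to make sure padding bytes carry no leftover kernel data is to zero the whole structure before filling in its fields; the structure here is hypothetical:

```c
#include <string.h>

struct reply {                     /* hypothetical reply returned to user space */
    unsigned char version;         /* the compiler may insert padding after this */
    unsigned int  value;
};

static void fill_reply(struct reply *r, unsigned int value)
{
    memset(r, 0, sizeof(*r));      /* zero everything, padding included */
    r->version = 1;
    r->value = value;
}
```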
This checklist is intended to help you determine whether your kernel-level code is using kernel messages correctly. You can skip this checklist if you are not writing kernel-level code.
Does your code generate kernel messages?
Kernel code often generates messages to the console for debugging purposes. If your code does this, be careful not to include any sensitive information in the messages. In addition, you need to limit the rate and total number of such messages, because an attacker who can figure out how to trigger these messages can use them to mount a denial of service attack. Also, any logging mechanism can be attacked with a denial of service attack. You should be prepared to handle such an attack without hanging or causing a kernel panic.
© 2008 Apple Inc. All Rights Reserved. (Last updated: 2008-05-23)