

3.10 Options That Control Optimization

These options control various sorts of optimizations.

Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you would expect from the source code.

Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.

The compiler performs optimization based on the knowledge it has of the program. Optimization levels -O2 and above, in particular, enable unit-at-a-time mode, which allows the compiler to consider information gained from later functions in the file when compiling a function. Compiling multiple files at once to a single output file in unit-at-a-time mode allows the compiler to use information gained from all of the files when compiling each of them.
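
As an illustration (the file and function names here are hypothetical), in unit-at-a-time mode GCC compiling the file below at -O2 can take the body of square into account, and possibly inline it, when compiling scale, even though square is defined later in the file:

          /* whole-file.c -- hypothetical example */
          static int square (int x);

          int scale (int v)
          {
            return square (v) + 1;  /* candidate for inlining at -O2 */
          }

          static int square (int x)
          {
            return x * x;
          }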

Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed.

-O
-O1
Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.

With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time.

-O turns on the following optimization flags:

          -fdefer-pop 
          -fdelayed-branch 
          -fguess-branch-probability 
          -fcprop-registers 
          -floop-optimize 
          -fif-conversion 
          -fif-conversion2 
          -ftree-ccp 
          -ftree-dce 
          -ftree-dominator-opts 
          -ftree-dse 
          -ftree-ter 
          -ftree-lrs 
          -ftree-sra 
          -ftree-copyrename 
          -ftree-fre 
          -ftree-ch 
          -fmerge-constants
     

-O also turns on -fomit-frame-pointer on machines where doing so does not interfere with debugging.

-O doesn't turn on -ftree-sra for the Ada compiler. This option must be explicitly specified on the command line to be enabled for the Ada compiler.

-O2
Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or function inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.

-O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags:

          -fthread-jumps 
          -fcrossjumping 
          -foptimize-sibling-calls 
          -fcse-follow-jumps  -fcse-skip-blocks 
          -fgcse  -fgcse-lm  
          -fexpensive-optimizations 
          -fstrength-reduce 
          -frerun-cse-after-loop  -frerun-loop-opt 
          -fcaller-saves 
          -fforce-mem 
          -fpeephole2 
          -fschedule-insns  -fschedule-insns2 
          -fsched-interblock  -fsched-spec 
          -fregmove 
          -fstrict-aliasing 
          -fdelete-null-pointer-checks 
          -freorder-blocks  -freorder-functions 
          -funit-at-a-time 
          -falign-functions  -falign-jumps 
          -falign-loops  -falign-labels 
          -ftree-pre
     

Please note the warning under -fgcse about invoking -O2 on programs that use computed gotos.

In Apple's version of GCC, -fstrict-aliasing, -freorder-blocks, and -fsched-interblock are disabled by default when optimizing.

-O3
Optimize yet more. -O3 turns on all optimizations specified by -O2 and also turns on the -finline-functions, -funswitch-loops and -fgcse-after-reload options.
-O0
Do not optimize. This is the default.
-fast
Optimize for maximum performance. -fast changes the overall optimization strategy of GCC in order to produce the fastest possible running code for PPC7450 and G5 architectures. By default, -fast optimizes for G5. Programs optimized for G5 will not run on PPC7450. To optimize for PPC7450, add -mcpu=7450 on the command line.

-fast currently enables the following optimization flags (for G5 and PPC7450). These flags may change in the future. You cannot override any of these options if you use -fast except by setting -mcpu=7450 (or -fPIC, see below).

          -O3
          -falign-loops-max-skip=15
          -falign-jumps-max-skip=15
          -falign-loops=16
          -falign-jumps=16
          -falign-functions=16
          -malign-natural (except when -fastf is specified)
          -ffast-math
          -fstrict-aliasing
          -funroll-loops
          -ftree-loop-linear
          -ftree-loop-memset
          -mcpu=G5
          -mpowerpc-gpopt
          -mtune=G5  (unless -mtune=G4 is specified).
          -fsched-interblock
          -fgcse-sm
          -mpowerpc64
     

To build shared libraries with -fast, specify -fPIC on the command line as -fast turns on -mdynamic-no-pic otherwise.

Important notes: -ffast-math results in code that is not necessarily IEEE-compliant. -fstrict-aliasing is highly likely to break non-standard-compliant programs. -malign-natural only works properly if the entire program is compiled with it, and none of the standard headers/libraries contain any code that changes alignment when this option is used.

On Intel targets, -fast currently enables the following optimization flags:

          -O3
          -fomit-frame-pointer
          -fstrict-aliasing
          -momit-leaf-frame-pointer
          -fno-tree-pre
          -falign-loops
     

All choices of flags enabled by -fast are subject to change without notice.

-Os
Optimize for size, but not at the expense of speed. -Os enables all -O2 optimizations that do not typically increase code size. However, instructions are chosen for best performance, regardless of size. To optimize solely for size on Darwin, use -Oz (APPLE ONLY).

The following options are set for -O2, but are disabled under -Os:

          -falign-functions  -falign-jumps  -falign-loops 
          -falign-labels  -freorder-blocks  -freorder-blocks-and-partition 
          -fprefetch-loop-arrays
     

When optimizing with -Os or -Oz (APPLE ONLY) on Darwin, any function up to 30 “estimated insns” in size will be considered for inlining. When compiling C and Objective-C source files with -Os or -Oz on Darwin, functions explicitly marked with the inline keyword up to 450 “estimated insns” in size will be considered for inlining. When compiling for Apple POWERPC targets, -Os and -Oz (APPLE ONLY) disable use of the string instructions even though they would usually be smaller, because the kernel can't emulate them correctly in some rare cases. This behavior is not portable to any other GCC environment, and will not affect most programs at all. If you really want the string instructions, use -mstring.

-Oz
(APPLE ONLY) Optimize for size, regardless of performance. -Oz enables the same optimization flags that -Os uses, but -Oz also enables other optimizations intended solely to reduce code size. In particular, instructions that encode into fewer bytes are preferred over longer instructions that execute in fewer cycles. -Oz on Darwin is very similar to -Os in FSF distributions of GCC. -Oz employs the same inlining limits and avoids string instructions just like -Os.

If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.

Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo would be -fno-foo. In the table below, only one of the forms is listed—the one you typically will use. You can figure out the other form by either removing `no-' or adding it.

The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare cases when “fine-tuning” of optimizations to be performed is desired.

-fno-default-inline
Do not make member functions inline by default merely because they are defined inside the class scope (C++ only). Otherwise, when you specify -O, member functions defined inside class scope are compiled inline by default; i.e., you don't need to add `inline' in front of the member function name.
-fno-defer-pop
Always pop the arguments to each function call as soon as that function returns. For machines which must pop arguments after a function call, the compiler normally lets arguments accumulate on the stack for several function calls and pops them all at once.

Disabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fforce-mem
Force memory operands to be copied into registers before doing arithmetic on them. This produces better code by making all memory references potential common subexpressions. When they are not common subexpressions, instruction combination should eliminate the separate register-load.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fforce-addr
Force memory address constants to be copied into registers before doing arithmetic on them. This may produce better code just as -fforce-mem may.
-fomit-frame-pointer
Don't keep the frame pointer in a register for functions that don't need one. This avoids the instructions to save, set up and restore frame pointers; it also makes an extra register available in many functions. It also makes debugging impossible on some machines.

On some machines, such as the VAX, this flag has no effect, because the standard calling sequence automatically handles the frame pointer and nothing is saved by pretending it doesn't exist. The machine-description macro FRAME_POINTER_REQUIRED controls whether a target machine supports this flag. See Register Usage.

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-foptimize-sibling-calls
Optimize sibling and tail recursive calls.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fno-inline
Don't pay attention to the inline keyword. Normally this option is used to keep the compiler from expanding any functions inline. Note that if you are not optimizing, no functions can be expanded inline.
-finline-functions
Integrate all simple functions into their callers. The compiler heuristically decides which functions are simple enough to be worth integrating in this way.

If all calls to a given function are integrated, and the function is declared static, then the function is normally not output as assembler code in its own right.

Enabled at level -O3.

-finline-limit=n
By default, GCC limits the size of functions that can be inlined. This flag allows the control of this limit for functions that are explicitly marked as inline (i.e., marked with the inline keyword or defined within the class definition in C++). n is the size of functions that can be inlined in number of pseudo instructions (not counting parameter handling). The default value of n is 600. Increasing this value can result in more inlined code at the cost of compilation time and memory consumption. Decreasing it usually makes compilation faster, and less code will be inlined (which presumably means slower programs). This option is particularly useful for programs that use inlining heavily, such as those based on recursive templates in C++.

Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of these parameters as follows:

max-inline-insns-single
is set to n/2.
max-inline-insns-auto
is set to n/2.
min-inline-insns
is set to 130 or n/4, whichever is smaller.
max-inline-insns-rtl
is set to n.
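
For example, a hypothetical -finline-limit=1000 would set max-inline-insns-single and max-inline-insns-auto to 500, min-inline-insns to 130, and max-inline-insns-rtl to 1000, exactly as if the corresponding --param options had been given individually.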

See below for documentation of the individual parameters controlling inlining.

Note: in this particular context, a pseudo instruction is an abstract measurement of a function's size. It does not represent a count of assembly instructions, and its exact meaning may change from one release to another.

-fkeep-inline-functions
In C, emit static functions that are declared inline into the object file, even if the function has been inlined into all of its callers. This switch does not affect functions using the extern inline extension in GNU C. In C++, emit any and all inline functions into the object file.
-fkeep-static-consts
Emit variables declared static const when optimization isn't turned on, even if the variables aren't referenced.

GCC enables this option by default. If you want to force the compiler to check if the variable was referenced, regardless of whether or not optimization is turned on, use the -fno-keep-static-consts option.

-fmerge-constants
Attempt to merge identical constants (string constants and floating point constants) across compilation units.

This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior.

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fmerge-all-constants
Attempt to merge identical constants and identical variables.

This option implies -fmerge-constants. In addition to -fmerge-constants, this considers, for example, constant initialized arrays and initialized constant variables with integral or floating point types. Languages like C or C++ require each non-automatic variable to have a distinct location, so using this option will result in non-conforming behavior.
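
For example (a sketch, not part of the original text), the two arrays below are required by C to have distinct addresses, but -fmerge-all-constants may give them the same address:

          static const int table_a[3] = { 1, 2, 3 };
          static const int table_b[3] = { 1, 2, 3 };

          int tables_are_distinct (void)
          {
            /* A conforming implementation must return 1; with
               -fmerge-all-constants the arrays may be merged,
               and this can return 0.  */
            return table_a != table_b;
          }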

-fmodulo-sched
Perform swing modulo scheduling immediately before the first scheduling pass. This pass looks at innermost loops and reorders their instructions by overlapping different iterations.
-fno-branch-count-reg
Do not use “decrement and branch” instructions on a count register, but instead generate a sequence of instructions that decrement a register, compare it against zero, then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390.

The default is -fbranch-count-reg, enabled when -fstrength-reduce is enabled.

-fno-function-cse
Do not put function addresses in registers; make each instruction that calls a constant function contain the function's address explicitly.

This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not used.

The default is -ffunction-cse.

-fno-zero-initialized-in-bss
If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code.

This option turns off this behavior because some programs explicitly rely on variables going to the data section, for example so that the resulting executable can find the beginning of that section and/or make assumptions based on that.

The default is -fzero-initialized-in-bss.
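
A small sketch of the kind of variables affected (names are illustrative):

          /* With the default -fzero-initialized-in-bss, both of these
             normally go into the BSS section; with
             -fno-zero-initialized-in-bss they go into the data section.  */
          int counter;             /* implicitly zero-initialized */
          static int flags = 0;    /* explicitly zero-initialized */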

-fbounds-check
For front-ends that support it, generate additional code to check that indices used to access arrays are within the declared range. This is currently only supported by the Java and Fortran front-ends, where this option defaults to true and false respectively.
-fmudflap -fmudflapth -fmudflapir
For front-ends that support it (C and C++), instrument all risky pointer/array dereferencing operations, some standard library string/heap functions, and some other associated constructs with range/validity tests. Modules so instrumented should be immune to buffer overflows, invalid heap use, and some other classes of C/C++ programming errors. The instrumentation relies on a separate runtime library (libmudflap), which will be linked into a program if -fmudflap is given at link time. Run-time behavior of the instrumented program is controlled by the MUDFLAP_OPTIONS environment variable. See env MUDFLAP_OPTIONS=-help a.out for its options.

Use -fmudflapth instead of -fmudflap to compile and to link if your program is multi-threaded. Use -fmudflapir, in addition to -fmudflap or -fmudflapth, if instrumentation should ignore pointer reads. This produces less instrumentation (and therefore faster execution) and still provides some protection against outright memory corrupting writes, but allows erroneously read data to propagate within a program.

-fstrength-reduce
Perform the optimizations of loop strength reduction and elimination of iteration variables.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fthread-jumps
Perform optimizations where we check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redirected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fcse-follow-jumps
In common subexpression elimination, scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an if statement with an else clause, CSE will follow the jump when the condition tested is false.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fcse-skip-blocks
This is similar to -fcse-follow-jumps, but causes CSE to follow jumps which conditionally skip over blocks. When CSE encounters a simple if statement with no else clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the if.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-frerun-cse-after-loop
Re-run common subexpression elimination after loop optimizations have been performed.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-frerun-loop-opt
Run the loop optimizer twice.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fgcse
Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation.

Note: When compiling a program using computed gotos, a GCC extension, you may get better runtime performance if you disable the global common subexpression elimination pass by adding -fno-gcse to the command line.
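
Computed gotos use GCC's labels-as-values extension; a minimal, purely illustrative sketch of such code is:

          /* Tiny dispatch loop built on computed gotos.  Code like this
             may benefit from -fno-gcse.  */
          int interp (const int *ops, int n)
          {
            static void *dispatch[] = { &&op_inc, &&op_halt };
            int acc = 0, i = 0;

          next:
            if (i >= n)
              return acc;
            goto *dispatch[ops[i]];    /* computed goto */

          op_inc:
            acc++;
            i++;
            goto next;

          op_halt:
            return acc;
          }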

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fgcse-lm
When -fgcse-lm is enabled, global common subexpression elimination will attempt to move loads which are only killed by stores into themselves. This allows a loop containing a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop.

Enabled by default when gcse is enabled.

-fgcse-sm
When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass will attempt to move stores out of loops. When used in conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop.

Not enabled at any optimization level.

-fgcse-las
When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial and full redundancies).

Not enabled at any optimization level.

-fgcse-after-reload
When -fgcse-after-reload is enabled, a redundant load elimination pass is performed after reload. The purpose of this pass is to clean up redundant spilling.
-floop-optimize
Perform loop optimizations: move constant expressions out of loops, simplify exit test conditions and optionally do strength-reduction as well.

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-floop-optimize2
Perform loop optimizations using the new loop optimizer. The optimizations (loop unrolling, peeling and unswitching, loop invariant motion) are enabled by separate flags.
-fcrossjumping
Perform cross-jumping transformation. This transformation unifies equivalent code and saves code size. The resulting code may or may not perform better than without cross-jumping.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fif-conversion
Attempt to transform conditional jumps into branch-less equivalents. This includes use of conditional moves, min, max, set flags and abs instructions, and some tricks doable by standard arithmetic. The use of conditional execution on chips where it is available is controlled by -fif-conversion2.
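
A typical if-conversion candidate (a sketch): the branch below can often be replaced by a conditional move or a max instruction:

          int imax (int a, int b)
          {
            if (a < b)        /* may become a conditional move or max */
              a = b;
            return a;
          }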

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fif-conversion2
Use conditional execution (where available) to transform conditional jumps into branch-less equivalents.

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fdelete-null-pointer-checks
Use global dataflow analysis to identify and eliminate useless checks for null pointers. The compiler assumes that dereferencing a null pointer would have halted the program. If a pointer is checked after it has already been dereferenced, it cannot be null.

In some environments, this assumption is not true, and programs can safely dereference null pointers. Use -fno-delete-null-pointer-checks to disable this optimization for programs which depend on that behavior.
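
For instance (a sketch), in the function below the pointer is dereferenced before it is tested, so the test can be deleted as provably false:

          int first_or_default (int *p)
          {
            int v = *p;       /* p is assumed non-null from here on */
            if (p == 0)       /* may be removed by
                                 -fdelete-null-pointer-checks */
              return -1;
            return v;
          }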

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fexpensive-optimizations
Perform a number of minor optimizations that are relatively expensive.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-foptimize-register-move
-fregmove
Attempt to reassign register numbers in move instructions and as operands of other simple instructions in order to maximize the amount of register tying. This is especially helpful on machines with two-operand instructions.

Note -fregmove and -foptimize-register-move are the same optimization.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fdelayed-branch
If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions.

Enabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fschedule-insns
If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating point instruction is required.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fschedule-insns2
Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines with a relatively small number of registers and where memory load instructions take more than one cycle.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fno-sched-interblock
Don't schedule instructions across basic blocks. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fno-sched-spec
Don't allow speculative motion of non-load instructions. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-spec-load
Allow speculative motion of some load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-spec-load-dangerous
Allow speculative motion of more load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-stalled-insns=n
Define how many insns (if any) can be moved prematurely from the queue of stalled insns into the ready list, during the second scheduling pass.
-fsched-stalled-insns-dep=n
Define how many insn groups (cycles) will be examined for a dependency on a stalled insn that is a candidate for premature removal from the queue of stalled insns. This has an effect only during the second scheduling pass, and only if -fsched-stalled-insns is used and its value is not zero.
-fsched2-use-superblocks
When scheduling after register allocation, use the superblock scheduling algorithm. Superblock scheduling allows motion across basic block boundaries, resulting in faster schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm.

This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.

-fsched2-use-traces
Use the -fsched2-use-superblocks algorithm when scheduling after register allocation and additionally perform code duplication in order to increase the size of superblocks using the tracer pass. See -ftracer for details on trace formation.

This mode should produce faster but significantly longer programs. Also, without -fbranch-probabilities the traces constructed may not match reality and may hurt performance. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.

-freschedule-modulo-scheduled-loops
Modulo scheduling is performed before traditional scheduling. If a loop was modulo scheduled, later scheduling passes may change its schedule; use this option to prevent that.
-fcaller-saves
Enable values to be allocated in registers that will be clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls. Such allocation is done only when it seems to result in better code than would otherwise be produced.

This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-ftree-pre
Perform Partial Redundancy Elimination (PRE) on trees. This flag is enabled by default at -O2 and -O3.
-ftree-fre
Perform Full Redundancy Elimination (FRE) on trees. The difference between FRE and PRE is that FRE only considers expressions that are computed on all paths leading to the redundant computation. This analysis is faster than PRE, though it exposes fewer redundancies. This flag is enabled by default at -O and higher.
-ftree-ccp
Perform sparse conditional constant propagation (CCP) on trees. This flag is enabled by default at -O and higher.
-ftree-dce
Perform dead code elimination (DCE) on trees. This flag is enabled by default at -O and higher.
-ftree-dominator-opts
Perform a variety of simple scalar cleanups (constant/copy propagation, redundancy elimination, range propagation and expression simplification) based on a dominator tree traversal. This also performs jump threading (to reduce jumps to jumps). This flag is enabled by default at -O and higher.
-ftree-ch
Perform loop header copying on trees. This is beneficial since it increases effectiveness of code motion optimizations. It also saves one jump. This flag is enabled by default at -O and higher. It is not enabled for -Os or -Oz (APPLE ONLY), since it usually increases code size.
-ftree-elim-checks
Perform elimination of checks based on scalar evolution information. This flag is disabled by default.
-ftree-loop-optimize
Perform loop optimizations on trees. This flag is enabled by default at -O and higher.
-ftree-loop-linear
Perform linear loop transformations on trees. This flag can improve cache performance and allow further loop optimizations to take place. This flag is known to have bugs that cause incorrect code to be generated in some rare cases. Note this flag is included in -fast.
-ftree-loop-im
Perform loop invariant motion on trees. This pass moves only invariants that would be hard to handle at RTL level (function calls, operations that expand to nontrivial sequences of insns). With -funswitch-loops it also moves operands of conditions that are invariant out of the loop, so that we can use just trivial invariantness analysis in loop unswitching. The pass also includes store motion.
-ftree-loop-ivcanon
Create a canonical counter for the number of iterations in loops for which determining the number of iterations requires complicated analysis. Later optimizations may then determine the number easily. This is useful especially in connection with unrolling.
-fivopts
Perform induction variable optimizations (strength reduction, induction variable merging and induction variable elimination) on trees.
-ftree-sra
Perform scalar replacement of aggregates. This pass replaces structure references with scalars to prevent committing structures to memory too early. This flag is enabled by default at -O and higher.
-ftree-copyrename
Perform copy renaming on trees. This pass attempts to rename compiler temporaries to other variables at copy locations, usually resulting in variable names which more closely resemble the original variables. This flag is enabled by default at -O and higher.
-ftree-ter
Perform temporary expression replacement during the SSA->normal phase. Single use/single def temporaries are replaced at their use location with their defining expression. This results in non-GIMPLE code, but gives the expanders much more complex trees to work on resulting in better RTL generation. This is enabled by default at -O and higher.
-ftree-lrs
Perform live range splitting during the SSA->normal phase. Distinct live ranges of a variable are split into unique variables, allowing for better optimization later. This is enabled by default at -O and higher.
-ftree-vectorize
Perform loop vectorization on trees.

In Apple's version of GCC, -fstrict-aliasing is enabled by default when loop vectorization is enabled. See the -fstrict-aliasing documentation for more information.

-ftracer
Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function, allowing other optimizations to do a better job.
-funroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies both -fstrength-reduce and -frerun-cse-after-loop. This option makes code larger, and may or may not make it run faster.
-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the same options as -funroll-loops.
-fsplit-ivs-in-unroller
Enables expressing values of induction variables in later iterations of the unrolled loop in terms of the value in the first iteration. This breaks long dependency chains, thus improving the efficiency of the scheduling passes.

A combination of -fweb and CSE is often sufficient to obtain the same effect. However, in cases where the loop body is more complicated than a single basic block, this is not reliable. It also does not work at all on some architectures due to restrictions in the CSE pass.

This optimization is enabled by default.

-fvariable-expansion-in-unroller
With this option, the compiler will create multiple copies of some local variables when unrolling a loop, which can result in superior code.
-fprefetch-loop-arrays
If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.

These options may generate better or worse code; results are highly dependent on the structure of loops within the source code.

-fno-peephole
-fno-peephole2
Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some targets use one, some use the other, a few use both.

-fpeephole is enabled by default. -fpeephole2 enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fno-guess-branch-probability
Do not guess branch probabilities using heuristics.

GCC will use heuristics to guess branch probabilities if they are not provided by profiling feedback (-fprofile-arcs). These heuristics are based on the control flow graph. If some branch probabilities are specified by `__builtin_expect', then the heuristics will be used to guess branch probabilities for the rest of the control flow graph, taking the `__builtin_expect' info into account. The interactions between the heuristics and `__builtin_expect' can be complex, and in some cases, it may be useful to disable the heuristics so that the effects of `__builtin_expect' are easier to understand.
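
A sketch of how `__builtin_expect' interacts with the heuristics: the annotated branch below is treated as unlikely, while any other branches in the function are still guessed heuristically unless -fno-guess-branch-probability is given:

          int handle (int err)
          {
            if (__builtin_expect (err != 0, 0))  /* error path marked unlikely */
              return -1;
            return 0;
          }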

The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-freorder-blocks
Reorder basic blocks in the compiled function in order to reduce the number of taken branches and improve code locality.

Enabled at levels -O2, -O3.

-freorder-blocks-and-partition
In addition to reordering basic blocks in the compiled function in order to reduce the number of taken branches, this option partitions hot and cold basic blocks into separate sections of the assembly and .o files, to improve paging and cache locality.

This optimization is automatically turned off in the presence of exception handling, for linkonce sections, for functions with a user-defined section attribute and on any architecture that does not support named sections.

-freorder-functions
Reorder functions in the object file in order to improve code locality. This is implemented by using special subsections .text.hot for most frequently executed functions and .text.unlikely for unlikely executed functions. Reordering is done by the linker, so the object file format must support named sections and the linker must place them in a reasonable way.

Also, profile feedback must be available to make this option effective. See -fprofile-arcs for details.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-fstrict-aliasing
Allows the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same. For example, an unsigned int can alias an int, but not a void* or a double. A character type may alias any other type.

Pay special attention to code like this:

          union a_union {
            int i;
            double d;
          };
          
          int f() {
            union a_union t;
            t.d = 3.0;
            return t.i;
          }
     

The practice of reading from a different union member than the one most recently written to (called “type-punning”) is common. Even with -fstrict-aliasing, type-punning is allowed, provided the memory is accessed through the union type. So, the code above will work as expected. However, this code might not:

          int f() {
            union a_union t;
            int* ip;
            t.d = 3.0;
            ip = &t.i;
            return *ip;
          }
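
One portable way to reinterpret the bytes, which stays within the rules even under -fstrict-aliasing, is to copy them with memcpy (a sketch, not part of the original example):

          #include <string.h>

          int f2 (void)
          {
            double d = 3.0;
            int i;
            /* memcpy accesses memory with character type, which may
               alias anything, so this is well defined.  */
            memcpy (&i, &d, sizeof i);
            return i;
          }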
     

Every language that wishes to perform language-specific alias analysis should define a function that computes, given a tree node, an alias set for the node. Nodes in different alias sets are not allowed to alias. For an example, see the C front-end function c_get_alias_set.

Enabled at levels -O2, -O3, -Os, -Oz (APPLE ONLY).

-falign-functions
-falign-functions=n
Align the start of functions to the next power-of-two greater than n, skipping up to n bytes. For instance, -falign-functions=32 aligns functions to the next 32-byte boundary, but -falign-functions=24 would align to the next 32-byte boundary only if this can be done by skipping 23 bytes or less.

-fno-align-functions and -falign-functions=1 are equivalent and mean that functions will not be aligned.

Some assemblers only support this flag when n is a power of two; in that case, it is rounded up.

If n is not specified or is zero, use a machine-dependent default.

Enabled at levels -O2, -O3.

-falign-labels
-falign-labels=n
Align all branch targets to a power-of-two boundary, skipping up to n bytes like -falign-functions. This option can easily make code slower, because it must insert dummy operations for when the branch target is reached in the usual flow of the code.

-fno-align-labels and -falign-labels=1 are equivalent and mean that labels will not be aligned.

If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead.

If n is not specified or is zero, use a machine-dependent default which is very likely to be `1', meaning no alignment.

Enabled at levels -O2, -O3.

-falign-loops-max-skip
-falign-loops-max-skip=n
Align loops to a power-of-two boundary, but do not skip more than n bytes to do so.
-falign-loops
-falign-loops=n
Align loops to a power-of-two boundary, skipping up to n bytes like -falign-functions. The hope is that the loop will be executed many times, which will make up for any execution of the dummy operations.

-fno-align-loops and -falign-loops=1 are equivalent and mean that loops will not be aligned.

If n is not specified or is zero, use a machine-dependent default.

Enabled at levels -O2, -O3.

-falign-jumps
-falign-jumps=n
Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping, skipping up to n bytes like -falign-functions. In this case, no dummy operations need be executed.
-falign-jumps-max-skip
-falign-jumps-max-skip=n
Align branch targets to a power-of-two boundary, but do not skip more than n bytes to do so.

-fno-align-jumps and -falign-jumps=1 are equivalent and mean that branch targets will not be aligned.

If n is not specified or is zero, use a machine-dependent default.

Enabled at levels -O2, -O3.

-funit-at-a-time
Parse the whole compilation unit before starting to produce code. This allows some extra optimizations to take place but consumes more memory (in general). There are some compatibility issues with unit-at-a-time mode; as a temporary workaround, -fno-unit-at-a-time can be used, but this scheme may not be supported by future releases of GCC.

Enabled at levels -O2, -O3.

-fweb
Constructs webs as commonly used for register allocation purposes and assigns each web an individual pseudo register. This allows the register allocation pass to operate on pseudos directly, but also strengthens several other optimization passes, such as CSE, the loop optimizer and the trivial dead code remover. It can, however, make debugging impossible, since variables will no longer stay in a “home register”.

Enabled by default with -funroll-loops.

-fno-cprop-registers
After register allocation and post-register allocation instruction splitting, we perform a copy-propagation pass to try to reduce scheduling dependencies and occasionally eliminate the copy.

Disabled at levels -O, -O2, -O3, -Os, -Oz (APPLE ONLY).

-fprofile-generate
Enable options usually used for instrumenting the application to produce a profile useful for later recompilation with profile-feedback-based optimization. You must use -fprofile-generate both when compiling and when linking your program.

The following options are enabled: -fprofile-arcs, -fprofile-values, -fvpt.

-fprofile-use
Enable profile feedback directed optimizations, and optimizations generally profitable only with profile feedback available.

The following options are enabled: -fbranch-probabilities, -fvpt, -funroll-loops, -fpeel-loops, -ftracer.

The following options control compiler behavior regarding floating point arithmetic. These options trade off between speed and correctness. All must be specifically enabled.

-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.

This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.

-ffast-math
Sets -fno-math-errno, -funsafe-math-optimizations,
-fno-trapping-math, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range.

This option causes the preprocessor macro __FAST_MATH__ to be defined.

This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.

-fno-math-errno
Do not set ERRNO after calling math functions that are executed with a single instruction, e.g., sqrt. A program that relies on IEEE exceptions for math error handling may want to use this flag for speed while maintaining IEEE arithmetic compatibility.

(APPLE ONLY) The Darwin math libraries never set errno, so there is no point in having the compiler generate code that assumes they might. Therefore, the default is -fno-math-errno on Darwin.
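
A sketch of code whose behavior this flag affects: with -fno-math-errno, GCC may expand the sqrt call below to a single instruction, and the errno check may then no longer detect a domain error:

          #include <errno.h>
          #include <math.h>

          double checked_sqrt (double x)
          {
            double r;
            errno = 0;
            r = sqrt (x);       /* may become a bare sqrt instruction */
            if (errno != 0)     /* may no longer fire with -fno-math-errno */
              return 0.0;
            return r;
          }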

-funsafe-math-optimizations
Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations.

This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.

The default is -fno-unsafe-math-optimizations.

-ffinite-math-only
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.

This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications.

The default is -fno-finite-math-only.
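
A sketch of the consequence: under -ffinite-math-only the compiler may assume x is never a NaN, so a test like the one below can be folded away:

          #include <math.h>

          int value_is_nan (double x)
          {
            return isnan (x);   /* may be folded to 0 under
                                   -ffinite-math-only */
          }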

-fno-trapping-math
Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option implies -fno-signaling-nans. Setting this option may allow faster code if one relies on “non-stop” IEEE arithmetic, for example.

This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.

The default is -ftrapping-math.

-frounding-math
Disable transformations and optimizations that assume default floating point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating point expressions at compile-time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.

The default is -fno-rounding-math.

This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide finer control of this setting using C99's FENV_ACCESS pragma. This command line option will be used to specify the default state for FENV_ACCESS.
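
A sketch of the kind of program this option is meant for (it changes the rounding mode at run time through the C99 <fenv.h> interface):

          #include <fenv.h>

          /* Compile with -frounding-math so the addition is not
             constant-folded or transformed assuming round-to-nearest.  */
          double add_rounded_up (double a, double b)
          {
            fesetround (FE_UPWARD);
            return a + b;
          }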

-fsignaling-nans
Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math.

This option causes the preprocessor macro __SUPPORT_SNAN__ to be defined.

The default is -fno-signaling-nans.

This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior.

-fsingle-precision-constant
Treat floating point constants as single precision constants instead of implicitly converting them to double precision constants.
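
For example (a sketch), the constant 0.1 below is normally a double, which forces the multiplication to be performed in double precision; with -fsingle-precision-constant it is treated as a float:

          float scale_by_tenth (float x)
          {
            return x * 0.1;   /* 0.1 is treated as a float constant
                                 under -fsingle-precision-constant */
          }
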
-fcx-limited-range
-fno-cx-limited-range
When enabled, this option states that a range reduction step is not needed when performing complex division. The default is -fno-cx-limited-range, but is enabled by -ffast-math.

This option controls the default setting of the ISO C99 CX_LIMITED_RANGE pragma. Nevertheless, the option applies to all languages.

The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce broken code.

-fbranch-probabilities
After running a program compiled with -fprofile-arcs (see Options for Debugging Your Program or gcc), you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of times each branch was taken. When the program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file. The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for both compilations.

With -fbranch-probabilities, GCC puts a `REG_BR_PROB' note on each `JUMP_INSN' and `CALL_INSN'. These can be used to improve optimization. Currently, they are only used in one place: in reorg.c, instead of guessing which path a branch is most likely to take, the `REG_BR_PROB' values are used to exactly determine which path is taken more often.

-fprofile-values
If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered.

With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions and adds `REG_VALUE_PROFILE' notes to instructions for their later usage in optimizations.

Enabled with -fprofile-generate and -fprofile-use.

-fvpt
If combined with -fprofile-arcs, it instructs the compiler to add code to gather information about values of expressions.

With -fbranch-probabilities, it reads back the data gathered and actually performs the optimizations based on them. Currently the optimizations include specialization of division operations using knowledge about the value of the denominator.

-fspeculative-prefetching
If combined with -fprofile-arcs, it instructs the compiler to add code to gather information about addresses of memory references in the program.

With -fbranch-probabilities, it reads back the data gathered and issues prefetch instructions according to them. In addition to the opportunities noticed by -fprefetch-loop-arrays, it also notices more complicated memory access patterns, for example accesses to data stored in a linked list whose elements are usually allocated sequentially.

In order to prevent issuing double prefetches, usage of -fspeculative-prefetching implies -fno-prefetch-loop-arrays.

Enabled with -fprofile-generate and -fprofile-use.

-frename-registers
Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization will most benefit processors with lots of registers. Depending on the debug information format adopted by the target, however, it can make debugging impossible, since variables will no longer stay in a “home register”.

Not enabled by default at any level because it has known bugs.

-ftracer
Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function, allowing other optimizations to do a better job.

Enabled with -fprofile-use.

-funroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop and -fweb. It also turns on complete loop peeling (i.e. complete removal of loops with a small constant number of iterations). This option makes code larger, and may or may not make it run faster.
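
A sketch of a loop that qualifies, since its iteration count is known at compile time:

          void clear16 (int *a)
          {
            int i;
            /* With -funroll-loops GCC may replace this loop with
               sixteen consecutive stores.  */
            for (i = 0; i < 16; i++)
              a[i] = 0;
          }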

Enabled with -fprofile-use.

-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the same options as -funroll-loops.
-fpeel-loops
Peels loops for which there is enough information (from profile feedback) that they do not roll much. It also turns on complete loop peeling (i.e. complete removal of loops with a small constant number of iterations).

Enabled with -fprofile-use.

-fmove-loop-invariants
Enables the loop invariant motion pass in the new loop optimizer. Enabled at level -O1.
-funswitch-loops
Move branches with loop invariant conditions out of the loop, with duplicates of the loop on both branches (modified according to result of the condition).
-fprefetch-loop-arrays
If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.

Disabled at levels -Os and -Oz (APPLE ONLY).

-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file.

Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.

Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker will create larger object and executable files and will also be slower. You will not be able to use gprof on all systems if you specify this option and you may have problems with debugging if you specify both this option and -g.

-fbranch-target-load-optimize
Perform branch target register load optimization before prologue / epilogue threading. The use of target registers can typically be exposed only during reload, thus hoisting loads out of loops and doing inter-block scheduling needs a separate optimization pass.
-fbranch-target-load-optimize2
Perform branch target register load optimization after prologue / epilogue threading.
-fbtr-bb-exclusive
When performing branch target register load optimization, don't reuse branch target registers within any basic block.
-fstack-protector
Emit extra code to check for buffer overflows, such as stack smashing attacks. This is done by adding a guard variable to functions with vulnerable objects. This includes functions that call alloca, and functions with buffers larger than 8 bytes. The guards are initialized when a function is entered and then checked when the function exits. If a guard check fails, an error message is printed and the program exits.
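
A sketch of a function that would receive a guard, because it has a local character buffer larger than 8 bytes:

          #include <string.h>

          void copy_name (const char *src)
          {
            char name[64];        /* vulnerable object: gets a guard */
            strcpy (name, src);   /* an overflow here is detected when
                                     the function returns */
          }
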
-fstack-protector-all
Like -fstack-protector except that all functions are protected.
--param name=value
In some places, GCC uses various constants to control the amount of optimization that is done. For example, GCC will not inline functions that contain more than a certain number of instructions. You can control some of these constants on the command-line using the --param option.

The names of specific parameters, and the meaning of the values, are tied to the internals of the compiler, and are subject to change without notice in future releases.

In each case, the value is an integer. The allowable choices for name are given in the following table:

sra-max-structure-size
The maximum structure size, in bytes, at which the scalar replacement of aggregates (SRA) optimization will perform block copies. The default value, 0, implies that GCC will select the most appropriate size itself.
sra-field-structure-ratio
The threshold ratio (as a percentage) between instantiated fields and the complete structure size. We say that if the ratio of the number of bytes in instantiated fields to the number of bytes in the complete structure exceeds this parameter, then block copies are not used. The default is 75.
max-crossjump-edges
The maximum number of incoming edges to consider for crossjumping. The algorithm used by -fcrossjumping is O(N^2) in the number of edges incoming to each block. Increasing values mean more aggressive optimization, making the compile time increase with probably small improvement in executable size.
min-crossjump-insns
The minimum number of instructions which must be matched at the end of two blocks before crossjumping will be performed on them. This value is ignored in the case where all instructions in the block being crossjumped from are matched. The default value is 5.
max-goto-duplication-insns
The maximum number of instructions to duplicate to a block that jumps to a computed goto. To avoid O(N^2) behavior in a number of passes, GCC factors computed gotos early in the compilation process, and unfactors them as late as possible. Only computed jumps at the end of basic blocks with no more than max-goto-duplication-insns are unfactored. The default value is 8.
max-delay-slot-insn-search
The maximum number of instructions to consider when looking for an instruction to fill a delay slot. If more than this arbitrary number of instructions is searched, the time savings from filling the delay slot will be minimal so stop searching. Increasing values mean more aggressive optimization, making the compile time increase with probably small improvement in executable run time.
max-delay-slot-live-search
When trying to fill delay slots, the maximum number of instructions to consider when searching for a block with valid live register information. Increasing this arbitrarily chosen value means more aggressive optimization, increasing the compile time. This parameter should be removed when the delay slot code is rewritten to maintain the control-flow graph.
max-gcse-memory
The approximate maximum amount of memory that will be allocated in order to perform the global common subexpression elimination optimization. If more memory than specified is required, the optimization will not be done.
max-gcse-passes
The maximum number of passes of GCSE to run. The default is 1.
max-pending-list-length
The maximum number of pending dependencies scheduling will allow before flushing the current state and starting over. Large functions with few branches or calls can create excessively large lists which needlessly consume memory and resources.
max-inline-insns-single
Several parameters control the tree inliner used in gcc. This number sets the maximum number of instructions (counted in GCC's internal representation) in a single function that the tree inliner will consider for inlining. This only affects functions declared inline and methods implemented in a class declaration (C++). The default value is 450.
max-inline-insns-auto
When you use -finline-functions (included in -O3), a lot of functions that would otherwise not be considered for inlining by the compiler will be investigated. To those functions, a different (more restrictive) limit compared to functions declared inline can be applied. The default value is 90.
large-function-insns
The limit specifying really large functions. For functions larger than this limit after inlining, inlining is constrained by --param large-function-growth. This parameter is useful primarily to avoid extreme compilation time caused by non-linear algorithms used by the backend. This parameter is ignored when -funit-at-a-time is not used. The default value is 2700.
large-function-growth
Specifies the maximal growth of a large function caused by inlining, in percent. This parameter is ignored when -funit-at-a-time is not used. The default value is 100, which limits large function growth to 2.0 times the original size.
inline-unit-growth
Specifies the maximal overall growth of the compilation unit caused by inlining. This parameter is ignored when -funit-at-a-time is not used. The default value is 50, which limits unit growth to 1.5 times the original size.
max-inline-insns-recursive
max-inline-insns-recursive-auto
Specifies the maximum number of instructions an out-of-line copy of a self-recursive inline function can grow to by performing recursive inlining.

For functions declared inline, --param max-inline-insns-recursive is taken into account. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled and --param max-inline-insns-recursive-auto is used. The default value is 450.

max-inline-recursive-depth
max-inline-recursive-depth-auto
Specifies maximum recursion depth used by the recursive inlining.

For functions declared inline, --param max-inline-recursive-depth is taken into account. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled and --param max-inline-recursive-depth-auto is used. The default value is 8.

inline-call-cost
Specify the cost of a call instruction relative to simple arithmetic operations (which have a cost of 1). Increasing this cost disqualifies inlining of non-leaf functions and at the same time increases the size of leaf functions that are believed to reduce function size by being inlined. In effect it increases the amount of inlining for code having a large abstraction penalty (many functions that just pass the arguments on to other functions) and decreases inlining for code with a low abstraction penalty. The default value is 16.
max-unrolled-insns
The maximum number of instructions that a loop may have for it to be unrolled. If the loop is unrolled, this parameter also determines how many times the loop code is unrolled.
max-average-unrolled-insns
The maximum number of instructions, biased by the probabilities of their execution, that a loop may have for it to be unrolled. If the loop is unrolled, this parameter also determines how many times the loop code is unrolled.
max-unroll-times
The maximum number of unrollings of a single loop.
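For example (loop and values invented), a small counted loop such as the following could be unrolled under -funroll-loops, with the number of copies limited by something like --param max-unroll-times=4:

     /* The unroller may duplicate this body up to max-unroll-times
        times, provided the unrolled loop stays within
        max-unrolled-insns.  */
     void
     scale (float *a, int n)
     {
       int i;
       for (i = 0; i < n; i++)
         a[i] = a[i] * 2.0f;
     }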
max-peeled-insns
The maximum number of instructions that a loop may have for it to be peeled. If the loop is peeled, this parameter also determines how many times the loop code is peeled.
max-peel-times
The maximum number of peelings of a single loop.
max-completely-peeled-insns
The maximum number of insns of a completely peeled loop.
max-completely-peel-times
The maximum number of iterations of a loop to be suitable for complete peeling.
max-unswitch-insns
The maximum number of insns of an unswitched loop.
max-unswitch-level
The maximum number of branches unswitched in a single loop.
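Loop unswitching moves a loop-invariant test out of a loop by duplicating the loop body, roughly as in this sketch (the transformation is enabled by -funswitch-loops):

     /* The test on `flag' does not change inside the loop, so the loop
        can be unswitched into two specialized loops, one per value of
        `flag'.  max-unswitch-insns caps the size of loops duplicated
        this way; max-unswitch-level caps how many such tests are
        hoisted out of a single loop.  */
     void
     fill (int *a, int n, int flag)
     {
       int i;
       for (i = 0; i < n; i++)
         {
           if (flag)
             a[i] = 1;
           else
             a[i] = 0;
         }
     }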
lim-expensive
The minimum cost of an expensive expression in the loop invariant motion.
iv-consider-all-candidates-bound
Bound on the number of induction variable candidates below which all candidates are considered for each use in induction variable optimizations. If there are more candidates than this, only the most relevant ones are considered, to avoid quadratic time complexity.
iv-max-considered-uses
The induction variable optimizations give up on loops that contain more induction variable uses than this value.
iv-always-prune-cand-set-bound
If the number of candidates in the set is smaller than this value, an attempt is always made to remove unnecessary induction variables from the set during its optimization when a new induction variable is added to the set.
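For orientation, even a simple loop like the following (illustrative only) offers several induction variable candidates, such as the counter i and the address a + i; these bounds keep the number of candidates and uses that the optimizer analyzes manageable:

     /* Candidate induction variables here include `i' itself and the
        pointer value a + i used for the memory access; the iv-*
        parameters above bound how many candidates and uses are
        considered.  */
     long
     sum (const long *a, int n)
     {
       long s = 0;
       int i;
       for (i = 0; i < n; i++)
         s += a[i];
       return s;
     }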
scev-max-expr-size
Bound on size of expressions used in the scalar evolutions analyzer. Large expressions slow the analyzer.
max-iterations-to-track
The maximum number of iterations of a loop that the brute force algorithm for analyzing the number of loop iterations tries to evaluate.
hot-bb-count-fraction
Select the fraction of the maximal count of repetitions of a basic block in the program that a given basic block needs to have in order to be considered hot.
hot-bb-frequency-fraction
Select the fraction of the maximal frequency of executions of a basic block in a function that a given basic block needs to have in order to be considered hot.
tracer-dynamic-coverage
tracer-dynamic-coverage-feedback
This value is used to limit superblock formation once the given percentage of executed instructions is covered. This limits unnecessary code size expansion.

The tracer-dynamic-coverage-feedback value is used only when profile feedback is available. Real profiles (as opposed to statically estimated ones) are much less balanced, allowing the threshold to be a larger value.

tracer-max-code-growth
Stop tail duplication once code growth has reached the given percentage. This is a rather rough limit, as most of the duplicates will be eliminated later by cross jumping, so it may be set to much higher values than the desired code growth.
tracer-min-branch-ratio
Stop reverse growth when the reverse probability of best edge is less than this threshold (in percent).
tracer-min-branch-probability
tracer-min-branch-probability-feedback
Stop forward growth if the best edge has a probability lower than this threshold.

Similarly to tracer-dynamic-coverage two values are present, one for compilation for profile feedback and one for compilation without. The value for compilation with profile feedback needs to be more conservative (higher) in order to make tracer effective.

max-cse-path-length
Maximum number of basic blocks on a path that CSE considers. The default is 10.
global-var-threshold
Counts the number of function calls (n) and the number of call-clobbered variables (v). If n * v is larger than this limit, a single artificial variable will be created to represent all the call-clobbered variables at function call sites. This artificial variable will then be made to alias every call-clobbered variable. (The product is computed as int * size_t on the host machine; beware overflow.)
max-aliased-vops
Maximum number of virtual operands allowed to represent aliases before triggering the alias grouping heuristic. Alias grouping reduces compile times and memory consumption needed for aliasing at the expense of precision loss in alias information.
ggc-min-expand
GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector's heap should be allowed to expand between collections. Tuning this may improve compilation speed; it has no effect on code generation.

The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If getrlimit is available, the notion of "RAM" is the smallest of actual RAM and RLIMIT_DATA or RLIMIT_AS. If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min-heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging.
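For instance, on a machine with 512MB of RAM the default works out to 30% + 70% * 0.5 = 65%.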

ggc-min-heapsize
Minimum size of the garbage collector's heap before it begins bothering to collect garbage. The first collection occurs after the heap expands by ggc-min-expand% beyond ggc-min-heapsize. Again, tuning this may improve compilation speed, and has no effect on code generation.

The default is the smaller of RAM/8, RLIMIT_RSS, or a limit which tries to ensure that RLIMIT_DATA or RLIMIT_AS are not exceeded, but with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If GCC is not able to calculate RAM on a particular platform, the lower bound is used. Setting this parameter very large effectively disables garbage collection. Setting this parameter and ggc-min-expand to zero causes a full collection to occur at every opportunity.

max-reload-search-insns
The maximum number of instructions that reload should look backward for an equivalent register. Increasing this value means more aggressive optimization, increasing compile time, with probably slightly better performance. The default value is 100.
max-cselib-memory-location
The maximum number of memory locations cselib should take into account. Increasing this value means more aggressive optimization, increasing compile time, with probably slightly better performance. The default value is 500.
reorder-blocks-duplicate
reorder-blocks-duplicate-feedback
Used by the basic block reordering pass to decide whether to use an unconditional branch or to duplicate the code at its destination. Code is duplicated when its estimated size is smaller than this value multiplied by the estimated size of an unconditional jump in the hot spots of the program.

The reorder-blocks-duplicate-feedback value is used only when profile feedback is available, and it may be set to higher values than reorder-blocks-duplicate since information about the hot spots is more accurate.

max-sched-region-blocks
The maximum number of blocks in a region to be considered for interblock scheduling. The default value is 10.
max-sched-region-insns
The maximum number of insns in a region to be considered for interblock scheduling. The default value is 100.
max-last-value-rtl
The maximum size, measured as the number of RTLs, of an expression that the combiner can record for a pseudo register as the last known value of that register. The default is 10000.
integer-share-limit
Small integer constants can use a shared data structure, reducing the compiler's memory usage and increasing its speed. This sets the maximum value of a shared integer constant. The default value is 256.
ssp-buffer-size
The minimum size of buffers (i.e., arrays) that will receive stack smashing protection when -fstack-protector is used.
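As an illustration (array sizes and the default value of 8 are assumptions here), only buffers at least as large as the parameter receive a guard:

     /* With ssp-buffer-size at its default (assumed here to be 8),
        compiling with -fstack-protector instruments copy_big() but not
        copy_small(); lowering the parameter, e.g.
        --param ssp-buffer-size=4, would protect both.  */
     #include <string.h>
     void copy_big (const char *s)   { char buf[16]; strcpy (buf, s); }
     void copy_small (const char *s) { char buf[4];  strcpy (buf, s); }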