In compiler optimization, register allocation is the process of assigning a large number of target program variables onto a small number of CPU registers.

Register allocation can happen over a basic block (local register allocation), over a whole function/procedure (global register allocation), or across function boundaries traversed via the call graph (interprocedural register allocation). When done per function/procedure, the calling convention may require the insertion of save/restore code around each call site.

Context

Principle

In many programming languages, the programmer may use any number of variables. The computer can quickly read and write registers in the CPU, so a program runs faster when more of its variables can reside in CPU registers.[1] Code that accesses registers is also often more compact, so the code is smaller and can be fetched faster than code that accesses memory. However, the number of registers is limited in most CPUs. Therefore, when the compiler translates code to machine language, it must decide how to allocate variables to the limited number of registers in the CPU.[2][3]

 
[Figure: Number of registers in the most common architectures]

Not all variables are in use (or "live") at the same time, so, over the lifetime of a program, a given register may be used to hold different variables. However, two variables in use at the same time cannot be assigned to the same register without corrupting one of them. If there are not enough registers to hold all the variables, some variables may be moved to and from RAM. This process is called "spilling" the registers.[4] Over the lifetime of a program, a variable can be both spilled and stored in registers: such a variable is then considered "split".[5] Accessing RAM is significantly slower than accessing registers,[6] so a compiled program runs slower. Therefore, an optimizing compiler aims to assign as many variables to registers as possible. "Register pressure" is a technical term defined by Braun et al. as "the number of simultaneously live variables at an instruction";[7] high register pressure means that more spills and reloads are needed.
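As an illustration of register pressure, the following sketch (with hypothetical variable names and live intervals, not tied to any particular compiler) counts how many variables are simultaneously live at each instruction:

```python
# Hypothetical live intervals: each variable is live over a half-open
# range [start, end) of instruction indices (illustrative data only).
live_intervals = {"a": (0, 4), "b": (1, 6), "c": (2, 3), "d": (5, 8)}

def pressure_at(point, intervals):
    """Register pressure: number of variables simultaneously live at `point`."""
    return sum(1 for start, end in intervals.values() if start <= point < end)

# The peak pressure bounds how many registers are needed to avoid
# spilling altogether.
peak = max(pressure_at(p, live_intervals) for p in range(8))
print(peak)  # 3: at point 2, the variables a, b and c are all live
```

With three or more registers, this toy program needs no spills; with two, at least one of a, b, c must be moved to memory around point 2.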

In addition, some computer designs cache frequently accessed registers. So, programs can be further optimized by assigning the same register to the source and destination of a move instruction whenever possible. This is especially important when the compiler uses an intermediate representation such as static single-assignment form (SSA): when SSA is not fully optimized, it can artificially generate additional move instructions.

Challenges of register allocation

Register allocation therefore consists in choosing where to store variables at runtime, i.e., inside or outside registers. If a variable is to be stored in registers, the allocator needs to determine in which register(s) it will be stored. Finally, another challenge is to determine how long a variable should stay at the same location.

A register allocator, regardless of the chosen allocation strategy, can rely on a set of core actions to address these challenges. These actions can be gathered into several categories:[8]

Move insertion
This action consists in inserting additional move instructions between registers, i.e., making a variable live in different registers during its lifetime instead of just one. This occurs in the split live range approach.
Spilling
This action consists in storing a variable in memory instead of registers.[9]
Assignment
This action consists in assigning a register to a variable.[10]
Coalescing
This action consists in limiting the number of moves between registers, thus limiting the total number of instructions. For instance, by identifying a variable live across different methods and storing it in one register during its whole lifetime.[11]

Many register allocation approaches try to specifically optimize one or the other category of actions.

 
[Figure: Intel 386 registers]

Common problems raised in register allocation

Aliasing
In some architectures, assigning a value to one register can affect the value of another. For example, the x86 architecture has four general-purpose 32-bit registers that can also be used as 16-bit or 8-bit registers.[12]
Pre-coloring
This problem is the act of forcing some variables to be assigned to particular registers. For example, in the PowerPC architecture, parameters are commonly passed in R3–R10 and the return value is passed in R3.[13]
NP-problem
Chaitin et al. showed that register allocation is an NP-complete problem. Their work is based on the graph coloring problem, where each node represents a variable and colors represent the available machine registers. Since the interference graph can be an arbitrary graph, this proves the NP-completeness of the problem.[14]
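The aliasing problem above can be illustrated with a small model of x86's overlapping registers, in which AL occupies the low 8 bits of EAX, so writing AL also changes EAX (an illustrative sketch, not an emulator):

```python
# Illustrative model of x86 register aliasing: AL is the low byte of EAX,
# so an allocator cannot treat EAX and AL as independent registers.
def write_al(eax, value):
    """Simulate `mov al, value`: only the low 8 bits of EAX change."""
    return (eax & ~0xFF) | (value & 0xFF)

eax = 0x12345678
eax = write_al(eax, 0xAB)
print(hex(eax))  # 0x123456ab -- the write to AL clobbered part of EAX
```

An allocator that placed two unrelated variables in AL and EAX would silently corrupt one of them, which is why aliasing must be modeled explicitly.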

Register allocation techniques

Register allocation can happen over a basic block of code: it is then said to be "local", and was first mentioned by Horwitz et al.[15] As basic blocks do not contain branches, the allocation process is thought to be fast, because managing control-flow-graph merge points in register allocation turns out to be a time-consuming operation.[16] However, this approach is thought not to produce code as optimized as the "global" approach, which operates over the whole compilation unit (a method or procedure, for instance).[17]

Graph-coloring allocation

Graph-coloring allocation is the predominant approach to solving register allocation. It was first proposed by Chaitin et al.[18] In this approach, nodes in the graph represent live ranges (variables, temporaries, virtual/symbolic registers) that are candidates for register allocation. Edges connect live ranges that interfere, i.e., live ranges that are simultaneously live at at least one program point. Register allocation then reduces to the graph coloring problem, in which colors (registers) are assigned to the nodes such that two nodes connected by an edge do not receive the same color.[19]

Using liveness analysis, an interference graph can be built. The interference graph, an undirected graph whose nodes are the program's variables, is used to model which variables cannot be allocated to the same register.[20]
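A minimal sketch of this construction (hypothetical live ranges as half-open intervals; a real allocator would derive liveness per program point from data-flow analysis):

```python
from itertools import combinations

# Hypothetical live ranges, as half-open [start, end) instruction intervals.
live_ranges = {"a": (0, 4), "b": (2, 6), "c": (5, 8), "d": (1, 3)}

def overlaps(r1, r2):
    """Two half-open intervals interfere if they share a program point."""
    return r1[0] < r2[1] and r2[0] < r1[1]

# Build the undirected interference graph as an adjacency-set dictionary.
interference = {v: set() for v in live_ranges}
for u, v in combinations(live_ranges, 2):
    if overlaps(live_ranges[u], live_ranges[v]):
        interference[u].add(v)
        interference[v].add(u)

print(interference)  # "a" and "c" never overlap, so there is no edge between them
```

Variables without an edge between them, such as a and c here, may share a register.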

Principle

The main phases in a Chaitin-style graph-coloring register allocator are:[21]

 
[Figure: Chaitin et al.'s iterative graph-coloring-based register allocator]
  1. Renumber: discover live range information in the source program.
  2. Build: build the interference graph.
  3. Coalesce: merge the live ranges of non-interfering variables related by copy instructions.
  4. Spill cost: compute the spill cost of each variable. This assesses the impact of mapping a variable to memory on the speed of the final program.
  5. Simplify: construct an ordering of the nodes in the interference graph.
  6. Spill Code: insert spill instructions, i.e., loads and stores to move values between registers and memory.
  7. Select: assign a register to each variable.
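The Simplify and Select phases can be sketched as follows. This is an illustrative toy implementation of the degree-based heuristic on an adjacency-set graph, not Chaitin's actual allocator; spill handling is reduced to reporting failure:

```python
def color_graph(graph, k):
    """Sketch of Chaitin-style Simplify/Select. `graph` maps each node to
    the set of nodes it interferes with; `k` is the number of registers.
    Returns a node -> color dict, or None when simplification blocks
    (i.e., some live range would have to be spilled)."""
    work = {n: set(adj) for n, adj in graph.items()}  # mutable copy
    stack = []
    # Simplify: repeatedly remove a node with fewer than k neighbours;
    # such a node is guaranteed to be colorable later.
    while work:
        node = next((n for n in work if len(work[n]) < k), None)
        if node is None:
            return None  # all remaining nodes have degree >= k
        for neigh in work.pop(node):
            work[neigh].discard(node)
        stack.append(node)
    # Select: pop nodes in reverse removal order and pick any color
    # not used by an already-colored neighbour.
    coloring = {}
    while stack:
        node = stack.pop()
        used = {coloring[n] for n in graph[node] if n in coloring}
        coloring[node] = next(c for c in range(k) if c not in used)
    return coloring

g = {"a": {"b", "d"}, "b": {"a", "c", "d"}, "c": {"b"}, "d": {"a", "b"}}
print(color_graph(g, 3))  # three registers suffice
print(color_graph(g, 2))  # None: a, b, d form a triangle
```

When `color_graph` returns None, a real allocator would pick a spill candidate by spill cost, insert spill code, and restart the cycle, as in the iterative scheme above.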

Drawbacks and further improvements

Graph-coloring allocation has three major drawbacks. First, it relies on graph coloring, which is an NP-complete problem, to decide which variables are spilled: finding a minimal coloring of a graph is indeed NP-complete.[22] Second, unless live-range splitting is used, evicted variables are spilled everywhere: store (resp. load) instructions are inserted as early (resp. late) as possible, i.e., just after (resp. before) variable definitions (resp. uses). Third, a variable that is not spilled is kept in the same register throughout its whole lifetime.[23] Furthermore, the approach copes poorly with irregular register files: a single register name may appear in multiple register classes, where a class is a set of register names that are interchangeable in a particular role, and multiple register names may be aliases for a single hardware register.[24] Graph coloring is also an aggressive but computationally expensive technique, due to its use of the interference graph, which can have a worst-case size that is quadratic in the number of live ranges.[25] Finally, the traditional formulation of graph-coloring register allocation implicitly assumes a single bank of non-overlapping general-purpose registers and does not handle irregular architectural features like overlapping register pairs, special-purpose registers and multiple register banks.[26]

One later improvement of the Chaitin-style graph-coloring approach was found by Briggs et al.: it is called conservative coalescing. This improvement adds a criterion to decide when two live ranges can be merged: in addition to not interfering, two variables can only be coalesced if their merging will not cause further spilling. Briggs et al. introduced a second improvement to Chaitin's work: biased coloring. Biased coloring tries to assign the same color to live ranges that are copy-related.[27]

Linear scan

Linear scan is another global register allocation approach. It was first proposed by Poletto et al. in 1999.[28] In this approach, the code is not turned into a graph. Instead, all the variables are linearly scanned to determine their live ranges, each represented as an interval. Once the live ranges of all variables have been figured out, the intervals are traversed chronologically. Although this traversal could help identify variables whose live ranges interfere, no interference graph is built; instead, the variables are allocated in a greedy way.[29]

The motivation for this approach is speed, not in terms of execution time of the generated code, but in terms of time spent in code generation. Typically, the standard graph-coloring approaches produce quality code but have a significant overhead,[30][31] the graph coloring algorithm used having a quadratic cost.[32] Owing to this feature, linear scan is the approach currently in use in several JIT compilers, like the Hotspot compiler, V8 and Jikes RVM.[33]

Pseudo code

This describes the algorithm as first proposed by Poletto et al. [34]

 LinearScanRegisterAllocation
         active ← {}
         foreach live interval i, in order of increasing start point
                 ExpireOldIntervals(i)
                 if length(active) = R then
                         SpillAtInterval(i)
                 else
                         register[i] ← a register removed from pool of free registers
                         add i to active, sorted by increasing end point
 ExpireOldIntervals(i)
         foreach interval j in active, in order of increasing end point
                 if endpoint[j] ≥ startpoint[i] then
                         return
                 remove j from active
                 add register[j] to pool of free registers
 SpillAtInterval(i)
         spill ← last interval in active
         if endpoint[spill] > endpoint[i] then
                 register[i] ← register[spill]
                 location[spill] ← new stack location
                 remove spill from active
                 add i to active, sorted by increasing end point
         else
                 location[i] ← new stack location
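A direct transcription of this pseudocode into Python might look like the following sketch (intervals are hypothetical `(start, end)` pairs; spilled intervals are given numbered stack slots):

```python
def linear_scan(intervals, R):
    """Poletto-style linear scan. `intervals` maps a variable name to its
    live interval (start, end); `R` is the number of available registers.
    Returns (register, location): register assignments, plus stack slots
    for spilled intervals."""
    register, location = {}, {}
    free = list(range(R))   # pool of free registers
    active = []             # currently live, register-resident intervals

    def expire_old_intervals(i):
        # Release the registers of intervals that ended before i starts.
        for j in sorted(active, key=lambda v: intervals[v][1]):
            if intervals[j][1] >= intervals[i][0]:
                return
            active.remove(j)
            free.append(register[j])

    def spill_at_interval(i):
        spill = max(active, key=lambda v: intervals[v][1])  # ends last
        if intervals[spill][1] > intervals[i][1]:
            register[i] = register.pop(spill)  # steal spill's register
            location[spill] = len(location)    # new stack location
            active.remove(spill)
            active.append(i)
        else:
            location[i] = len(location)        # spill i itself

    for i in sorted(intervals, key=lambda v: intervals[v][0]):
        expire_old_intervals(i)
        if len(active) == R:
            spill_at_interval(i)
        else:
            register[i] = free.pop()
            active.append(i)
    return register, location

regs, slots = linear_scan({"a": (1, 10), "b": (2, 4),
                           "c": (3, 9), "d": (5, 7)}, R=2)
print(regs, slots)  # "a" is spilled; "d" reuses the register freed by "b"
```

With two registers, the long interval of a is evicted when c arrives, and d later reuses the register released when b expires, matching the expire/spill behavior of the pseudocode.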

Drawbacks and further improvements

However, linear scan presents two major drawbacks. First, due to its greedy aspect, it does not take lifetime holes into account, i.e., "ranges where the value of the variable is not needed".[35][36] Second, a spilled variable will stay spilled for its entire lifetime.

Many other research works followed up on Poletto's linear scan algorithm. Traub et al., for instance, proposed an algorithm called second-chance binpacking, aiming at generating code of better quality. In this approach, spilled variables get the opportunity to be stored later in a register, using a different heuristic from the one used in the standard linear scan algorithm: instead of using live intervals, the algorithm relies on live ranges, meaning that if one range of a variable needs to be spilled, the other ranges of this variable need not be spilled as well.

Linear scan allocation was also adapted to take advantage of the SSA form: the properties of this intermediate representation are used to make the allocation algorithm simpler.[37] First, the time spent in data-flow graph analysis, aimed at building the lifetime intervals, is reduced, because each variable is defined exactly once.[38] This consequently produces shorter live intervals, because each new assignment corresponds to a new live interval.[39][40]

Rematerialization

The problem of optimal register allocation is NP-complete. As a consequence, compilers employ heuristic techniques to approximate its solution.

Chaitin et al. discuss several ideas for improving the quality of spill code. They point out that certain values can be recomputed in a single instruction and that the required operands will always be available for the computation. They call these exceptional values "never-killed" and note that such values should be recalculated instead of being spilled and reloaded. They further note that an uncoalesced copy of a never-killed value can be eliminated by recomputing it directly into the desired register.[41]

These techniques are termed rematerialization. In practice, opportunities for rematerialization include:

  • immediate loads of integer constants and, on some machines, floating-point constants,
  • computing a constant offset from the frame pointer or the static data area, and
  • loading non-local frame pointers from a display.[41]
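The decision above can be sketched as follows (hypothetical IR and operation names, not Chaitin's or Briggs's implementation): a spiller consults the defining instruction of a value and emits a single recompute instead of a store/reload pair when the value is never-killed:

```python
# Hypothetical IR: each value records the single instruction defining it.
# Values whose defining op can be re-executed anywhere (constant loads,
# frame-pointer offsets, ...) are "never-killed".
REMATERIALIZABLE_OPS = {"load_const", "frame_offset"}

defs = {
    "v1": ("load_const", 42),      # v1 = 42, recomputable anywhere
    "v2": ("add", "v1", "v3"),     # depends on register contents
}

def spill_code_for(value):
    """Emit a single recompute for never-killed values, otherwise a
    classic store/reload pair through memory."""
    if defs[value][0] in REMATERIALIZABLE_OPS:
        return [f"rematerialize {value}"]
    return [f"store {value}", f"reload {value}"]

print(spill_code_for("v1"))  # ['rematerialize v1']
print(spill_code_for("v2"))  # ['store v2', 'reload v2']
```

The rematerialized value costs one instruction and no memory traffic, which is why never-killed values are recomputed rather than spilled.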

Briggs et al. extend Chaitin's work to take advantage of rematerialization opportunities for complex, multi-valued live ranges. They found that each value needs to be tagged with enough information to allow the allocator to handle it correctly. Briggs's approach is the following: first, split each live range into its component values, then propagate rematerialization tags to each value, and form new live ranges from connected values having identical tags.[41]

Coalescing

In the context of register allocation, coalescing is the act of eliminating variable-to-variable move operations by allocating those two variables to the same location. The coalescing operation takes place after the interference graph is built. Once two nodes have been coalesced, they must get the same color and be allocated to the same register, and the copy operation becomes unnecessary.[42]

Coalescing can have both positive and negative impacts on the colorability of the interference graph.[43] A negative impact arises because, when two nodes are coalesced, the resulting node has the union of the edges of the nodes being coalesced, which can increase its degree.[44] A positive impact arises, for example, when a node interferes with both of the nodes being coalesced: the degree of that node is then reduced by one, which improves the overall colorability of the interference graph.[45]

There are several coalescing heuristics available:[46]

Aggressive coalescing
This was first introduced in Chaitin's original register allocator. This heuristic aims at coalescing any non-interfering, copy-related nodes.[47] From the perspective of copy elimination, this heuristic has the best results.[48] On the other hand, aggressive coalescing could harm the colorability of the interference graph.[49]
Conservative coalescing
This mainly uses the same heuristic as aggressive coalescing, but it merges moves if, and only if, doing so does not compromise the colorability of the interference graph.[50]
Iterated coalescing
This removes one particular move at a time, while keeping the colorability of the graph.[51]
Optimistic coalescing
This is based on aggressive coalescing, but if the colorability of the interference graph is compromised, it gives up as few moves as possible.[52]
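The conservative criterion can be sketched with the well-known Briggs test: coalescing is allowed only if the merged node would have fewer than k neighbors of significant degree (degree ≥ k), so the merged node is still guaranteed to simplify (toy adjacency-set graph, illustrative only):

```python
def briggs_can_coalesce(graph, u, v, k):
    """Briggs's conservative test on an adjacency-set interference graph:
    coalescing u and v is safe if the merged node would have fewer than
    k neighbours of significant degree (degree >= k), because the merged
    node can then still be simplified during coloring."""
    merged_neighbours = (graph[u] | graph[v]) - {u, v}
    significant = sum(1 for n in merged_neighbours if len(graph[n]) >= k)
    return significant < k

# Toy graph: u and v are copy-related and do not interfere with each other.
g = {"u": {"a", "b"}, "v": {"b", "c"},
     "a": {"u"}, "b": {"u", "v"}, "c": {"v"}}

print(briggs_can_coalesce(g, "u", "v", 2))  # True: only b is significant
print(briggs_can_coalesce(g, "u", "v", 1))  # False: coalescing is refused
```

When the test fails, a conservative allocator simply keeps the copy instruction rather than risking an extra spill.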

Mixed approaches

Hybrid allocation

Some other register allocation approaches are not limited to one technique to optimize register use. Cavazos et al., for instance, proposed a solution where it is possible to use both the linear scan and the graph coloring algorithms.[53] In this approach, the choice between one or the other solution is determined dynamically: first, a machine learning algorithm is used "offline", that is to say not at runtime, to build a heuristic function that determines which allocation algorithm should be used. The heuristic function is then used at runtime; in light of the code behavior, the allocator can then choose between the two available algorithms.[54]

Trace register allocation is a recent approach developed by Eisl et al.[55][56] This technique handles the allocation locally: it relies on dynamic profiling data to determine which branches will be the most frequently used in a given control-flow graph. It then infers a set of "traces" (i.e., code segments) in which the merge point is ignored in favor of the most used branch. Each trace is then independently processed by the allocator. This approach can be considered hybrid because it is possible to use different register allocation algorithms for the different traces.[57]

Split allocation

Split allocation is another register allocation technique that combines different approaches, usually considered opposite. For instance, the hybrid allocation technique can be considered split because the first, heuristic-building stage is performed offline while the heuristic is used online.[58] In the same fashion, B. Diouf et al. proposed an allocation technique relying on both offline and online behaviors, namely static and dynamic compilation.[59][60] During the offline stage, a spill set is gathered and annotated. Register allocation is performed afterwards, during the online stage, based on the data collected in the offline phase.[61]

In 2007, Bouchez et al. also suggested splitting register allocation into different stages, with one stage dedicated to spilling and one dedicated to coloring and coalescing.[62]

Comparison between the different techniques

Several metrics have been used to assess the performance of one register allocation technique against another. Register allocation typically has to deal with a trade-off between code quality, i.e. code that executes quickly, and analysis overhead, i.e. the time spent analyzing the source code to generate code with optimized register allocation. From this perspective, execution time of the generated code and time spent in liveness analysis are relevant metrics for comparing the different techniques.[63]

Once relevant metrics have been chosen, the code on which the metrics will be applied should be available and relevant to the problem, either by reflecting the behavior of real-world applications or by being relevant to the particular problem the algorithm wants to address. More recent articles about register allocation often use the DaCapo benchmark suite.[64]


References

  1. ^ Ditzel 1982, p. 48
  2. ^ Runeson 2003, p. 242
  3. ^ Eisl 2016, p. 14:1
  4. ^ Chaitin 1981, p. 47
  5. ^ Eisl 2016, p. 1
  6. ^ "Latency Comparison Numbers in computer/network". blog.morizyun.com. Retrieved 8 January 2019.
  7. ^ Braun 2009, p. 174
  8. ^ Koes 2009, p. 21
  9. ^ Bouchez 2007, p. 103
  10. ^ Colombet 2011, p. 26
  11. ^ Bouchez 2007, p. 103
  12. ^ Intel® 64 and IA-32 Architectures Software Developer's Manual, Section 3.4.1, https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf
  13. ^ 32-bit PowerPC function calling conventions, https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/LowLevelABI/100-32-bit_PowerPC_Function_Calling_Conventions/32bitPowerPC.html
  14. ^ Bouchez 2006, p. 4
  15. ^ Horwitz 1966, p. 43
  16. ^ Farach 1998, p. 566
  17. ^ Eisl 2017, p. 92
  18. ^ Chaitin 1981, p. 47
  19. ^ Poletto 1999, p. 896
  20. ^ Runeson 2003, p. 241
  21. ^ Briggs 1992, p. 316
  22. ^ Book 1972, p. 618-619
  23. ^ Colombet 2011, p. 1
  24. ^ Smith 2004, p. 277
  25. ^ Cavazos 2006, p. 124
  26. ^ Runeson 2003, p. 240
  27. ^ Briggs 1992, p. 316
  28. ^ Poletto 1999, p. 895
  29. ^ Cavazos 2006, p. 124
  30. ^ Poletto 1999, p. 902
  31. ^ Wimmer 2005, p. 132
  32. ^ Johansson 2002, p. 102
  33. ^ Eisl 2016, p. 1
  34. ^ Poletto 1999, p. 899
  35. ^ Eisl 2016, p. 2
  36. ^ Traub 1998, p. 143
  37. ^ Wimmer 2010, p. 170
  38. ^ Mössenböck 2002, p. 234
  39. ^ Mössenböck 2002, p. 233
  40. ^ Mössenböck 2002, p. 229
  41. ^ a b c Briggs et al., 1992, p.313.
  42. ^ Chaitin 1982, p. 90
  43. ^ BouchezDarte 2007, p. 103
  44. ^ BouchezDarte 2007, p. 103
  45. ^ Ahn 2009, p. 7
  46. ^ Park2004 2004, p. 736
  47. ^ Chaitin 1982, p. 99
  48. ^ Park2004 2004, p. 738
  49. ^ Ahn 2009, p. 7
  50. ^ Briggs et al. 1994, p. 433
  51. ^ George et al. 1996, p. 212
  52. ^ Park2004 2004, p. 741
  53. ^ Eisl 2007, p. 11
  54. ^ Cavazos 2006, p. 124-127
  55. ^ Eisl 2016, p. 14:1
  56. ^ Eisl 2016, p. 1
  57. ^ Eisl 2016, p. 4
  58. ^ Cavazos 2006, p. 124
  59. ^ Diouf 2010, p. 66
  60. ^ Cohen 2010, p. 1
  61. ^ Diouf 2010, p. 72
  62. ^ Bouchez 2007, p. 1
  63. ^ Poletto 1999, p. 901-910
  64. ^ Blackburn 2006, p. 169


Sources

  • Bouchez, Florent; Darte, Alain; Rastello, Fabrice (2006). "Register allocation: what does the NP-Completeness proof of Chaitin et al. really prove?". Citeseerx: 2–14. doi:10.1007/978-3-540-72521-3_21.
  • Chaitin, Gregory J.; Auslander, Marc A.; Chandra, Ashok K.; Cocke, John; Hopkins, Martin E.; Markstein, Peter W. (1981). "Register allocation via coloring". Computer Languages. 6 (1): 47–57. doi:10.1016/0096-0551(81)90048-5. ISSN 0096-0551.
  • Runeson, Johan; Nyström, Sven-Olof (2003). Krall, Andreas (ed.). Retargetable Graph-Coloring Register Allocation for Irregular Architectures. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 240–254. doi:10.1007/978-3-540-39920-9_17. ISBN 978-3-540-39920-9.
  • Aho, Alfred V.; Lam, Monica S.; Sethi, Ravi; Ullman, Jeffrey D. (2006). Compilers: Principles, Techniques, and Tools (2Nd ed.). Addison-Wesley Longman Publishing Co., Inc. ISBN 0321486811.
  • Book, Ronald V. (2014). "Karp Richard M.. Reducibility among combinatorial problems. Complexity of computer computations, Proceedings of a Symposium on the Complexity of Computer Computations, held March 20-22, 1972, at the IBM Thomas J. Watson Center, Yorktown Heights, New York, edited by Miller Raymond E. and Thatcher James W., Plenum Press, New York and London 1972, pp. 85–103". The Journal of Symbolic Logic. 40 (4): 618–619. doi:10.2307/2271828. ISSN 0022-4812. JSTOR 2271828.
  • Colombet, Quentin; Brandner, Florian; Darte, Alain (2011). "Studying optimal spilling in the light of SSA". Proceedings of the 14th international conference on Compilers, architectures and synthesis for embedded systems - CASES '11. p. 25. doi:10.1145/2038698.2038706. ISBN 9781450307130. S2CID 8296742.
  • Farach, Martin; Liberatore, Vincenzo (1998). "On Local Register Allocation". Written at San Francisco, California, USA. Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '98. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. pp. 564–573. ISBN 0-89871-410-9.
  • Cavazos, John; Moss, J. Eliot B.; O’Boyle, Michael F. P. (2006). "Hybrid Optimizations: Which Optimization Algorithm to Use?". Compiler Construction. Lecture Notes in Computer Science. Vol. 3923. pp. 124–138. doi:10.1007/11688839_12. ISBN 978-3-540-33050-9. ISSN 0302-9743.
  • Briggs, Preston; Cooper, Keith D.; Torczon, Linda (1994). "Improvements to graph coloring register allocation". ACM Transactions on Programming Languages and Systems. 16 (3): 428–455. doi:10.1145/177492.177575. ISSN 0164-0925. S2CID 6571479.
  • Appel, Andrew W.; George, Lal (2001). "Optimal spilling for CISC machines with few registers". Proceedings of the ACM SIGPLAN 2001 conference on Programming language design and implementation - PLDI '01. pp. 243–253. doi:10.1145/378795.378854. ISBN 1581134142. S2CID 1380545.
  • Diouf, Boubacar; Cohen, Albert; Rastello, Fabrice; Cavazos, John (2010). "Split Register Allocation: Linear Complexity Without the Performance Penalty". High Performance Embedded Architectures and Compilers. Lecture Notes in Computer Science. Vol. 5952. pp. 66–80. doi:10.1007/978-3-642-11515-8_7. ISBN 978-3-642-11514-1. ISSN 0302-9743.
  • Barik, Rajkishore; Grothoff, Christian; Gupta, Rahul; Pandit, Vinayaka; Udupa, Raghavendra (2007). "Optimal Bitwise Register Allocation Using Integer Linear Programming". Languages and Compilers for Parallel Computing. Lecture Notes in Computer Science. Vol. 4382. pp. 267–282. doi:10.1007/978-3-540-72521-3_20. ISBN 978-3-540-72520-6.
  • Smith, Michael D.; Ramsey, Norman; Holloway, Glenn (2004). "A generalized algorithm for graph-coloring register allocation". ACM SIGPLAN Notices. 39 (6): 277. doi:10.1145/996893.996875. ISSN 0362-1340.
  • Ditzel, David R.; McLellan, H. R. (1982). "Register allocation for free". Proceedings of the first international symposium on Architectural support for programming languages and operating systems - ASPLOS-I. pp. 48–56. doi:10.1145/800050.801825. ISBN 0897910664. S2CID 2812379.
  • Wimmer, Christian; Mössenböck, Hanspeter (2005). "Optimized interval splitting in a linear scan register allocator". Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments - VEE '05. p. 132. doi:10.1145/1064979.1064998. ISBN 1595930477. S2CID 494490.
  • Bergner, Peter; Dahl, Peter; Engebretsen, David; O'Keefe, Matthew (1997). "Spill code minimization via interference region spilling". Proceedings of the ACM SIGPLAN 1997 conference on Programming language design and implementation - PLDI '97. pp. 287–295. doi:10.1145/258915.258941. ISBN 0897919076. S2CID 16952747.
  • Nickerson, Brian R. (1990). "Graph coloring register allocation for processors with multi-register operands". ACM SIGPLAN Notices. 25 (6): 40–52. doi:10.1145/93548.93552. ISSN 0362-1340.
  • Chen, Wei-Yu; Lueh, Guei-Yuan; Ashar, Pratik; Chen, Kaiyu; Cheng, Buqi (2018). "Register allocation for Intel processor graphics". Proceedings of the 2018 International Symposium on Code Generation and Optimization. pp. 352–364. doi:10.1145/3168806. ISBN 9781450356176. S2CID 3367270.
  • Blackburn, Stephen M.; Guyer, Samuel Z.; Hirzel, Martin; Hosking, Antony; Jump, Maria; Lee, Han; Eliot, J.; Moss, B.; Phansalkar, Aashish; Stefanović, Darko; VanDrunen, Thomas; Garner, Robin; von Dincklage, Daniel; Wiedermann, Ben; Hoffmann, Chris; Khang, Asjad M.; McKinley, Kathryn S.; Bentzur, Rotem; Diwan, Amer; Feinberg, Daniel; Frampton, Daniel (2006). "The DaCapo benchmarks". Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications - OOPSLA '06. p. 169. doi:10.1145/1167473.1167488. hdl:1885/33723. ISBN 1595933484. S2CID 9255051.