Complete Removal of Redundant Expressions

Rastislav Bodik    Rajiv Gupta    Mary Lou Soffa

Dept. of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260
{bodik,gupta,soffa}@cs.pitt.edu

Abstract


Partial redundancy elimination (PRE), the most important component of global optimizers, generalizes the removal of common subexpressions and loop-invariant computations. Because existing PRE implementations are based on code motion, they fail to remove the redundancies completely. In fact, we observed that 73% of loop-invariant statements cannot be eliminated from loops by code motion alone. In dynamic terms, traditional PRE eliminates only half of the redundancies that are strictly partial. To achieve a complete PRE, control flow restructuring must be applied. However, the resulting code duplication may cause code size explosion.

Partial redundancy elimination (PRE) is a widely used and effective optimization aimed at removing program statements that are redundant because they recompute previously produced values [26]. PRE is attractive because, by targeting statements that are redundant only along some execution paths, it subsumes and generalizes two important value-reuse techniques: global common subexpression elimination and loop-invariant code motion. Consequently, PRE serves as a unified value-reuse optimizer. Most PRE algorithms employ code motion [11, 12, 14, 15, 16, 17, 24, 26], a program transformation that reorders instructions without changing the shape of the control flow graph. Unfortunately, code motion alone fails to remove routine redundancies. In practice, one half of computations that are strictly partially redundant (not redundant along some paths) are left unoptimized due to code-motion obstacles. In theory, even the optimal code-motion algorithm [24] breaks down on loop invariants in while-loops, unless supported by explicit do-until conversion. Recently, Steffen demonstrated that control flow restructuring can remove all redundant computations from the program, including conditional branches [30]. While his property-oriented expansion algorithm (Poe) is complete, it causes unnecessary code duplication. As the first step towards a complete PRE with affordable code growth, this paper presents a new PRE algorithm based on the integration of code motion and control flow restructuring, which allows a complete removal of redundant expressions while minimizing code duplication. No prior work systematically treated combining the two transformations. We control code duplication by restricting its scope to a code-motion-preventing (CMP) region, which localizes the adverse effects of control flow on the desired value reuse.
Whereas the Poe algorithm applied to expression elimination (denoted PoePRE) uses restructuring to carry out the entire transformation, we apply the more economical code-motion transformation to its full extent, resorting to restructuring merely to enable the necessary code motion. The resulting code growth is provably no greater than that of PoePRE; on SPEC95, we found it to be three times smaller. Second, to answer the overriding question of how complete a feasible PRE algorithm is allowed to be, we move from theory to practice by considering profile information. Using the dynamic amount of eliminated computations as the measure of optimization benefit, we develop a profile-guided PRE algorithm that limits the code growth cost

This paper focuses on achieving a complete PRE while incurring an acceptable code growth. First, we present an algorithm for complete removal of partial redundancies, based on the integration of code motion and control flow restructuring. In contrast to existing complete techniques, we resort to restructuring merely to remove obstacles to code motion, rather than to carry out the actual optimization. Guiding the optimization with a profile enables additional code growth reduction through selecting those duplications whose cost is justified by sufficient execution-time gains. The paper develops two methods for determining the optimization benefit of restructuring a program region, one based on path profiles and the other on data-flow frequency analysis. Furthermore, the abstraction underlying the new PRE algorithm enables a simple formulation of speculative code motion guaranteed to have positive dynamic improvements. Finally, we show how to balance the three transformations (code motion, restructuring, and speculation) to achieve a near-complete PRE with very little code growth. We also present algorithms for efficiently computing dynamic benefits. In particular, using an elimination-style data-flow framework, we derive a demand-driven frequency analyzer whose cost can be controlled by permitting a bounded degree of conservative imprecision in the solution.

Keywords: partial redundancy elimination, control flow restructuring, speculative execution, demand-driven frequency data-flow analysis, profile-guided optimization.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SIGPLAN '98 Montreal, Canada © 1998 ACM 0-89791-987-4/98/0006...$5.00

1 Introduction

[Figure 1: Complete PRE through integration of code motion and control flow restructuring. (a) source program; (b) PoePRE of [a+b]; (c) our optimization of [a+b]; (d) our optimization of [c+d]; (e) tradeoff variant of (d). Annotations mark the nodes duplicated to make [a+b] fully redundant, to allow code motion of [a+b], for complete optimization of [c+d], and for partial optimization of [c+d].]

Besides PRE, the demand-driven approximate frequency analysis is applicable in interprocedural branch correlation analysis [10] and dynamic optimizations [5].

Let us illustrate our PRE algorithms on the loop in Figure 1(a). Assume no statement in the loop defines variables a, b, c, or d. Although the computations [a+b] and [c+d] are loop-invariant, removing them from the loop with code motion is not possible. Consider first the optimization of [a+b]. This computation cannot be moved out of the loop because it would then be executed on the path En, O, P, Ex, which does not execute [a+b] in the original program. Because this could slow down the program and create spurious exceptions, PRE disallows such unsafe code motion [24]. The desired optimization is only possible if the CFG is restructured. The PoePRE algorithm [30] would produce the program in Figure 1(b), which was created by duplicating each node on which the value of [a+b] was available only on a subset of incoming paths. While [a+b] is fully optimized, the scope of restructuring is unnecessarily large. Our complete optimization (ComPRE) produces the program in Figure 1(c), where code duplication is applied merely to enable the necessary code motion. In this example, to move [a+b] out of the loop, it is sufficient to separate out the offending path En, O, P, Ex, which is encapsulated in the CMP region highlighted in the figure. As no opportunities for value reuse remain, the resulting optimization of [a+b] is complete. Because restructuring may generate irreducible programs, as in Figure 1(c), we also present a restructuring transformation that maintains reducibility. Hoisting the loop invariant [a+b] out of the loop was prevented by the shape of control flow. Our experiments show that the problem of removing loop invariant code (LI)

by sacrificing those value-reuse opportunities that are infrequent but require significant duplication. Third, we describe how and when speculative code motion can be used instead of restructuring, and how to guarantee that speculative PRE is profitable. Finally, we demonstrate that a near-complete PRE with very little code growth can be achieved by integrating the three PRE transformations: pure code motion, restructuring, and speculative code motion. All algorithms in this paper rely in a specific way on the notion of the CMP region, which is used to reduce both code duplication and the program analysis cost. Thus, we make the PRE optimization more usable not only by increasing its effectiveness (power) through cost-sensitive restructuring, but also by improving its efficiency (implementation). We develop compile-time techniques for determining the impact of restructuring a program region on the dynamic amount of eliminated computations. The run-time benefit corresponds to the cumulative execution frequency of the control flow paths that will permit value reuse after the restructuring. We describe how this benefit can be obtained using edge profiles, path profiles [7], or data-flow frequency analysis [27]. As another contribution, we reduce the cost of frequency analysis by presenting a frequency analyzer derived from a new demand-driven data-flow analysis framework. Based on interval analysis, the framework enables the formulation of analyzers whose time complexity is independent of the lattice size. This is a requirement of frequency analysis, whose lattice is of infinite height. Due to this requirement, existing demand frameworks are unable to produce a frequency analyzer [18, 22, 29]. Furthermore, we introduce the notion of approximate data-flow frequency information, which conservatively underestimates the meet-over-all-paths solution, keeping the imprecision within a given degree. Approximation permits the analyzer to avoid exploring program paths that are guaranteed to contribute insignificantly (frequency-wise) to the overall solution.

has not been sufficiently solved: a complete LI is prevented for 73% of loop-invariant expressions. In some cases, a simple transformation may help. For example, [a+b] (but not [c+d]) can be optimized by peeling off one loop iteration and performing the traditional LI [1], producing the program in Figure 1(b). In while-loops, LI can often be enabled with the more economical do-until conversion. The example presented does not allow this transformation because the loop exit does not post-dominate the loop entry. In effect, our restructuring PRE is always able to perform the smallest necessary do-until conversion for an arbitrary loop.

Next, we optimize the computation [c+d] in Figure 1(c). Our optimization performs a complete PRE of [c+d] by duplicating the shaded CMP region and subsequently performing the code motion (Figure 1(d)). The resulting program may cause too much code growth, depending on the sizes of the duplicated basic blocks. Assume the size of block S outweighs the run-time gains of eliminating the upper [c+d]. In such a case, we select a smaller set of nodes to duplicate, as shown in Figure 1(e). When only block Q is duplicated, the optimization is no longer complete; however, the optimization cost measured as code growth is justified by the corresponding run-time gain. In Section 3.2, speculative code motion is used to further reduce code duplication.

In summary, this paper makes the following contributions:

- We present an approach for integrating two widely used code transformation techniques, code motion and code restructuring. The result is an algorithm for PRE that is complete (i.e., it exploits all opportunities for value reuse) and minimizes the code growth necessary to achieve the code motion.

- We show that restricting the algorithm to code motion produces the traditional code-motion PRE [17, 24].

- Profile-guided techniques for limiting the code growth through integration of selective duplication and speculative code motion are developed.

- We develop a demand-driven frequency analyzer based on a new elimination data-flow analysis framework.

- The notion of approximate data-flow information is defined and used to improve analyzer efficiency.

- Our experiments compare the power of code-motion PRE, speculative PRE, and complete PRE.

Section 2 presents the complete PRE algorithm. Section 3 describes profile-guided versions of the algorithm and Section 4 presents the experiments. Section 5 develops the demand-driven frequency analyzer. The paper concludes with a discussion of related work.

2 Complete PRE

In this section, we develop an algorithm for complete removal of partial redundancies (ComPRE) based on the integration of code motion and control flow restructuring. Code motion is the primary transformation behind ComPRE. To reduce code growth, restructuring is used only to enable hoisting through regions that prevent the necessary code motion. The smallest set of motion-blocking nodes is identified by solving the problems of availability and anticipability on an expressive lattice. We also show that when control flow restructuring is disabled, ComPRE becomes equivalent to the optimal code-motion PRE algorithm [24].

An expression is partially redundant if its value is computed on some incoming control flow path by a previous expression. Code-motion PRE eliminates the redundancy by hoisting the redundant computation along all paths until it reaches an edge where the reused value is available along either all paths or no paths. In the former case, the computation is removed; in the latter, it is inserted to make the original computation fully redundant. Unfortunately, code motion may be blocked before such edges are reached. Nodes that prevent the desired code motion are characterized by the following set of conditions:

1. Hoisting of expression e across node n is necessary when
   a) an optimization candidate follows n: there is a computation of e downstream from n on some path, and
   b) there is a value-reuse opportunity for e at node n: a computation of e precedes n on some path.

2. Hoisting of e across n is disabled when
   c) some path going through n does not compute e in the source program: such a path would be impaired by the inserted computation of e.

All three conditions are characterizable via solutions to the data-flow problems of anticipability and availability, which are defined as follows.

Definition 1. Let p be any path from the start node to a node n. The expression e is available at n along p iff e is computed on p without subsequent redefinition of its operands. Let r be any path from n to the end node. The expression e is anticipated at n along r iff e is computed on r before any of its operands are defined. The availability of e at the entry of n w.r.t. the incoming paths is defined as

   AVAIL_in[n, e] = Must  if e is available along all paths,
                    No    if e is available along no paths,
                    May   if e is available along some (but not all) paths.

Anticipability (ANTIC) is defined analogously.

Given this refined value-reuse definition, code motion is necessary when a) and b) defined above hold simultaneously. Hence,

   Necessary[n, e] = ANTIC_in[n, e] ≠ No ∧ AVAIL_in[n, e] ≠ No.

Code motion is disabled when condition c) holds:

   Disabled[n, e] = ANTIC_in[n, e] ≠ Must ∧ AVAIL_in[n, e] ≠ Must.

A node n prevents the necessary code motion for e when the motion is necessary but disabled at the same time. By way of conjunction, we get the code-motion-preventing condition:

   Prevented[n, e] = Necessary[n, e] ∧ Disabled[n, e]
                   = (ANTIC_in[n, e] = May) ∧ (AVAIL_in[n, e] = May).

The predicate Prevented characterizes the smallest set of nodes that must be removed for code motion to be enabled.
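As a quick sanity check, the three predicates can be encoded directly over the three solution values; the following Python sketch (ours, not part of the paper) verifies that their conjunction collapses to the both-are-May condition derived above.

```python
# Sketch (ours): the Necessary/Disabled/Prevented predicates over the
# Must/No/May solution values defined in the text.
MUST, NO, MAY = "Must", "No", "May"

def necessary(antic_in, avail_in):
    # conditions a) and b): hoisting is needed and a reuse opportunity exists
    return antic_in != NO and avail_in != NO

def disabled(antic_in, avail_in):
    # condition c): some path through the node would be impaired
    return antic_in != MUST and avail_in != MUST

def prevented(antic_in, avail_in):
    return necessary(antic_in, avail_in) and disabled(antic_in, avail_in)

# the conjunction collapses to "both solutions are exactly May"
for a in (MUST, NO, MAY):
    for v in (MUST, NO, MAY):
        assert prevented(a, v) == (a == MAY and v == MAY)
```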

Definition 2. The Code-Motion-Preventing (CMP) region for e, denoted CMP[e], is the set of nodes that prevent hoisting of a computation e: CMP[e] = {n | ANTIC_in[n, e] = May ∧ AVAIL_in[n, e] = May}.

[Figure 2: Removing obstacles to code motion via restructuring. (a) code motion prevented by the CMP region; (b) CMP region diluted via code duplication; (c) code motion becomes possible. The legend distinguishes AVAIL = Must vs. AVAIL = No and ANTIC = Must vs. ANTIC = No nodes.]

To enable code motion, ComPRE removes obstacles presented by the CMP region by duplicating the entire region, as illustrated in Figure 2. The central idea is to factor the May-availability that holds in the entire region into Must- and No-availability, holding respectively in each region copy. An alternative view is that we separate within the region the paths with Must- and No-availability. To achieve this, we observe that a) no region entry edge is May-available, and b) the solution of availability within the region depends solely on the solutions at the entry edges (the expression is neither computed nor killed within the region). Hence, the desired factoring can be carried out by attaching to each region copy the subset of either Must or No entry edges, as shown in Figure 2(c). After the CMP is duplicated, the condition Prevented is false on each node, enabling code motion. The ComPRE algorithm, shown in Figure 3, has the following three steps:

1. Compute anticipability and availability. The problems use the lattice L = ({T, Must, No, May}, ∧). Note that the flow functions are distributive under the meet operator ∧, which is defined using the partial order ⊑ shown below. The distributivity property implies that data-flow facts are not approximated at control flow merge points. Intuitively, this is because L is the powerset lattice of {No, Must}, which are the only facts that may hold along an individual path.

   The partial order ⊑:   May ⊑ Must ⊑ T   and   May ⊑ No ⊑ T.

2. Remove CMP regions via control flow restructuring. Given an expression e, the CMP region is identified by examining the data-flow solutions locally at each node. Line 2 in Figure 3 duplicates each CMP node and line 3 adjusts the control flow edges, so that the new copy of the region hosts the Must solution. Restructuring necessitates updating the data-flow solutions within the CMP region (lines 4-12). While the ANTIC solution is not altered, the previously computed AVAIL solution is invalidated because some paths flowing into the region were eliminated when region entry edges were disconnected. For the expression e, AVAIL becomes either Must or No in the entire region. For other expressions, the solution may become (conservatively) imprecise. In other words, splitting a May path into Must/No paths for e might have also split a May path for some other expression. Therefore, line 6 resets the initial guess and lines 10-12 recompute the solution within the CMP.

3. Optimize the program. The code motion transformation is carried out by replacing each original computation e with a temporary variable t_e. The temporary is initialized with a computation inserted into each No-available edge that sinks either into a May/Must-availability path or into an original computation. The insertion edge must also be Must-anticipated, to verify hoisting of the original computation to the edge.

Theorem 1 (Completeness). ComPRE is optimal in that it minimizes the number of computations on each path.

Proof. First, each original computation is replaced with a temporary. Second, no computation is inserted where its value is available along any incoming path. Hence, no additional computations can be removed. □

Within the domain of the Morel and Renvoise code-motion transformation, where PRE is accomplished by hoisting optimization candidates (but not other statements) [26], ComPRE achieves minimum code growth.¹ This follows from the fact that after CMP restructuring, no program node can be removed or merged with some other node without destroying any value reuse, as shown by the following observations. Prior to Step 2, each node n may belong to the CMP regions of multiple offending expressions. Duplication of n during restructuring can be viewed as a partitioning of the control flow paths going through n: each resulting copy of n is a path partition that does not contain both a Must- and a No-available path, for any offending expression. The
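The rewiring of entry, internal, and exit edges performed in Step 2 can be sketched on a toy CFG representation (the node ids, dict encoding, and helper name below are our own assumptions, not the paper's data structures):

```python
# Sketch (ours, simplified): duplicate a CMP region so that the new copy
# hosts the Must-availability paths and the original hosts the No paths.
def duplicate_cmp(nodes, edges, cmp_nodes, avail_out):
    """nodes: set of node ids; edges: set of (u, v) pairs;
    cmp_nodes: nodes with Prevented == True for the expression e;
    avail_out: dict node -> 'Must' | 'No' (AVAIL_out for e at entry preds)."""
    copy = {n: f"{n}_Must" for n in cmp_nodes}
    new_nodes = set(nodes) | set(copy.values())
    new_edges = set()
    for (u, v) in edges:
        if u in cmp_nodes and v in cmp_nodes:
            # internal edge: present in both region copies
            new_edges.add((u, v))
            new_edges.add((copy[u], copy[v]))
        elif v in cmp_nodes:
            # region entry edge: route Must-available paths into the copy
            target = copy[v] if avail_out.get(u) == "Must" else v
            new_edges.add((u, target))
        elif u in cmp_nodes:
            # region exit edge: both copies exit to the same successor
            new_edges.add((u, v))
            new_edges.add((copy[u], v))
        else:
            new_edges.add((u, v))
    return new_nodes, new_edges
```

On a one-node region R with a Must entry from A and a No entry from B, this yields the two-copy shape of Figure 2(c): A feeds R_Must, B feeds R, and both copies share R's exit.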

following properties of Step 2 can be verified: 1) the number of path partitions (node copies) created at a given node is independent of the order in which expressions are considered (in line 1), 2) each node copy is reachable from the start node, and 3) for any two copies of n there is an expression e such that remerging the two copies and their incoming paths would prevent code motion of e across the resulting node.

To compare ComPRE with a restructuring-only PRE, we consider PoePRE, a version of Steffen's complete algorithm [30] that includes minimization of duplicated nodes but is restricted in that only expressions are eliminated (as is the case in ComPRE). Elimination is carried out using a temporary, as in Step 3.

Theorem 2. ComPRE does not create more new nodes than PoePRE.

Proof outline. The proof is based on showing that the PoePRE-optimized program after minimization has no fewer nodes than the same program after CMP restructuring. It can be shown that, given an original node n, for any two copies of n created by CMP restructuring, there are two distinct copies of n created by PoePRE such that the minimization cannot merge them without destroying some value reuse opportunity. In fact, PoePRE can be expressed as a form of ComPRE on a (non-strictly) larger region: for each computation e, PoePRE duplicates {n | ANTIC_in[n, e] ∈ {Must, May} ∧ AVAIL_in[n, e] = May}, which is a superset of CMP[e].

Algorithm complexity. Data-flow analysis in Step 1 and in lines 10-12 requires O(NS) steps, where N is the flow graph size and S the number of expressions. The restructuring in Step 2, however, may cause N to grow exponentially, as each node may need to be split for multiple expressions. Because in practice a constant-factor code-growth budget is likely to be defined, the asymptotic program size will not change. Therefore, the running time of Step 2, which dominates the entire algorithm, is O(NS²).

¹Outside this domain, further code-growth reduction is possible by moving instructions out of the CMP before its duplication.

Figure 3, Step 1: Data-flow analysis: anticipability, availability.

Input: control flow graph G = (N, E, start, end); each node contains a single assignment x := e, with local properties
   Comp(n, e): node n computes an expression e,
   Transp(n, e): node n does not assign any variable in e.
Boundary conditions: for each expression e, ANTIC_out[end, e] := AVAIL_in[start, e] := No.
Initial guess: set all vectors to T^S, where S is the number of candidate expressions. Solve iteratively:

   ANTIC_in[n, e]  := Must             if Comp(n, e),
                      ANTIC_out[n, e]  if ¬Comp(n, e) ∧ Transp(n, e),
                      No               otherwise.
   ANTIC_out[n, e] := ∧_{m ∈ succ(n)} ANTIC_in[m, e]
   AVAIL_in[n, e]  := ∧_{m ∈ pred(n)} AVAIL_out[m, e]
   AVAIL_out[n, e] := f_e(AVAIL_in[n, e]),  where
   f_e(x) = Must   if Comp(n, e) ∧ Transp(n, e),
            No     if ¬Transp(n, e),
            x      otherwise.
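Step 1's equations can be prototyped with a round-robin fixed-point iteration; the meet below realizes the powerset-of-{Must, No} view of the lattice. The function names and the tiny diamond CFG used to exercise it are our own illustration, not the paper's implementation.

```python
# Sketch (ours): iterative AVAIL analysis on the {T, Must, No, May} lattice
# for a single expression; meet is set union in the powerset of {Must, No}.
T, MUST, NO, MAY = "T", "Must", "No", "May"
AS_SET = {T: frozenset(), MUST: frozenset({MUST}),
          NO: frozenset({NO}), MAY: frozenset({MUST, NO})}
FROM_SET = {v: k for k, v in AS_SET.items()}

def meet(x, y):
    return FROM_SET[AS_SET[x] | AS_SET[y]]

def avail(nodes, preds, comp, transp, start):
    """comp[n]/transp[n]: local Comp/Transp properties for the expression."""
    ain = {n: T for n in nodes}
    aout = {n: T for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == start:
                new_in = NO                    # boundary condition
            else:
                new_in = T
                for p in preds.get(n, []):
                    new_in = meet(new_in, aout[p])
            if comp[n] and transp[n]:
                new_out = MUST                 # f_e: generated locally
            elif not transp[n]:
                new_out = NO                   # f_e: killed locally
            else:
                new_out = new_in               # f_e: transparent
            if (new_in, new_out) != (ain[n], aout[n]):
                ain[n], aout[n] = new_in, new_out
                changed = True
    return ain, aout
```

On a diamond where only one branch computes the expression, the merge node's entry solution comes out May, which is exactly the situation the Prevented predicate flags when ANTIC is also May.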

Figure 3, Step 2: Remove CMP regions: control flow restructuring.
Modify G so that no CMP node exists, for any expression e.

 1  for each expression e do
 2      duplicate all CMP[e] nodes to create a copy of the CMP; n_Must is the copy of node n hosting AVAIL = Must:
        N := N ∪ {n_Must | n ∈ CMP[e]}
 3      attach the new nodes to perform the restructuring:
        E := (E ∪ {(u, n_Must) | (u, n) ∈ E ∧ u ∉ CMP[e] ∧ AVAIL_out[u, e] = Must}
                ∪ {(u_Must, n_Must) | (u, n) ∈ E ∧ u, n ∈ CMP[e]}
                ∪ {(n_Must, u) | (n, u) ∈ E ∧ n ∈ CMP[e] ∧ u ∉ CMP[e]})
             \ {(u, n) | (u, n) ∈ E ∧ n ∈ CMP[e] ∧ u ∉ CMP[e] ∧ AVAIL_out[u, e] = Must}
 4      for each node n ∈ CMP[e] do    (update data-flow solutions within the CMP and its copy)
 5          ANTIC_in[n_Must] := ANTIC_in[n];  ANTIC_out[n_Must] := ANTIC_out[n]
 6          AVAIL_in[n_Must] := AVAIL_in[n] := T^S;  AVAIL_out[n_Must] := AVAIL_out[n] := T^S
 7          AVAIL_in[n_Must, e] := AVAIL_out[n_Must, e] := Must;  AVAIL_in[n, e] := AVAIL_out[n, e] := No
 8      end for
 9      reanalyze availability inside both CMP copies:
10      for each expression e' not yet processed do
11          re-compute AVAIL_in[n, e'] and AVAIL_in[n_Must, e'] for all n ∈ CMP[e]
12      end for
13  end for

Figure 3, Step 3: Optimize: code motion.

   Insert[(n, m), e] = AVAIL_out[n, e] = No ∧ ANTIC_in[m, e] = Must
                       ∧ (AVAIL_in[m, e] = May ∨ Comp(m, e))
   Replace[n, e]     = Comp(n, e)    (the computation is replaced with the temporary t_e)

Figure 3: ComPRE: the algorithm for complete PRE.

2.1 Optimal Code-Motion PRE

Besides supporting a complete PRE, the notion of the CMP region also facilitates an efficient formulation of code-motion PRE, called CM-PRE. In this section, we show that our complete algorithm can be naturally constrained by prohibiting the restructuring, and that such a modification results in the same optimization as the optimal motion-only PRE [17, 24]. In comparison to ComPRE, the constrained CM-PRE algorithm bypasses the CMP removal; the last step (transformation) is unchanged (Figure 3). The first step (data-flow analysis) is modified with the goal of preventing hoisting across a node n when such motion would subsequently be blocked by a CMP region on each path flowing into node n. First, anticipability is computed as in ComPRE. Second, availability is modified to include detection of CMP nodes. When a CMP node is found, instead of propagating May-availability forward, the solution is adjusted to No. Such adjustment masks those value reuse opportunities that cannot be exploited without restructuring. The result of masking is that code motion is prevented from entering paths that cross a CMP region (see predicate Insert in Step 3 of Figure 3). The modified flow function for the AVAIL problem follows. The third line detects a CMP node. No-availability is now extended to mean that the value might be available
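Once AVAIL and ANTIC are solved, Step 3's Insert predicate is a purely local test on each edge. A sketch follows (the encoding and the diamond example in the test are our own):

```python
# Sketch (ours): Step 3's Insert predicate evaluated over all CFG edges.
MUST, NO, MAY = "Must", "No", "May"

def insertion_edges(edges, avail_out, avail_in, antic_in, comp):
    """Returns the edges (n, m) on which t_e := e must be inserted:
    no reuse arrives at n, hoisting to (n, m) is verified by ANTIC = Must,
    and the edge sinks into a May-availability path or a computation."""
    result = []
    for (n, m) in edges:
        if (avail_out[n] == NO and antic_in[m] == MUST
                and (avail_in[m] == MAY or comp[m])):
            result.append((n, m))
    return result
```

In the classic diamond (A computes e on one branch, M recomputes it at the merge), the predicate selects the edge into the computing node and the edge from the empty branch into the merge, making M's computation fully redundant.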

along some path but value reuse is blocked by a CMP region along that path.

   f_e(x) = Must  if Comp(n, e) ∧ Transp(n, e),
            No    if ¬Transp(n, e),
            No    if x = May ∧ ANTIC_in[n, e] = May,
            x     otherwise.

Given a maximal fixed point solution to the redefined AVAIL, CM-PRE performs the unchanged transformation phase (Figure 3, Step 3). It is easy to show that the resulting optimization is complete under the immutable shape of the control flow graph. The proof is analogous to that of Theorem 1: all original computations are removed and no computation has been inserted where an optimization opportunity not blocked by a CMP exists. Besides exploiting all opportunities, a PRE algorithm should guarantee that the live ranges of inserted temporary variables are minimal, in order to moderate the register pressure. The live range is minimal when the insertion point specified by the predicate Insert cannot be delayed, that is, moved further in the direction of control flow.

Theorem 3 (Shortest live ranges). Given the CMP-restructured (or original) control flow graph, ComPRE (CM-PRE) is optimal in that it minimizes the live range lengths of inserted temporary variables.

Proof. An initialization point Insert cannot be delayed either because it would become partially redundant, destroying completeness, or because its temporary variable is used in the immediate successor. □

Existing PRE algorithms find the live-range optimal placement in two stages. First, computations are hoisted as high as possible, maximizing the removal of redundancies. Later, the placement is corrected through the computation of delayability [24]. Our formulation specifies the optimal placement directly, as we never hoist into paths where a blocking CMP will subsequently be encountered. However, note that after the above redefinition, f_e is no longer monotone: given ANTIC_in[n, e] = May, x1 = May, x2 = Must, we have x1 ⊑ x2 but f_e(x1) = No ⋢ f_e(x2) = Must. Although a direct approach to solving such a system of equations may produce a conservatively imprecise solution, the desired maximal fixed point is easily obtained using bit-vector GEN/KILL operations as follows. First, compute ANTIC as in Figure 3. Second, solve the well-known availability property, denoted AV_all, which holds when the expression is computed along all incoming paths: AV_all ⇔ AVAIL = Must. Finally, we compute AV_some, which characterizes some-paths availability and also encapsulates CMP detection: AV_some ⇔ AVAIL ≠ No. The pair of solutions (AV_all, AV_some) can be directly mapped to the desired solution of AVAIL. The GEN and KILL sets [1] for the AV_some problem are given below. The value of the initial guess is false; the meet operator is the bit-wise or.

   GEN  = Comp ∧ Transp
   KILL = ¬Transp ∨ (AVAIL ≠ Must ∧ ANTIC ≠ Must)
        = ¬Transp ∨ (¬AV_all ∧ ANTIC ≠ Must)

The condition (AVAIL ≠ Must ∧ ANTIC ≠ Must) detects the CMP node. While it is less strict than that in Definition 2, it is equivalent for our purpose, as it is safe to kill when there is no reuse (AVAIL = No) or when there is no hoisting (ANTIC = No). The less strict condition is beneficial because computing and testing Must requires one bit per expression, while two bits are required for May. Consequently, we can substitute ANTIC ≠ Must with ¬AN_all, where AN_all is defined analogously to AV_all. As a result, we obtain the same implementation complexity as the algorithms in [17, 24]: three data-flow problems must be solved, each requiring one bit of solution per expression. In conclusion, the CMP region is a convenient abstraction for terminating hoisting when it would unnecessarily extend the live ranges. It also provides an intuitive way of explaining the shortest-live-range solution without applying the corrective step based on delayability [24]. Furthermore, the CMP-based, motion-only solution can be implemented as efficiently as existing shortest-live-range algorithms.

2.2 Reducible Restructuring

Duplicating a CMP region may destroy reducibility of the control flow graph. In Figure 1(c), for example, ComPRE resulted in a loop with two distinct entry nodes. Even though PoePRE preserves reducibility on the same loop (Figure 1(b)), like other restructuring-based optimizations [4, 10, 30], it too is plagued by introducing irreducibility. One way to deal with the problem is to perform all optimizations that presuppose single-entry loops prior to PRE. However, many algorithms for scheduling (which should follow PRE) rely on reducibility. After ComPRE, a reducible graph can be obtained with additional code duplication. An effective algorithm for normalizing irreducible programs is given in [23]. To suppress an unnecessary invocation of the algorithm, we can employ a simple test of whether irreducibility may be created by a region duplication. The test is based upon examining only the CMP entry and exit edges, rather than the entire program. Assuming we start from a reducible graph, restructuring will make a loop L irreducible only if multiple CMP exit edges sink into L, and at least one region entry is outside L (i.e., is not dominated by L's header node). If such a region is duplicated, the target nodes of the region exit edges may become the (multiple) loop entry nodes. Consider the

[Figure 4: Reducible restructuring. (See Figure 1(c).) (a) source program, with CMP[a+b] extended for reducibility to form a single loop; (b) reducible ComPRE of [a+b].]
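The GEN/KILL formulation above is a standard bit-vector problem; below is a sketch of one transfer step using Python integers as bit-vectors (the bit encoding and the width parameter are our own assumptions):

```python
# Sketch (ours): one transfer step of the AV_some bit-vector problem.
# Bit i of each word corresponds to candidate expression i; the meet
# over predecessors is bitwise OR (some-paths availability).
def av_some_transfer(x_in, comp, transp, av_all, antic_must, width):
    """x_in: incoming AV_some bits; comp/transp: local Comp/Transp bits;
    av_all: bits where AVAIL = Must; antic_must: bits where ANTIC = Must."""
    mask = (1 << width) - 1
    gen = comp & transp
    # kill when not transparent, or at a CMP node (~AV_all and ANTIC != Must)
    kill = (~transp | (~av_all & ~antic_must)) & mask
    return ((x_in & ~kill) | gen) & mask
```

An expression that arrives available on some path but hits a node that is neither fully available nor Must-anticipated is killed, which is exactly the CMP masking described in the text.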

loop in Figure 4(a). Two of the three exits of CMP[a+b] fall into the loop. After restructuring, they would become loop entries, as shown in Figure 1(c). Rather than applying a global algorithm like [23], a straightforward approach to making the affected loop reducible is to peel off a part of its body. The goal is to extend the replication scope so that the region exits sink onto a single loop node, which will then become the new loop entry. Such a node is the closest common postdominator (within the loop) of all the offending region exits and the original loop entry. Figure 4(a) highlights the node c+d whose duplication after CMP restructuring will restore reducibility of the loop. The postdominator of the offending exits is node Q, which becomes the new loop header.
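The reducibility pre-test described above examines only the region's entry and exit edges; a sketch follows (the set-based loop representation and the dominance predicate are our own encoding):

```python
# Sketch (ours): may duplicating this CMP region make loop L irreducible?
def may_break_reducibility(entry_edges, exit_edges, loop_nodes, header,
                           dominates):
    """entry_edges/exit_edges: edges (u, v) into/out of the CMP region;
    loop_nodes: the nodes of loop L; header: L's header node;
    dominates(a, b): True iff node a dominates node b."""
    exits_into_loop = [e for e in exit_edges if e[1] in loop_nodes]
    entry_outside = any(not dominates(header, u) for (u, _) in entry_edges)
    # multiple exits sinking into L, plus an entry outside L, can turn
    # the exit targets into multiple loop entry nodes after duplication
    return len(exits_into_loop) > 1 and entry_outside
```

When the test fires, the replication scope is extended to the common postdominator of the offending exits, as described above, instead of invoking a global normalization pass.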

Profile-Guided

build the largest connected subregion 3

4 5 6 7

select a node from R and collect all connected CMP nodes determine optimization benefit Rem(R;) carry out frequency analysis of AVAlL on R;

if profitable,

if

T(R;)

then

duplicate (lines 2-12 of Fig. 3) duplicate Ri

end for

end for 8 recompute the AVAlL Step 3: Optimize: code

solution, motion.

using

Figure 5: PgPRE: profile-guided

PRE

fi

from

Section

2.1

(unchanged) version of ComPRE.

While the CMP region is the smallest set of nodes whose duplication enables the desired code motion, its size is often prohibitive in practice. In this section, relying on the profile to estimate optimization benefit, complete PRE is made more practical by avoiding unprofitable code replication. First, we extend ComPRE by inhibiting restructuring in response to code duplication cost and the expected dynamic benefit. The resulting profile-guided algorithm duplicates a CMP region only when the incurred code growth is justified by a corresponding run-time gain from eliminating the redundancies. Second, the notion of the CMP region is combined with profiling to formulate a speculative code-motion PRE that is guaranteed to have a positive dynamic effect, despite impairing certain paths. The third algorithm integrates both restructuring and speculation and selects a profitable subgraph of the CMP for each. While profitably balancing the cost and benefit under a given profile is NP-hard, the empirically small number of hot program paths promises an efficient algorithm [4, 19]. Finally, to support profile guiding, we show how an estimate of the run-time gain thwarted by a CMP region can be obtained using edge profiles, frequency analysis [27], or path profiles [7].

3.1 Selective Restructuring

We model the profitability of duplicating a CMP region R with a cost-benefit threshold predicate T(R), which holds if the region optimization benefit exceeds a constant multiple of the region size. Our metric of benefit is the dynamic amount of computations whose elimination will be enabled after R is duplicated, denoted Rem(R). That is, T(R) = Rem(R) > c · size(R). When T(R) = true for each region R, the algorithm is equivalent to the complete ComPRE. When T(R) = false for each region, the algorithm reduces to the code-motion-only CM-PRE. Obviously, predicate T determines only a sub-optimal trade-off between exploiting PRE opportunities and limiting the code growth. In particular, it does not explicitly consider the instruction cache size and the increase in register pressure due to introduced temporary variables. We have chosen this form of T in order to avoid modeling complex interactions among compiler stages. In the implementation, T is supplemented with a code growth budget (for example, in [6], code is allowed to grow by about 20%).

First, we present an algorithm for computing the optimization benefit Rem(R). The method is based on the fact that the CMP scope localizes the entire benefit thwarted by the region: to compute the benefit, it suffices to examine only the paths within the region. Consider an expression e and its CMP region R = CMP[e]. For each region exit edge a = (n, m) (i.e., n ∈ CMP[e], m ∉ CMP[e]), the value of ANTICin[m, e] is either Must or No; otherwise m would be in CMP[e]. Let ExitMust(R) be the set of the Must exit edges. The dynamic benefit is derived from the observation that each time such an edge is executed, any outgoing path contains exactly one computation of e that can be eliminated if: i) R is duplicated and ii) the value of e is available at the exit edge. Let ex(a) be the execution frequency of edge a and p(AVAILout[n, e] = Must) the probability that the value e is available when n is executed. After the region is duplicated, the expected benefit connected with the exit edge a is ex(a) · p(AVAILout[n, e] = Must), which corresponds to the number of computations removed on all paths starting at a. The benefit of duplicating the region R is thus the sum of all exit edge benefits:

    Rem(R) = Σ_{a=(n,m) ∈ ExitMust(R)} ex(a) · p(AVAILout[n, e] = Must)

The probability p is computed from an edge profile using frequency analysis [27]. In the frequency domain, the probability of each data-flow fact occurring, rather than the fact's mere boolean meet-over-all-paths existence, is computed by incorporating the execution probabilities of control flow edges into the data-flow system. Because the frequency analyzer cannot exploit bit-vector parallelism, but instead computes data-flow solutions on floating point numbers, it is desirable to reduce the cost of calculating the probabilities. The CMP region lends itself to effectively restricting the scope of the program that needs to be analyzed. Because all region entry edges are either Must- or No-available, the probability of e being available on these edges is 1 and 0, respectively. Therefore, the probability p at any exit edge can only be influenced by the paths within the region. As a result, it is sufficient to perform the frequency analysis for expression e on CMP[e], using entry edges as a precise boundary condition for the CMP data-flow equation system. In Section 5 we reduce the cost of frequency analysis through a demand-driven approach.

The algorithm (PgPRE) that duplicates only profitable CMP regions is given in Figure 5. It is structured as its complete counterpart, ComPRE: after data-flow analysis, we proceed to eliminate CMP regions, separately for each expression. While in ComPRE it was sufficient to treat all nodes from a single CMP together, selective duplication benefits from dividing the CMP into disconnected subregions, if any exist. Intuitively, hoisting of a particular expression may be prevented by multiple groups of nodes, each in a different part of the procedure. Therefore, line 3 groups nodes from a connected subregion and frequency analysis determines the benefit of the group (line 4). After all profitable regions are eliminated, the motion-blocking effect of CMP regions remaining in the program must be captured. All that is needed is to apply the CM-PRE algorithm from Section 2.1 on the improved control flow graph. Blocked hoisting is avoided by recomputing availability (line 8) using the re-defined flow function f2 from Section 2.1, which asserts No-availability whenever a CMP is detected.
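The subregion grouping (line 3 of Figure 5) and the threshold test T(R) can be sketched as follows. This is a minimal illustration under assumed representations (`succ` adjacency, `ex` edge frequencies, `p_avail` per-node availability probabilities), not the paper's implementation:

```python
def connected_subregions(cmp_nodes, succ):
    """Partition the CMP node set into connected subregions (undirected
    connectivity restricted to CMP nodes); PgPRE treats each separately."""
    adj = {n: set() for n in cmp_nodes}
    for n in cmp_nodes:
        for m in succ.get(n, []):
            if m in adj:
                adj[n].add(m)
                adj[m].add(n)
    regions, seen = [], set()
    for start in cmp_nodes:
        if start in seen:
            continue
        region, work = set(), [start]
        while work:                      # collect all connected CMP nodes
            x = work.pop()
            if x not in region:
                region.add(x)
                work.extend(adj[x] - region)
        seen |= region
        regions.append(region)
    return regions

def rem(must_exits, ex, p_avail):
    """Rem(R): expected removed computations, summed over Must-anticipated
    exit edges a = (n, m): ex(a) * p(AVAILout[n, e] = Must)."""
    return sum(ex[a] * p_avail[a[0]] for a in must_exits)

def profitable(benefit, region_size, c=1.0):
    """Threshold predicate T(R) = Rem(R) > c * size(R)."""
    return benefit > c * region_size
```

The constant `c` plays the role of the cost-benefit factor in T(R); in an implementation it would be supplemented with a global code-growth budget.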

3.2 Speculative Code-Motion PRE

Figure 6: Speculative code-motion PRE. (Figure legend: AVAIL = Must / AVAIL = No entry edges; ANTIC = Must / ANTIC = No exit edges. Optimization benefit: removal − insertion = ex(e4) − ex(e2).)

In code-motion PRE, hoisting of a computation e is blocked whenever e would need to be placed on a control flow path p that does not compute e in the original program. Such speculative code motion is prevented because executing e along path p could a) raise spurious exceptions in e (e.g., overflow, wrong address), and b) outweigh the dynamic benefit of removing the original computation of e. The former restriction can be relaxed for instructions that cannot except, leading to safe speculation. New processor generations will support control-speculative instructions which will suppress raising the exception until the generated value is eventually used, allowing unsafe speculation [25]. The latter problem is solved in [20], where an aggressive code-motion PRE navigated by path profiles is developed. The goal is to allow speculative hoisting, but only into such paths on which dynamic impairment would not outweigh the benefit of eliminating the computation from its original position. Next, we utilize the CMP region to determine i) the profitability of speculative code motion and ii) the positions of speculative insertion points that minimize live ranges of temporary variables. Figure 6 illustrates the principle of speculative PRE [20]. Instead of duplicating the CMP region, we hoist the expression into all No-available entry edges. This makes all exits fully available, enabling complete removal of original computations along the Must exits. In our example, [a+b] is moved into the No-available region entry edge e2. This hoisting is speculative because [a+b] is now executed on each path going through e2 and e3, which previously did not contain the expression. The benefit is computed as follows. The dynamic amount of computations is decreased by the execution frequency ex(e4) of the Must-anticipable exit edge (following which a computation was removed), and increased by the frequency ex(e2) of the No-available entry edge (into which the computation was inserted).

Since speculation is always associated with a CMP region, we are able to obtain a simple (but precise) profitability test: speculative PRE of an expression is profitable if the total execution frequency of Must-anticipable exit edges exceeds that of No-available entry edges. Note that the benefit is calculated locally by examining only entry/exit edges, and not the paths within the region, which was necessary in selective restructuring. Hence, the speculative benefit is independent from branch correlation, and edge profiles are as precise as path profiles in the case of speculative-motion PRE. As far as temporary live ranges are concerned, insertion into entry edges results in a shortest-live-range solution, and Theorem 3 still holds.

3.3 Partial Restructuring, Partial Speculation

In Section 3.1, edge profiles and frequency analysis were used to estimate the benefit Rem of duplicating a region. An alternative is to use path profiles [3, 7], which are convenient for establishing cost-benefit optimization trade-offs [4, 19, 20]. To arrive at the value of the region benefit with a path profile, it is sufficient to sum the frequencies of Must-Must paths, which are paths that cross any region entry edge that is Must-available and any exit edge that is Must-anticipated. These are precisely the paths along which value reuse exists but is blocked by the region. While there is an exponential number of profiled acyclic paths, only 5.4% of procedures execute more than 50 distinct paths in spec96 [19]. This number drops to 1.3% when low-frequency paths accounting for 5% of total frequency are removed. Since we can afford to approximate by disregarding these infrequent paths, summing individual path frequencies constitutes a feasible algorithm for many CMP regions. Furthermore, because they encapsulate branch correlation, path profiles compute the benefit more precisely than frequency analysis based on correlation-insensitive edge profiles. Moreover, the notion of individual CMP paths leads to a better profile-guided PRE algorithm. Considering the CMP region as an indivisible duplication unit is overly conservative. While it may not be profitable to restructure the entire region, the region may contain a few Must-Must paths that are frequently executed and are inexpensive to duplicate. Our goal is to find the largest subset (frequency-wise) of region paths that together pass the threshold test T(R). Similarly, speculative hoisting into all entry edges may fail the profitability test. Instead, we seek to find a subset of entry edges that maximizes the speculative benefit. In this section, we show how partial restructuring and speculation are carried out and combined.

Partial speculation selects for speculative insertion only a subset I of the No region entries. The selection of entries influences which subset R of region exits will be able to exploit value reuse. R consists of all Must exits that will become Must-available due to the insertions in I. The rationale behind treating entries separately is that some entries may enable little value reuse, hence they should not be speculated. Note that No entry edges are the only points where speculative insertion needs to be considered: insertions inside the region would be partially redundant; insertions outside the region would extend the live-ranges. Partial speculation is optimal if the difference of total frequencies of R and I is maximal (but non-negative). Although this problem is NP-hard, the small number of entry edges observed in practice (typically less than 10) makes the problem tractable. An interesting observation is that to determine optimal partial speculation, a) edge profiles are not inferior to path profiles and b) frequency analysis is not required. Therefore, to exploit the power of path profiles, partial restructuring, rather than (speculative) code motion alone, must be used. This becomes more intuitive once we realize that without control flow restructuring, one is restricted to consider only an individual edge (but not a path) for expression insertion and removal. To compare the CMP-based partial speculation with the speculative PRE in [20], we show how to efficiently compute the benefit by defining the CMP region and how to apply edge profiles with the same precision as path profiles. In acyclic code, we achieve the same precision; in cyclic code, we are more precise in the presence of loop-carried reuse.

The task of partial restructuring is to localize a subgraph of the CMP that has a small size but contains many hot Must-Must paths. By duplicating only such a subregion, we are effectively peeling off only hot paths with few instructions. In Figure 1(e), only the (presumably hot) path through the node Q was separated. Again, the problem of finding an optimal subregion, one whose benefit is maximized but that passes the T(R) predicate and is smaller than a constant budget, is NP-hard. However, the empirically very small number of hot paths promises an efficient exhaustive-search algorithm.

Integrating partial speculation and restructuring offers additional opportunities for improving the cost-benefit ratio. We are no longer restricted to peeling off hot Must-Must paths and/or selecting No-entries for speculation. When the high frequency of a No entry prevents speculation, we can peel off a hot No-available path emanating from the entry, thereby reducing entry edge frequency and allowing the speculation, at the cost of some code duplication. Figure 7(a) shows an example program annotated with an edge profile. Because peeling hot Must-Must paths from the highlighted CMP([c+d]) would duplicate all blocks except S, we try speculation. To eliminate the redundancy at the CMP exit edge Y with frequency ex(Y) = 100, a computation must be inserted into No-entries B and C. While B is low-frequency (10), C is not (100), hence the speculation is disadvantageous, as ex(Y) = 100 < ex(B) + ex(C) = 10 + 100. Now assume that the exit branch in Q is strongly biased and the path C, Q, X has a frequency of 100. That is, after edge C is executed, the execution will always follow to X. We can peel off this No-available path, as shown in (b), effectively moving the speculation point C off this path. After peeling, the frequency of C becomes 0 and the speculation is profitable, ex(Y) = 100 > ex(B) + ex(C) = 10 + 0.

Figure 7: Integrating speculation and restructuring. (a) source program with CMP([c+d]); speculation not profitable. (b) No-available path peeled off; speculation made profitable.

4 Experiments

We performed the experiments using the HP Labs VLIW back-end compiler elcor, which was fed spec95 benchmarks that were previously compiled, edge-profiled, and inlined (only spec95int) by the Impact compiler. Table 1 shows program sizes in the total number of nodes and expressions. Each node corresponds to one intermediate statement. Memory requirements are indicated by the column max space, which gives the largest nodes-expressions product among all procedures. The running time of our rather inefficient implementation behaved quadratically in the number of procedure nodes; for a procedure with 1,000 nodes, the PRE time was about 5 seconds on a PA-8000. Typically, the complete PRE ran faster than the subsequent dead code elimination.

Experiment 1: Disabling effects of CMP regions. The column labeled optimizable gives the percentage of expressions that reuse value along some path; 13.9% of (static) expressions have partially redundant computations. The next column, prevented-CMP, reports the percentage of optimizable expressions whose complete optimization by code motion is prevented by a CMP region. Code-motion PRE will fail to fully optimize 30.5% of optimizable expressions. For comparison, column prevented-POE reports expressions that will require restructuring in PoePRE.

Experiment 2: Loop invariant expressions. Next, we determined what percentage of loop invariant (LI) expressions can be removed from their invariant loops with code motion. The column loop invar shows the percentage of optimizable expressions that pass our test of loop-invariance. The following column gives the percentage of LI expressions that have a CMP region; an average of 72.5% of LI computations cannot be hoisted from all enclosing invariant loops without restructuring.

Experiment 3: Eliminated computations. The column global CSE reports the dynamic amount of computations removed by global common subexpression elimination; this corresponds to all full redundancies. The column complete PRE gives the dynamic amount of all partially redundant statements. The fact that strictly partial redundancies contribute only 1.7% (the difference between complete PRE and global CSE) may be due to the style of Impact's intermediate code (e.g., multiple virtual registers for the same variable). We expect a more powerful redundancy analysis to perform better. Figure 8 plots the dynamic amount of strictly partial redundancies removed by various PRE techniques. Code-motion PRE yields only about half the benefit of a complete PRE. Furthermore, speculation results in near-complete PRE for most benchmarks, even without special hardware support (i.e., safe speculation). Speculation was carried out on the CMP as a whole. Note that the graph accounts for the dynamic impairment caused by speculation.
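The local profitability test for speculation compares only entry/exit edge frequencies. A minimal sketch (the edge and profile representations are assumptions for illustration):

```python
def speculation_profitable(entry_edges, exit_edges, avail, antic, ex):
    """Speculative PRE of an expression is profitable iff the total
    frequency of Must-anticipated exit edges (removals enabled) exceeds
    the total frequency of No-available entry edges (insertions made)."""
    gain = sum(ex[a] for a in exit_edges if antic[a] == 'Must')
    cost = sum(ex[a] for a in entry_edges if avail[a] == 'No')
    return gain > cost
```

On the Figure 7 example, ex(Y) = 100 against ex(B) + ex(C) = 10 + 100 fails the test; after the hot No-available path is peeled off and ex(C) drops to 0, the test succeeds.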


Table 1: Experience with PRE based on control flow restructuring.

Figure 8: Benefit of various PRE algorithms: dynamic opcount decrease due to strictly partial redundancies. Each algorithm also completely removes full redundancies.

The measurements indicate that an ideal PRE algorithm should integrate both speculation and restructuring. Using restructuring when speculation would waste a large portion of benefit will provide an almost complete PRE with small code growth.

Experiment 4: Code growth. Finally, we compare the code growth incurred by ComPRE and PoePRE. To make the experiment feasible, we limited procedure size to 3,000 nodes and made the comparison only on procedures that did not exceed the limit in either algorithm. Overall, ComPRE created about three times less code growth than PoePRE.

5 Demand-Driven Frequency Analysis

Not amenable to bit-vector representation, frequency analysis [27] is an expensive component of profile-guided optimizers. We have shown that ComPRE allows restricting the scope of frequency analysis within the CMP region without a loss of accuracy. However, in large CMP regions the cost may remain high, and path profiles cannot be used as an efficient substitute when numerous hot paths fall into the region. One method to reduce the cost of frequency analysis is computing on demand only the subset of the data-flow solution that is needed by the optimization. In this section, we develop a demand-driven frequency analyzer which reduces data-flow analysis time by a) examining only nodes that contribute to the solution and, optionally, b) terminating the analysis prematurely, when the solution is determined with the desired precision. Besides PRE, the analyzer is suitable for optimizations where acceptable running time must be maintained by restricting analysis scope, as in run-time optimizations [5] or interprocedural branch removal [10].

Frequency analysis computes the probability that a data-flow fact will occur during execution. Therefore, the probability "lattice" is an infinite chain of real numbers. Because existing demand-driven analysis frameworks are built on iterative approaches, they only permit lattices of finite size [18] or finite height [22, 29] and hence cannot derive a frequency analyzer. We overcome this limitation by designing the demand-driven analyzer based upon elimination data-flow methods [28] whose time complexity is independent of the lattice shape. We have developed a demand-driven analysis framework motivated by the Allen-Cocke interval elimination solver [2]. Next, using the framework, a demand-driven algorithm for general frequency data-flow analysis was derived [8]. We present here the frequency solver specialized for the problem of availability.

Definitions. Assume a forward data-flow problem specified


with an equation system

    x_n = f_n(∧_{m ∈ pred(n)} x_m),    i.e.,    (x_n^1, ..., x_n^S) = (f_n^1(∧_{m ∈ pred(n)} x_m^1), ..., f_n^S(∧_{m ∈ pred(n)} x_m^S)).

Vector x_n = (x_n^1, ..., x_n^S) is the solution for a node n; variable x_n^e denotes the fact associated with expression e. The equation system induces a dependence graph EG whose nodes are variables x_n^e and whose edges represent flow functions f_n^e: an edge (x_m^e, x_n^e) exists if the value of x_n^e is computed from x_m^e, m ∈ pred(n). The graph EG is called an exploded graph [22]. The data-flow problems underlying ComPRE are separable, hence x_n^e only depends on x_m^e. In value-based PRE [9], constant propagation [29], and branch correlation analysis [10], edges (x_m^d, x_n^e), d ≠ e, may exist, complicating the analysis. The analyzer presented here handles such general exploded graphs.

Requirements. The demand-driven analyzer grew out of four specific design requirements:

1. Demand-driven. Rather than computing x_n for each node n, we determine only the desired x_n^e, i.e., the solution for expression e at a node n. Analysis speedup is obtained by further requiring that only nodes transitively contributing to the value of x_n^e are visited and examined. To guarantee worst-case behavior, when solutions for all EG nodes are desired, the solver's time complexity does not exceed that of the exhaustive Allen-Cocke method, O(N^2), where N is the number of EG nodes.

2. Lattice-independent. The amount of work per node does not depend on lattice size, only on the EG shape.

3. On-line. The analysis is possible even when EG is not completely known prior to the analysis. To save time and memory, our algorithm constructs EG as the analysis progresses. The central idea of on-demand construction is to determine a flow function f_n^e only when its target variable x_n^e is known to contribute to the desired solution. Furthermore, the solver must produce the solution even when EG is irreducible, which can happen even when the underlying CFG is reducible.

4. Informed. In the course of frequency analysis, the contribution weight of each examined node to the desired solution must be known. This information is used to develop a version of the analyzer that approximates frequency by disregarding low-contribution nodes with the goal of further restricting analysis scope.

The exhaustive interval data-flow analysis [2] computes x_n for all n as follows. First, loop headers are identified to partition the graph into hierarchic acyclic subregions, called intervals. Second, forward substitution of equations is performed within each interval to express each node solution in terms of its loop header. The substitution proceeds in the interval order (reverse postorder), so that each node is visited only once. Third, mutual equation dependences across loop back-edges are reduced with a loop-breaking rule L: x_n = g(x_n, x_k) ⇒ x_n = L(g(x_k)). The second and third steps remove cyclic dependences from all innermost loops in EG; they are repeated until all nesting levels are processed and all solutions are expressed in terms of the start node, which is then propagated to all previously reduced equations in the final propagation phase [2].

The demand-driven interval analysis substitutes only those equations and reduces only those intervals on which the desired x_n^e is transitively dependent. To find the relevant equations, we back-substitute equations (flow functions) into the right-hand side of x_n^e along the EG edges. The edges are added to the exploded graph on-line, whenever a new EG node is visited, by first computing the flow function of the node and then inserting its predecessors into the graph. As in [2], we define an EG interval to be a set of nodes dominated by the sink of any back-edge. In an irreducible EG, a back-edge is each loop edge sinking onto a loop entry node. Because the EG shape is not known prior to analysis, on-line identification of EG intervals relies only on the structure of the underlying control flow graph. When the CFG node of an EG node x is a CFG loop entry, then x may be an EG loop entry, and we conservatively assume it is an interval head. Within each interval, back-substitutions are performed in reversed interval order. Such an order provides lattice-independence, as each equation needs to be substituted only once per interval reduction, and there are at most N reductions. To find interval order on an incomplete EG, we observe that within each EG interval, the order is consistent with the reverse postorder CFG node numbering. To break cyclic dependencies along an interval back-edge, the loop is reduced before we continue into the preceding interval, recursively invoking reductions of nested loops. This process achieves demand analysis of relevant intervals. The desired solution is obtained when x_n^e is expressed exclusively using constant terms. At this point, we have also identified an EG subgraph that contributes to x_n^e, and removed from it all cyclic dependences. A forward substitution on the subgraph will yield solutions for all subgraph nodes, which can be cached in case they are later desired (worst-case running time). This step corresponds to the propagation phase in [2], and to caching in [18, 29].

The framework instance calculates the probability of expression e being available at the exit of node n during the execution: x_n^e = p(AVAILout[n, e] = Must) ∈ R. Let p(a) denote the probability of edge a being taken, given its sink node is executed. We relate the edge probability to the sink (rather than the source, as in exhaustive analysis [27]) because the demand solver proceeds in the backward direction. The frequency flow function returns probability 1 when the node computes the expression e and 0 when it kills the expression. Otherwise, the sum of probabilities on predecessors weighted by edge execution probabilities is returned. Predicates Comp and Transp are defined in Figure 3:

    x_n^e = 1.0                                   if Comp(n, e) ∧ Transp(n, e),
          = 0.0                                   if ¬Transp(n, e),
          = Σ_{m ∈ pred(n)} p((m, n)) · x_m^e     otherwise.

The demand frequency analyzer is shown in Figure 9. Two data structures are used: sol accumulates the constant terms of the desired probability x_n^e; rhs is the current right-hand side of x_n^e after all back-substitutions. The variables sol and rhs are organized as a stack, the top being used in the currently analyzed interval. The algorithm treats rhs both as a symbolic expression and as a working set of pending nodes (or yet unsubstituted variables, to be precise). For example, the value of rhs may be 0.25*m + 0.4*k, where the weights are contributions of nodes m and k to the desired probability x_n^e. If e is never available at m, and is available at k with probability 0.5, then it is available at node n with probability 0.25 * 0 + 0.4 * 0.5 = 0.2. More formally, the contribution weight of a node represents the probability that a path from that node to n without a computation or a kill of the expression e will be executed.

First, the rhs is set to 1.0 * n in line 1. Then, flow functions are back-substituted into rhs in post-order (line 3). Substitutions are repeated until all variables have been replaced with constants (line 2), which are accumulated in sol. If a substituted node x computes the expression e, its weight rhs[x] is added to the solution and x is removed from the rhs by the assignment rhs[x] := 0.0 (line 6). In the simple case when x is not a loop entry node (line 12), its contribution c is added to each predecessor's contribution, weighted by the edge probability p. If x is a loop entry node (line 8), then before continuing to the loop predecessor, all self-dependences of x are found in a call to reduce_loop. The procedure reduce_loop mimics the main loop (lines 1-5) but pushes new entries on the stacks to initiate a reduction of a new interval, and also marks the loop entry node to stop when back-substitution has collected cyclic dependences along all cyclic paths on the back-edge (y, x). The result of reduce_loop is returned in a sol-rhs pair (s, r), where s is the constant and r the set of unresolved variables, e.g., x = r + s = 0.3x + 0.1. If EG is reducible, the set r contains only x. The value r[x] = 0.3 is the weight of x's self-dependence, which is removed by the loop-breaking rule derived for frequency analysis from the sum of an infinite geometric sequence (lines 10-11). After the algorithm terminates, the stack visited (line 14) specifies the order in which forward substitution is performed to cache the results. Also shown in Figure 9 is an execution trace of the demand-driven analysis. It computes the probability that the expression computed in nodes F, H, and killed in A, D, is available at node C. All paths where availability holds are highlighted.

Approximate Data-Flow Analysis. Often, it is necessary to sacrifice precision of the analysis for its speed. We define here a notion of approximate data-flow information, which allows the analyzer a predetermined degree of conservative imprecision. For example, given a 5% imprecision level (ε = 0.05), the analyzer may output "available: 0.7" when the maximal fixed point solution is "available: 0.75." The intention of permitting underestimation is to reduce the analysis cost. When the analyzer is certain that the contribution of a node (and all its incoming paths) to the overall solution is less than the imprecision level, it can avoid analyzing the paths and assume at the node the most conservative data-flow fact. Because the algorithm in Figure 9 was designed to be informed, it naturally extends to approximate analysis. By knowing the precise contribution weight of each node as the analysis progresses, whenever the sum of weights in rhs at the highest interval level falls below ε (the while-condition in line 2), we can terminate and guarantee the desired precision. An alternative scenario is more attractive, however. When a low-weight node is selected in line 3, we throw it away. We can keep disregarding such nodes until their total weights exceed ε. In essence, this approach performs analysis along hot paths [4], and on-line region formation [21].

The idea of terminating the analysis before it could find the precise solution was first applied in the implementation of interprocedural branch elimination [10]. Stopping after visiting a thousand nodes resulted in two orders of magnitude of analysis speedup, while most optimization opportunities were still discovered. However, without the approximate frequency analyzer developed in this paper, we were unable to a) determine the benefit of restructuring, b) select a profitable subset of nodes to duplicate, and c) get a bound on the amount of opportunities lost due to early termination.

Algorithm complexity. In an arbitrary exploded graph, reduce_loop may be (recursively) invoked on each node. Hence, each node may be visited at most N_E times, where N_E = N·S is the number of EG nodes, N the number of CFG nodes, and S the number of optimized expressions. With caching of results, each node is processed in at most one invocation of the algorithm in Figure 9, yielding a worst-case time complexity of O(N_E^2) = O(N^2·S^2). Since real programs have loop nesting level bounded by a small constant, the expected complexity is O(N·S), as in [2].

Although most existing demand-driven data-flow algorithms ([18, 22], and [29] in particular) can be viewed (like ours) to operate on the principle of back-substituting flow functions into the right-hand side of the target variable, they do not focus on specifying a profitable order of substitutions (unlike ours) but rely instead on finding the fixed point iteratively. Such an approach fails on infinite-height lattices, where CFG loops keep always iterating towards a better approximation of the solution. Note that breaking each control flow cycle by inserting a widening operator [13] does not appear to resolve the problem because widening is a local adjustment primarily intended to approximate the solution. Therefore, in frequency analysis, too many iterations would be required to achieve an acceptable approximation. Instead of fixing the equation system locally, a global approach of structurally identifying intervals and reducing their cyclic dependences seems necessary. We have shown how to identify intervals and perform substitutions in interval order on demand, even when the exploded graph is not known prior to the analysis. We believe that existing demand methods can be extended to operate in a structural manner, enabling the application of loop-breaking rules. This would make the methods reminiscent of the elimination algorithms [28].

Figure 9: Demand-driven frequency analysis for availability of computations, and a trace of its execution. (The figure's pseudocode and execution trace are not reproduced here; the traced query is n = C, with final probability p = 0.2818.)

6 Conclusion and Related Work

The focus of this paper is to improve program transformations that constitute value-reuse optimizations commonly known as Partial Redundancy Elimination (PRE). In the long history of PRE research and implementation, three distinct transformations can be identified. The seminal paper by Morel and Renvoise [26] and its derivations [11, 14, 15, 16, 17, 24] employ pure, non-speculative code motion. Second, the complete PRE by Steffen [30] is based on control flow restructuring. Third, navigated by path profile information, Gupta et al. apply speculative code motion in order to avoid code-motion obstacles by controlled impairment of some paths [20].

In this work, we defined the code-motion-preventing (CMP) region, which is a CFG subgraph localizing adverse effects of control flow on the desired value reuse. The notion of the CMP is applied to enhance and integrate the three existing PRE transformations in the following ways:

1. Code motion and restructuring are integrated to remove all redundancies at minimal code growth cost (ComPRE).
2. Morel and Renvoise's original method is expressed as a restricted (motion-only) case of the complete algorithm (CM-PRE).
3. We develop an algorithm whose power adjusts continually between the motion-only and the complete PRE in response to the program profile (PgPRE).
4. We demonstrate that speculation can be navigated precisely by edge profiles alone.
5. Path profiles are used to integrate the three transformations and balance their power at the level of CMP paths.

While PRE is significantly improved through the effective program transformations presented in this paper, a large orthogonal potential lies in detecting more redundancies. Some techniques have used powerful analysis to uncover more value reuse than the traditional PRE analysis [9, 11]. However, using only code motion, they fail to completely exploit the additional reuse opportunities. Thus, the transformations presented here are applicable in other styles of PRE as well, for example in the elimination of loads.

Ammons and Larus [4] developed a constant propagation optimization based on restructuring, namely on peeling of hot paths. In their analysis/transformation framework, restructuring is used not only to exploit optimization opportunities previously detected by the analysis, as is our case, but also to improve the analysis precision by eliminating control flow merges from the hot paths. Even though our PRE cannot benefit from hot path separation (our distributive data-flow analysis preserves reuse opportunities across merges), a more complicated analysis (e.g., redundancy of array bound checks) would be improved by their approach. After the analysis, their algorithm recombines separated paths that present no useful opportunities. It is likely that path recombination can be integrated with code motion, as presented in this paper, to further reduce the code growth.

In a global view, we have identified four main issues with path-sensitive program optimizations [8]: a) solving non-distributive problems without conservative approximation (e.g., non-linear constant propagation), b) collecting distinct opportunities (e.g., a variable has a different constant along each path), c) exploiting distinct opportunities (e.g., enabling folding of path-dependent constants through restructuring), and d) directing the analysis effort towards hot paths. In the approach of Ammons and Larus, all four issues are attacked uniformly by separation of hot paths, their subsequent individual analysis, and recombination. Our approach is to reserve restructuring for the actual transformation. This implies a different overall strategy: a) we solve non-distributive problems precisely along all paths by customizing the data-flow name space [9], b) we collect distinct opportunities through demand-driven analysis as in branch elimination [10], which is itself a form of constant propagation, c) we exploit all profitable opportunities with economical transformations, and d) we avoid infrequent program regions using the approximate frequency analysis (the last three presented in this paper).

Acknowledgments

We are indebted to the elcor and Impact compiler teams for providing their experimental infrastructure. Sadun Anik, Ben-Chung Cheng, Brian Dietrich, John Gyllenhaal, and Scott Mahlke provided invaluable help during the implementation and experiments. Comments from Glenn Ammons, Evelyn Duesterwald, Jim Larus, Mooly Sagiv, Bernhard Steffen, and the anonymous reviewers helped to improve the presentation of the paper. This research was partially supported by NSF PYI Award CCR-9157371, NSF grant CCR-9402226, and a grant from Hewlett-Packard to the University of Pittsburgh.

References

[1] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1986.

[2] F. E. Allen and J. Cocke. A program data flow analysis procedure. CACM, 19(3):137-147, March 1976.

[3] Glenn Ammons, Thomas Ball, and James R. Larus. Exploiting hardware performance counters with flow and context sensitive profiling. In Proceedings of the ACM SIGPLAN '97 Conference on Programming Language Design and Implementation, pages 85-96, 1997.

[4] Glenn Ammons and James R. Larus. Improving data-flow analysis with path profiles. In Proceedings of the ACM SIGPLAN '98 Conference on Programming Language Design and Implementation, 1998.

[5] Joel Auslander, Matthai Philipose, Craig Chambers, Susan J. Eggers, and Brian N. Bershad. Fast, effective dynamic compilation. In Proceedings of the ACM SIGPLAN '96 Conference on Programming Language Design and Implementation, pages 149-159, May 1996.

[6] A. Ayers, R. Schooler, and R. Gottlieb. Aggressive inlining. In Proceedings of the ACM SIGPLAN '97 Conference on Programming Language Design and Implementation, pages 134-145, June 1997.

[7] Thomas Ball and James R. Larus. Efficient path profiling. In 29th Annual IEEE/ACM International Symposium on Microarchitecture, pages 46-57, 1996.

[8] Rastislav Bodik. Path-Sensitive Compilation. PhD thesis, University of Pittsburgh, in preparation.

[9] Rastislav Bodik and Sadun Anik. Path-sensitive value-flow analysis. In Conference Record of the 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 1998.

[10] Rastislav Bodik, Rajiv Gupta, and Mary Lou Soffa. Interprocedural conditional branch elimination. In Proceedings of the ACM SIGPLAN '97 Conference on Programming Language Design and Implementation, pages 146-158, June 1997.

[11] Preston Briggs and Keith D. Cooper. Effective partial redundancy elimination. In Proceedings of the ACM SIGPLAN '94 Conference on Programming Language Design and Implementation, pages 159-170, June 1994.

[12] F. Chow, S. Chan, R. Kennedy, S.-M. Liu, R. Lo, and P. Tu. A new algorithm for partial redundancy elimination based on SSA form. In Proceedings of the ACM SIGPLAN '97 Conference on Programming Language Design and Implementation, pages 273-286, June 1997.

[13] P. Cousot and R. Cousot. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Conference Record of the 4th ACM Symposium on Principles of Programming Languages, pages 238-252, January 1977.

[14] Dhananjay M. Dhamdhere, Barry K. Rosen, and F. Kenneth Zadeck. How to analyze large programs efficiently and informatively. In Proceedings of the ACM SIGPLAN '92 Conference on Programming Language Design and Implementation, pages 212-223, July 1992.

[15] Dhananjay M. Dhamdhere. Practical adaptation of the global optimization algorithm of Morel and Renvoise. ACM Transactions on Programming Languages and Systems, 13(2):291-294, April 1991.

[16] K. Drechsler and M. Stadel. A solution to a problem with Morel and Renvoise's "Global optimization by suppression of partial redundancies". ACM Transactions on Programming Languages and Systems, 10(4):635-640, October 1988.

[17] K. Drechsler and M. Stadel. A variation of Knoop, Rüthing, and Steffen's lazy code motion. ACM SIGPLAN Notices, 28(5), May 1993.

[18] Evelyn Duesterwald, Rajiv Gupta, and Mary Lou Soffa. A practical framework for demand-driven interprocedural data flow analysis. ACM Transactions on Programming Languages and Systems, 19(6):992-1030, November 1997.

[19] R. Gupta, D. Berson, and J. Z. Fang. Resource-sensitive profile-directed data flow analysis for code optimization. In 30th Annual IEEE/ACM International Symposium on Microarchitecture, pages 358-368, December 1997.

[20] R. Gupta, D. Berson, and J. Z. Fang. Path profile guided partial redundancy elimination using speculation. In IEEE International Conference on Computer Languages, May 1998.

[21] Richard E. Hank, Wen-Mei W. Hwu, and B. Ramakrishna Rau. Region-based compilation: An introduction and motivation. In 28th Annual IEEE/ACM International Symposium on Microarchitecture, Ann Arbor, Michigan, 1995.

[22] Susan Horwitz, Thomas Reps, and Mooly Sagiv. Demand interprocedural dataflow analysis. In Third ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 104-115, October 1995.

[23] Johan Janssen and Henk Corporaal. Controlled node splitting. In Compiler Construction, 6th International Conference, volume 1060 of Springer Lecture Notes in Computer Science, pages 44-58, Sweden, April 1996.

[24] Jens Knoop, Oliver Rüthing, and Bernhard Steffen. Optimal code motion: Theory and practice. ACM Transactions on Programming Languages and Systems, 16(4):1117-1155, 1994.

[25] S. A. Mahlke, W. Y. Chen, R. A. Bringmann, R. E. Hank, W.-M. W. Hwu, B. R. Rau, and M. S. Schlansker. Sentinel scheduling for VLIW and superscalar processors. ACM Transactions on Computer Systems, 11(4):376-408, 1993.

[26] E. Morel and C. Renvoise. Global optimization by suppression of partial redundancies. CACM, 22(2):96-103, 1979.

[27] G. Ramalingam. Data flow frequency analysis. In Proceedings of the ACM SIGPLAN '96 Conference on Programming Language Design and Implementation, pages 267-277, June 1996.

[28] Barbara G. Ryder and Marvin C. Paull. Elimination algorithms for data flow analysis. ACM Computing Surveys, 18(3):277-316, September 1986.

[29] Mooly Sagiv, Thomas Reps, and Susan Horwitz. Precise interprocedural dataflow analysis with applications to constant propagation. Theoretical Computer Science, 167(1-2):131-170, 1996.

[30] Bernhard Steffen. Property oriented expansion. In Proceedings of the International Static Analysis Symposium (SAS'96), volume 1145 of LNCS, pages 22-41, Germany, September 1996. Springer.
