Conversation
JeffBezanson approved these changes on Jul 21, 2020
In #36508 we decided after some consideration not to add the `stmtinfo` to the `CodeInfo` object, since that info would never be used for codegen. However, this also means that external `AbstractInterpreter`s that would like to cache pre-optimization results cannot simply cache the unoptimized `CodeInfo` and later feed it into the optimizer. Instead they would need to cache the whole `OptimizationState` object, or perhaps convert to `IRCode` before caching. At the moment, however, we eagerly drop the `OptimizationState` wrapper as soon as we decide not to run the optimizer. This refactors things to keep the `OptimizationState` around for unoptimized methods, only dropping it right before caching, in a way that can be overridden by an external `AbstractInterpreter`.

We run into the inverse problem during constant propagation, where inference would like to peek at the results of optimization in order to decide whether constant propagation is likely to be profitable. Of course, if optimization hasn't actually run yet for this `AbstractInterpreter`, this doesn't work. Factor out this logic so that an external interpreter can override the heuristic. E.g. for my AD interpreter, I'm thinking that just looking at the vanilla function and checking its complexity would be a good heuristic (since the AD'd version is supposed to give the same result as the vanilla function, modulo capturing some additional state for the reverse pass).
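To illustrate the two extension points described above, here is a minimal sketch of how an external `AbstractInterpreter` might use them. The hook names (`transform_result_for_cache`, `constprop_profitable`) and the size threshold are illustrative assumptions, not the actual names introduced by this PR:

```julia
# Hypothetical sketch -- the overridable-hook names below are illustrative,
# not the real Core.Compiler API surface.

struct ADInterpreter <: Core.Compiler.AbstractInterpreter
    native::Core.Compiler.NativeInterpreter
    cache::Dict{Core.MethodInstance,Any}  # holds pre-optimization results
end

# (1) Instead of stripping the OptimizationState wrapper before caching,
# an external interpreter keeps it so the optimizer can run later on demand.
function transform_result_for_cache(interp::ADInterpreter, opt_state)
    return opt_state  # keep the wrapper; the native interpreter would drop it
end

# (2) Override the constant-propagation profitability heuristic: since the
# AD'd code is supposed to mirror the vanilla function, inspect the vanilla
# function's complexity instead of (not-yet-available) optimization results.
function constprop_profitable(interp::ADInterpreter, mi::Core.MethodInstance)
    src = Base.uncompressed_ir(mi.def::Method)
    return length(src.code) < 100  # crude size-based complexity check
end
```

The point of the sketch is only the shape of the overrides: the caching transform is applied right before insertion into the code cache, and the profitability check consults the original method rather than optimizer output.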
simeonschaub pushed a commit to simeonschaub/julia that referenced this pull request on Aug 11, 2020