
Refactor OptimizationState lifetime #36700

Merged
Keno merged 1 commit into master from kf/optcache on Aug 11, 2020

Conversation


@Keno Keno commented Jul 17, 2020

In #36508 we decided after some consideration not to add the `stmtinfo`
to the `CodeInfo` object, since this info would never be used for
codegen. However, this also means that external AbstractInterpreters
that would like to cache pre-optimization results cannot simply cache
the unoptimized `CodeInfo` and then later feed it into the optimizer.
Instead they would need to cache the whole `OptimizationState` object,
or maybe convert to `IRCode` before caching. However, at the moment
we eagerly drop the `OptimizationState` wrapper as soon as we
decide not to run the optimizer. This refactors things to keep
the `OptimizationState` around for unoptimized methods, only dropping
it right before caching, in a way that can be overridden by
an external AbstractInterpreter.
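
As a rough sketch of the intended usage (the hook name `cache_transform` and its signature here are illustrative, not the exact API added by this PR), an external AbstractInterpreter could stash the `OptimizationState` at the caching step instead of letting it be dropped:

```julia
# Sketch only: relies on Core.Compiler internals and a hypothetical
# override point, so treat the names below as assumptions.
using Core.Compiler: AbstractInterpreter, NativeInterpreter, OptimizationState

struct CachingInterpreter <: AbstractInterpreter
    native::NativeInterpreter
    # Maps method instances to their pre-optimization state so the
    # optimizer can be run lazily at some later point.
    unopt_cache::IdDict{Core.MethodInstance,OptimizationState}
end

# Hypothetical override point: instead of stripping the
# OptimizationState wrapper right before caching, keep it around
# and hand only the unoptimized CodeInfo to the regular cache.
function cache_transform(interp::CachingInterpreter,
                         mi::Core.MethodInstance,
                         opt::OptimizationState)
    interp.unopt_cache[mi] = opt
    return opt.src
end
```

The point is that the regular code cache still only sees a `CodeInfo`, while the external interpreter retains enough state to feed the method into the optimizer later.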

We run into the inverse problem during constant propagation, where
inference would like to peek at the results of optimization in
order to decide whether constant propagation is likely to be
profitable. Of course, if optimization hasn't actually run yet
for this AbstractInterpreter, this doesn't work. Factor out
this logic such that an external interpreter can override this
heuristic. E.g. for my AD interpreter, I'm thinking that just looking
at the vanilla function and checking its complexity would be
a good heuristic (since the AD'd version is supposed to give
the same result as the vanilla function, modulo capturing
some additional state for the reverse pass).
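
For example (again only a sketch with hypothetical names, not code from this PR), an AD interpreter might override the profitability heuristic to measure the size of the vanilla method, since optimized IR for the AD-transformed code may not exist yet:

```julia
# Sketch only: ADInterpreter and const_prop_profitable are assumed
# names standing in for whatever hook the external interpreter overrides.
function const_prop_profitable(interp::ADInterpreter, mi::Core.MethodInstance)
    # Look at the vanilla (untransformed) lowered code of the method.
    ci = Base.uncompressed_ast(mi.def::Method)
    # Cheap complexity proxy: statement count of the lowered code.
    # The AD'd version computes the same result (plus reverse-pass
    # state), so the vanilla method's size is a reasonable stand-in.
    return length(ci.code) <= 32
end
```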

@Keno Keno closed this Jul 22, 2020
@Keno Keno reopened this Jul 22, 2020
@Keno Keno merged commit cf82c19 into master Aug 11, 2020
@Keno Keno deleted the kf/optcache branch August 11, 2020 01:42
simeonschaub pushed a commit to simeonschaub/julia that referenced this pull request Aug 11, 2020