Conversation
sbt#1520 originally fixed sbt#1514 by reimplementing part of the chain resolver to check all resolvers for the latest SNAPSHOT artifacts. This behavior did not work well with Maven repositories, where sbt failed to calculate the correct publication dates. Now that sbt#2075 fixes the Maven integration issue, we should re-enable this flag. Build users can opt out with: `updateOptions := updateOptions.value.withLatestSnapshots(false)`
The "latest snapshot" chain resolver was assuming that there's at least one artifact per module. I think that was the root cause of sbt#1616.
Fixes sbt#1514. Enable latest SNAPSHOT option by default
[0.13.10] Fix bincompat issue introduced in sbt#2268
withInterProjectFirst, when set to true, prioritizes the inter-project resolver over all other resolvers and the Ivy cache. This is aimed at working around the fact that on Maven the Test configuration is considered private, and thus a project dependency with a `test->test` configuration may not resolve under some conditions. The condition is when someone resolves `x:y:1.0`, you have a subproject named and versioned exactly that, and some other subproject tries to depend on it. This happens when the project does not change the version number on GitHub.
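Assuming the flag is exposed on `updateOptions` like the other update flags (a build.sbt sketch, not verified against a specific sbt version), the workaround can be enabled like so:

```scala
// build.sbt sketch (assumes withInterProjectFirst is available on UpdateOptions):
// prefer the inter-project resolver over external resolvers and the Ivy cache,
// so a subproject named/versioned like x:y:1.0 wins over a published artifact.
updateOptions := updateOptions.value.withInterProjectFirst(true)
```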
Maven compatibility changes (intransitive warnings and "configuration not public")
This small set of changes, together with the compiler-bridge I wrote (https://github.com/smarter/dotty-bridge) enables us to compile code using Dotty in sbt, see https://github.com/smarter/dotty-example-project for an example.
Make sbt aware of Dotty
Both continuations and macro-config set scalaVersion explicitly, but since sbt now relies on Scala 2.10 that is no longer needed. In particular, we can upgrade continuations to 2.10, which makes it easier to work with Java 8.
…cleanup Upgrade Scala version in scripted tests
Call `initialize` in case symbol's `info` hadn't been completed during normal compilation. Also, normalize to the class symbol immediately. Add a TODO regarding only looking at class symbols, and thus ignoring the term symbol for objects, as the corresponding class symbol has all the relevant info.
The only aspect of the self variable that's relevant for incremental compilation is its explicitly declared type, and only when it's different from the type of the class that declares it. Technically, any self type that's a super type of the class could be ignored, as it cannot affect external use (instantiation/subclassing) of the class.
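To illustrate (with hypothetical class names): a self type that differs from the declaring class's own type is API-relevant, because external subclassers must satisfy it.

```scala
trait Logger { def log(msg: String): String }

// The self type `Logger` differs from `Service`'s own type, so it is part
// of the class's API: anyone subclassing Service must also mix in Logger.
abstract class Service { self: Logger =>
  def run(): String = log("running")
}

// External use (subclassing) must satisfy the self type.
class ConsoleService extends Service with Logger {
  def log(msg: String): String = "[log] " + msg
}
```

By contrast, a self type that is a supertype of the class adds no constraint an external user could observe, which is why it could in principle be ignored.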
Motivated by wanting to make it more robust and configurable.
Original motivation was to diagnose a cyclic type representation,
likely due to an f-bounded existential type, as illustrated by the following:
```
class Dep {
  // The API representation for `bla`'s result type contains a cycle
  // (an existential's type variable's bound is the existential type itself).
  // This results in a stack overflow while showing the API diff.
  // Note that the actual result type in the compiler is not cyclic
  // (the f-bounded existential for Comparable is truncated).
  def bla(c: Boolean) = if (c) new Value else "bla"
}

class Value extends java.lang.Comparable[Value] { def compareTo(that: Value): Int = 1 }
```
Limit nesting (`-Dsbt.inc.apidiff.depth=N`, where N defaults to `2`),
and the number of declarations shown for a class/structural type
(via `sbt.inc.apidiff.decls`, which defaults to `0` -- no limit).
Limiting nesting is crucial in keeping the size of api diffs of large programs
within a reasonable amount of RAM...
For example, compiling the Scala library, the API diff with nesting at `4`
exhausts 4G of RAM...
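For example, the caps can be passed as JVM system properties when launching sbt (the values here are illustrative):

```shell
# Cap API-diff nesting at 3 levels and show at most 10 declarations
# per class/structural type (0 means no limit).
sbt -Dsbt.inc.apidiff.depth=3 -Dsbt.inc.apidiff.decls=10 compile
```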
For refinement types, the Structure was already restricted
to declarations (and not inherited members), but all base types
were still included for a refinement's parents, which would
create unwieldy, and even erroneous (cyclic) types by expanding
all constituents of an intersection type to add all base types.
Since the logic already disregarded inherited members, it seems
logical to only include direct parents, and not all ancestor types.
```
class Dep {
  def bla(c: Boolean) = if (c) new Value else "bla"
}

class Value extends java.lang.Comparable[Value] { def compareTo(that: Value): Int = 1 }
```
Specialize the two implementations for each value of the `inherit` boolean argument. Also use a more direct way of distinguishing declared and inherited members. Keeps backwards compatibility for source-dependencies/inherited-dependencies.
Also a bit more complete: handle SelectFromTypeTree, consider the self type an inheritance dependency, and flatten any refinement types in inherited types to get to the symbols of their parents, instead of the useless symbol of the refinement class. Include inheritance dependencies in regular ones. Also, update the test to reflect that the self type is now seen as an inheritance dependency. Self types are local, so don't treat them like inherited types. Note that inheritanceSymbols dealiases, whereas allSymbols is constructed differently. Fix NPE in source-dependencies/macro-annotation.
Since pickled annotated types and symbols only mention static annotations, whereas compilation from source sees all annotations, we must explicitly filter annotations in the API representation using the same criteria as the pickler, so that we generate the same API when compiling from source as when we're loading classfiles.
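A minimal sketch of the idea, using a simplified, hypothetical model of annotation info (the real code works on the compiler's symbols and applies the pickler's actual criterion):

```scala
// Simplified stand-in for the compiler's annotation representation.
case class AnnotationInfo(name: String, isStatic: Boolean)

// Keep only the annotations the pickler would retain, so the API
// extracted from source matches the API loaded from classfiles.
def staticAnnotations(annots: List[AnnotationInfo]): List[AnnotationInfo] =
  annots.filter(_.isStatic)
```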
Further categorize debug output as api diff ([diff]), and invalidation ([inv]). Since we log a ton of diagnostics, make it a bit easier to find the relevant bits.
This commit enables control of whether a compiler instance should be reused between compiling groups of Scala source files. Check comments in the code for why this can be useful to control.
Add a pending test that shows a problem with instability of representing self variables. This test covers the bug described in sbt#2504. In order to test API representation of a class declared either in source file or unpickled from a class file, ScalaCompilerForUnitTesting has been extended to extract APIs from multiple compiler instances sharing a classpath.
The reason for the instability is a bit tricky, so let's unpack what the previous code for detecting a declared self type was doing. It would check whether `thisSym` of a class is equal to the symbol representing the class. If that's true, we know there's no self type. If it's false, then `thisSym` represents either a self type or a self variable. The second (type) test was supposed to check whether the type of `thisSym` is different from the type of the class. However, it would always yield false because the TypeRef of `thisSym` was compared to the ClassInfoType of the class. So if you had a self variable, the logic would see a self type (and that's what the API representation would give you). Now the tricky bit: `thisSym` is not pickled when it represents just a self variable, because a self variable doesn't affect other classes referring to the class. If you looked at the type after unpickling, the symbol equality test would yield true and we would not see a self type when just a self variable was declared. The fix is to compare type refs on both sides of the type equality check. This makes the pending test pass. Also, I added another test that checks whether self types are represented correctly in various combinations of declaring a self variable and/or self type. Fixes sbt#2504.
Per review request, add two more test cases for self variable (no self type).
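The combinations under test look roughly like this in source (hypothetical names):

```scala
trait T

// Self variable only: not pickled, and should not appear as a
// self type in the API representation.
class A { self =>
  def me: A = self
}

// Self type only.
class B { self: T => }

// Self variable and self type together.
class C { me: T =>
  def it: T = me
}
```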
Dependency descriptors that are created from POM files do not specify their artifacts. In those cases, it is correct to create a default artifact.
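A sketch of that fallback with simplified, hypothetical types (the real code works on Ivy's DependencyDescriptor and Artifact):

```scala
case class Artifact(name: String, `type`: String, ext: String)
case class DependencyDescriptor(moduleName: String, artifacts: List[Artifact])

// POM-derived descriptors often list no artifacts; fall back to a
// default jar artifact named after the module.
def artifactsOf(dd: DependencyDescriptor): List[Artifact] =
  if (dd.artifacts.nonEmpty) dd.artifacts
  else List(Artifact(dd.moduleName, "jar", "jar"))
```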
Fix instability of self variable API representation
Deprecate the Build trait
Some scripted tests override the default local repository, which produces errors with the compiler because sbt cannot find the sources for the compiler interface. This fix proposes to publish the compiler interface sources to an alternate local repository before running the scripted tests. This alternate repository is added to the scripted sbt's configuration, so that sbt is finally able to find the compiler interface sources.
Publish compiler interface to an alternate local repository for Scripted tests
Provides a workaround flag `incOptions := incOptions.value.withIncludeSynthToNameHashing(true)` for name hashing not including synthetic methods. This will not be enabled by default in sbt 0.13. It can also be enabled by passing `sbt.inc.include_synth=true` to the JVM.
Adds withIncludeSynthToNameHashing.
Add toString to IvyPaths for debugging purposes
Unlike other scripted macro tests, the call site of this macro is `Provider.tree(0)`, which does not introduce an internal member reference. Instead, the macro itself calls `Bar.bar(0)`. Due to sbt#2560, the expanded tree is not traversed, and thus the reference to `Bar` is not caught during incremental compilation.
traverse(tree: Tree) used to call super.traverse(tree) at the end. sbt/sbt@0f61629 moved the traversal call inside the pattern match. For the MacroExpansionOf(original) case, this amounts to not traversing the macro-expanded code. See sbt/src/sbt-test/source-dependencies/macro-nonarg-dep for the repro.
Fixes incremental compiler missing member ref from macro expansion sbt#2560
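A simplified model of the bug (hypothetical tree types; the real code is a compiler Traverser matching MacroExpansionOf):

```scala
sealed trait Tree
case class Ref(name: String, children: List[Tree] = Nil) extends Tree
case class MacroExpansion(original: Tree, expanded: Tree) extends Tree

// Collect referenced names. The buggy traversal only walked `original`,
// missing references (like Bar.bar) that exist only in the expansion.
def collectRefs(t: Tree, walkExpanded: Boolean): List[String] = t match {
  case Ref(n, cs) => n :: cs.flatMap(collectRefs(_, walkExpanded))
  case MacroExpansion(orig, exp) =>
    val roots = if (walkExpanded) List(orig, exp) else List(orig)
    roots.flatMap(collectRefs(_, walkExpanded))
}
```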
This is a backport of sbt/zinc#87 When `B2.scala` replaces `B.scala` in the new test `types-in-used-names-a`, the name hash of `listb` does not change because the signature of `C.listb` is still `List[B]`, however users of `C.listb` have to be recompiled since the subtyping relationships of its type have changed. This commit does this by extending the definition of "used names" to also include the names of the types of trees, even if these types do not appear in the source like `List[B]` in `D.scala` (since `B` has been invalidated, this will force the recompilation of `D.scala`). This commit does not fix every issue with used types as illustrated by the pending test `types-in-used-names-b`, `B.scala` is not recompiled because it uses the type `T` whose hash has not changed, but `T` is bounded by `S` and `S` has changed, so it should be recompiled. This should be fixable by including the type bounds underlying a `TypeRef` in `symbolsInType`. The test `as-seen-from-a` that did not work before shows that we may not have to worry about tracking prefixes in `ExtractAPI` anymore, see the discussion in sbt/zinc#87 for more information.
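The idea can be sketched with a toy type representation (hypothetical; the real change lives in the compiler phase extracting used names):

```scala
sealed trait Type
case class TypeRef(name: String, args: List[Type] = Nil) extends Type

// Collect every name occurring in a tree's type, so that e.g. using
// C.listb (of type List[B]) records both "List" and "B" as used names,
// even though neither appears literally in the source.
def namesInType(tpe: Type): Set[String] = tpe match {
  case TypeRef(n, args) => args.flatMap(namesInType).toSet + n
}
```

Extending this to the bounds underlying a `TypeRef` is what the pending test `types-in-used-names-b` would require.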
eed3si9n pushed a commit that referenced this pull request on Apr 5, 2019
This is a huge refactor of Watched. I produced this through multiple rewrite iterations and it was too difficult to separate all of the changes into small individual commits, so I, unfortunately, had to make a massive commit. In general, I have tried to document the source code extensively, both to facilitate reading this commit and to help with future maintenance. These changes are quite complicated because they provide a built-in-like api for a feature that is implemented like a plugin. In particular, we have to manually do a lot of parsing as well as roll our own task/setting evaluation, because we cannot infer the watch settings at project build time: we do not know a priori what commands the user may watch in a given session. The dynamic setting and task evaluation is mostly confined to the WatchSettings class in Continuous. It feels dirty to do all of this extraction by hand, but it does seem to work correctly with scopes.

At a high level this commit does four things:
1) migrates the watch implementation to using the InputGraph to collect the globs that it needs to monitor during the watch
2) simplifies WatchConfig to make it easier for plugin authors to write their own custom watch implementations
3) allows configuration of the watch settings based on the task(s) being run
4) adds an InputTask implementation of watch

Point #1 is mostly handled by Point #3, since I had to overhaul how _all_ of the watch settings are generated. InputGraph already handles both transitive inputs and triggers as well as legacy watchSources, so not much additional logic is needed beyond passing the correct scoped keys into InputGraph.

Point #3 required some structural changes. The watch settings cannot in general be defined statically because we don't know a priori what tasks the user will try to watch. To address this, I added code that extracts the task keys for all of the commands that we are running. I then manually extract the relevant settings for each command. Finally, I aggregate those settings into a single WatchConfig that can be used to actually implement the watch. The aggregation is generally straightforward: we run all of the callbacks for each task and choose the next watch state based on the highest-priority Action returned by any of the callbacks. Because I needed Extracted to pull out the necessary settings, I was forced to move a lot of logic out of Watched and into a new singleton, Continuous, that exists in the main project (Watched is in the command project). The public footprint of Continuous is tiny. Even though I want to make the watch feature flexible for plugin authors, the implementation and api remain a moving target, so I do not want to be limited by future binary compatibility requirements. Anyone who wants to live dangerously can access the private[sbt] apis via reflection or by adding custom code to the sbt package in their plugin (a technique I've used in CloseWatch).

Point #2 is addressed by removing the count and lastStatus from the WatchConfig callbacks. While these parameters can be useful, they are not necessary to implement the semantics of a watch. Moreover, a status boolean isn't really that useful, and the sbt task engine makes it very difficult to actually extract the previous result of the tasks that were run. After this refactor, WatchConfig has a simpler api: there are fewer callbacks to implement and the signatures are simpler. To preserve the _functionality_ of making the count accessible to the user-specifiable callbacks, I still provide settings like watchOnInputEvent that accept a count parameter, but the count is actually tracked externally to Watched.watch and incremented every time the task is run. Moreover, there are a few parameters of the watch, the logger and transitive globs, that cannot be provided via settings. I provide callback settings like watchOnStart that mirror the WatchConfig callbacks, except that they return a function from Continuous.Arguments to the needed callback. The Continuous.aggregate function checks whether the watchOnStart setting is set and, if it is, passes in the needed arguments. Otherwise it uses the default watchOnStart implementation, which simulates the existing behavior by tracking the iteration count in an AtomicInteger and passing the current count into the user-provided callback. In this way, we are able to provide a number of apis to the watch process while preserving the default behavior.

To implement #4, I had to change the label of the `watch` attribute key from "watch" to "watched". This allows `watch compile` to work at the sbt command line even though it maps to the watchTasks key. The actual implementation is almost trivial: the difference between an InputTask[Unit] and a command is very small. The tricky part is that the actual implementation requires applying mapTask to a delegate task that overrides the Task's info.postTransform value (which is used to transform the state after task evaluation). The actual postTransform function can be shared by the continuous task and the continuous command; there is just a slightly different mechanism for getting to the state transformation function.
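The aggregation step described above can be sketched like this (hypothetical Action values and priorities; the real ones live in the watch api):

```scala
// Each watched task's callback returns an Action; the watch loop picks
// the highest-priority action as its next state.
sealed abstract class Action(val priority: Int)
case object CancelWatch extends Action(3)
case object Reload extends Action(2)
case object Trigger extends Action(1)
case object Ignore extends Action(0)

def aggregate(actions: Seq[Action]): Action = actions.maxBy(_.priority)
```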
eed3si9n pushed a commit that referenced this pull request on Jul 25, 2020
There were three issues with the server tests on windows: 1) sometimes client connections fail on the first attempt but succeed with retries; 2) it is possible for named pipes to leak, which causes subsequent tests to be unable to run; 3) ClientTest did not handle lines with carriage returns correctly. Issue #1 is fixed with retries. Issue #2 is fixed by using a unique temp directory for each test run. Issue #3 was fixed by simplifying the output stream cache to only cache the bytes returned and then letting scala do the line splitting for us in linesIterator. After these changes, all of the server tests work on appveyor.
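The retry fix for issue #1 amounts to something like this (hypothetical helper; the real tests wrap the client connection attempt):

```scala
// Retry a flaky action up to `attempts` times, rethrowing on the final failure.
def withRetries[A](attempts: Int)(action: => A): A = {
  var remaining = attempts
  while (true) {
    try return action
    catch { case _: Exception if remaining > 1 => remaining -= 1 }
  }
  sys.error("unreachable")
}
```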
eed3si9n added a commit that referenced this pull request on Sep 19, 2024
It's a thin layer around custom, but specifically created for cross library building. Ref #2