SI-5294 SI-6161 Hard graft in asSeenFrom, refinements, and existentials [ci: last-only] #5263
retronym merged 9 commits into scala:2.12.x
Conversation
Rework of the rework in #5041.
Review by @adriaanm.
elided += r
else
result += r
()
The improvements to LUBs now compute a sharper LUB for this expression, but that fails in bound checks. Adding () to avoid the LUB getting refchecked.
this looks undesirable - can you give a minimized example?
See the comments in the lub-from-hell*.scala tests. We actually had similar cases that failed before this change, this one "accidentally" worked thanks to a bug.
Here's a minimal example of the pattern of an unlubbable pair of types:
scala> trait T[A, +Repr <: T[A, Repr]]; class U extends T[Int, U]; class V extends T[Char, V]
defined trait T
defined class U
defined class V
scala> trait T[A, +Repr <: T[A, Repr]]; class U extends T[Int, U]; class V extends T[Char, V]; val x = List(new U, new V)
<console>:14: error: type arguments [?,T[_ >: Char with Int <: AnyVal, Object]] do not conform to trait T's type parameter bounds [A,+Repr <: T[A,Repr]]
trait T[A, +Repr <: T[A, Repr]]; class U extends T[Int, U]; class V extends T[Char, V]; val x = List(new U, new V)
^
I think the actual LUB should look more like:
```
T[_1, UV] forSome {
  type _1 >: Int with Char <: AnyVal
  type UV >: U with V <: T[_1, UV]
}
```

```
scala> val x = List[T[_1, UV] forSome { type _1 >: Int with Char <: AnyVal; type UV >: U with V <: T[_1, UV]}](new U, new V)
warning: there was one feature warning; re-run with -feature for details
x: List[T[_1,UV] forSome { type _1 >: Int with Char <: AnyVal; type UV >: U with V <: T[_1,UV] }] = List(U@319aa9ee, V@1d806de6)
```
OK, thanks - I hope it's not too common in user code that the bugfix unshadows an invalid LUB.
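One use-site remedy when an invalid lub is unshadowed is to avoid lub inference altogether by ascribing the element type. A minimal sketch reusing the `T`/`U`/`V` definitions from the example above (the wildcard ascription is my own suggestion, not from the discussion):

```scala
trait T[A, +Repr <: T[A, Repr]]
class U extends T[Int, U]
class V extends T[Char, V]

// Ascribing the expected type means the compiler never has to infer a lub
// for the F-bounded pair U and V; each element is just checked against T[_, _].
val xs: List[T[_, _]] = List(new U, new V)
println(xs.size)
```

This loses the precise element type, but keeps such code compiling without relying on a buggy lub.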
% diff /home/jenkins/workspace/scala-2.12.x-validate-test/test/files/neg/lub-from-hell-2-neg.log /home/jenkins/workspace/scala-2.12.x-validate-test/test/files/neg/lub-from-hell-2.check
@@ -1,7 +1,7 @@
lub-from-hell-2.scala:3: error: type arguments [Any,Iterable[Any] with Int => Any with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any]{def seq: Iterable[Any] with Int => Any}]{def seq: Iterable[Any] with Int => Any{def seq: Iterable[Any] with Int => Any}}] do not conform to trait Subtractable's type parameter bounds [A,+Repr <: scala.collection.generic.Subtractable[A,Repr]]
def foo(a: Boolean, b: collection.mutable.Set[Any], c: collection.mutable.ListBuffer[Any]) = if (a) b else c
^
-lub-from-hell-2.scala:4: error: type arguments [Any,scala.collection.mutable.Iterable[Any] with scala.collection.mutable.Cloneable[scala.collection.mutable.Iterable[Any] with scala.collection.mutable.Cloneable[scala.collection.mutable.Iterable[Any] with Cloneable with Int => Any] with Int => Any{def seq: scala.collection.mutable.Iterable[Any] with Cloneable with Int => Any}] with scala.collection.generic.Growable[Any] with Int => Any with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any]{def seq: Iterable[Any] with Int => Any}] with scala.collection.generic.Shrinkable[Any] with scala.collection.script.Scriptable[Any]] do not conform to trait Subtractable's type parameter bounds [A,+Repr <: scala.collection.generic.Subtractable[A,Repr]]
+lub-from-hell-2.scala:4: error: type arguments [Any,scala.collection.mutable.Iterable[Any] with scala.collection.mutable.Cloneable[scala.collection.mutable.Iterable[Any] with scala.collection.mutable.Cloneable[scala.collection.mutable.Iterable[Any] with Cloneable with Int => Any] with Int => Any{def seq: scala.collection.mutable.Iterable[Any] with Cloneable with Int => Any}] with scala.collection.generic.Growable[Any] with Int => Any with scala.collection.generic.Shrinkable[Any] with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any with scala.collection.generic.Subtractable[Any,Iterable[Any] with Int => Any]{def seq: Iterable[Any] with Int => Any}] with scala.collection.script.Scriptable[Any]] do not conform to trait Subtractable's type parameter bounds [A,+Repr <: scala.collection.generic.Subtractable[A,Repr]]
def bar(a: Boolean, b: scala.collection.mutable.SetLike[Any,scala.collection.mutable.Set[Any]], c: scala.collection.mutable.Buffer[Any]) = if (a) b else c
^
two errors found
/rebuild

/rebuild (some failures were due to JVM crashes caused by lack of memory on the build server)

Alas, this will need a rebase.

Rebased
// with the same base type depth.
//
// Notably, this will stably infer `Product with Serializable`
// as the type of `case class C(); case class D(); List(C(), D()).head`, rather than the opposite order.
Here's a minimization of the Slick failure in the community build:

    trait OuterA[Inner_] { type Inner = Inner_ }
    trait OuterB { type Inner }
    abstract class Test1 {
      val o: OuterA[_]
      def foo(a: Any): o.Inner
      val etaExpanded: Any => o.Inner = foo // used to be disallowed, now disallowed (found: Any => Any, required: ...)
    }
    abstract class Test2 {
      def test(o: OuterA[_]) {
        def foo(a: Any): o.Inner = ???
        val etaExpanded: Any => o.Inner = foo // used to be allowed, now disallowed (found: Any => Any, required ...)
      }
    }
    abstract class Test3 {
      def test(o: OuterB) {
        def foo(a: Any): o.Inner = ???
        val etaExpanded: Any => o.Inner = foo // used to be disallowed, now disallowed (method with dependent type (a: Any)o.Inner cannot be converted to function value)
      }
    }
On Linux, the directory listing is not automatically sorted, as it is on Mac. This leads to non-deterministic ids of the Symbols of the classes in a directory, which in turn leads to instability in the ordering of parents within inferred refinement types. Notably, with this patch, we will stably infer:

```
scala> case class C(); case class D(); List(C(), D()).head
defined class C
defined class D
res0: Product with Serializable = C()
```

rather than sometimes getting `Serializable with Product` on Linux. As such, I've removed the workarounds for this instability in two test cases.
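The stabilized inference can be exercised directly with a small self-contained check:

```scala
case class C()
case class D()

// The inferred element type of this lub is now stably `Product with Serializable`
// (in that parent order) on every platform; both parents are present, so this
// ascription compiles regardless of how the refinement's parents are ordered.
val h: Product with Serializable = List(C(), D()).head
assert(h == C())
```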
Usually, `contains` should not look into class symbol infos.
For instance, we expect that:
```
scala> trait C { def foo: Int }; typeOf[C].contains(IntClass)
defined trait C
res1: Boolean = false
```
We do, however, look at the decls of a `RefinedType` in contains:
```
scala> typeOf[{ def foo: Int }].contains(IntClass)
res2: Boolean = true
```
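The same distinction is observable through the public reflection API. A sketch using the runtime universe (`HasFoo` is my own stand-in for the `C` of the REPL session above):

```scala
import scala.reflect.runtime.universe._

trait HasFoo { def foo: Int }

// The info of a proper class symbol is opaque to `contains`: Int occurs only
// inside HasFoo's declarations, which are not searched.
assert(!typeOf[HasFoo].contains(definitions.IntClass))

// The declarations of a structural refinement, by contrast, are searched.
assert(typeOf[{ def foo: Int }].contains(definitions.IntClass))
```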
Things get a little vague, however, when we consider a type ref
to the refinement class symbol of a refined type.
```
scala> TypeRef(NoPrefix, typeOf[{ def foo: Int }].typeSymbol, Nil)
res3: $r.intp.global.Type = AnyRef{def foo: Int}
scala> .contains(IntClass)
res4: Boolean = false
```
These show up in the first element of the base type seq of a refined
type, e.g.:
```
scala> typeOf[{ def foo: Int }].typeSymbol.tpe_*
res5: $r.intp.global.Type = AnyRef{def foo: Int}
scala> typeOf[{ def foo: Int }].baseTypeSeq(0).getClass
res7: Class[_ <: $r.intp.global.Type] = class scala.reflect.internal.Types$RefinementTypeRef
scala> typeOf[{ def foo: Int }].typeSymbol.tpe_*.getClass
res6: Class[_ <: $r.intp.global.Type] = class scala.reflect.internal.Types$RefinementTypeRef
```
This commit takes the opinion that a `RefinementTypeRef` should be
transparent with respect to `contains`. This paves the way for fixing
the base type sequences of existential types over refinement types.
The implementation of `ContainsCollector` was already calling
`normalize`, which goes from `RefinementTypeRef` to `RefinedType`.
This commit maps over the result, which looks in the parents and
decls.
Looks like there's another instance of the implicit problem in specs2 3.6:
Lazy base type seq elements are encoded as a refined type with
an empty scope and a list of type refs over some common type
symbol that will be merged when `BaseTypeSeq#apply` is called.
The first change in this commit is to mark the creation and consumption
of such elements with calls to `[is]IntersectionTypeForBaseTypeSeq`. They
are distinguished by using the actual type symbol rather than a refinement
class symbol, which in turn simplifies the code in
`BaseTypeSeq#typeSymbol`.
I have also made `lub` aware of this encoding: it is now able to "see through"
to the parents of such refined types and merge them with other base types
of the same class symbol (even other refined types representing lazy BTS
elements.)
To make this fix work, I also had to fix a bug in LUBs involving
existential types. Because of the way the recursion was structured in
`mergePrefixAndArgs`, the order of the list of types being merged changed
behaviour: quantified variables of existential types were being rewrapped
around the resulting type, but only if we hadn't encountered the first
regular `TypeRef`.
This can be seen with the following before/after shot:
```
// 2.11.8
scala> val ts = typeOf[Set[Any]] :: typeOf[Set[X] forSome { type X <: Y; type Y <: Int}] :: Nil; def merge(ts: List[Type]) = mergePrefixAndArgs(ts, Variance.Contravariant, lubDepth(ts)); val merged1 = merge(ts); val merged2 = merge(ts.reverse); (ts.forall(_ <:< merged1), ts.forall(_ <:< merged2))
ts: List[$r.intp.global.Type] = List(Set[Any], Set[_ <: Int])
merge: (ts: List[$r.intp.global.Type])$r.intp.global.Type
merged1: $r.intp.global.Type = scala.collection.immutable.Set[_ >: Int]
merged2: $r.intp.global.Type = scala.collection.immutable.Set[_53] forSome { type X <: Int; type _53 >: X }
res0: (Boolean, Boolean) = (false,true)
// HEAD
...
merged1: $r.intp.global.Type = scala.collection.immutable.Set[_10] forSome { type X <: Int; type _10 >: X }
merged2: $r.intp.global.Type = scala.collection.immutable.Set[_11] forSome { type X <: Int; type _11 >: X }
res0: (Boolean, Boolean) = (true,true)
```
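The order independence can also be checked through the public `lub` of the reflection API, the user-facing analogue of `mergePrefixAndArgs`. A sketch, assuming a compiler that includes this fix:

```scala
import scala.language.existentials
import scala.reflect.runtime.universe._

val ts = List(typeOf[Set[Any]], typeOf[Set[X] forSome { type X <: Int }])

// With the recursion fixed, both orderings of the inputs produce a lub
// that each input conforms to.
val merged1 = lub(ts)
val merged2 = lub(ts.reverse)
assert(ts.forall(_ <:< merged1) && ts.forall(_ <:< merged2))
```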
Furthermore, I have fixed the computation of the base type sequences of
existential types over refinement types, in order to maintain the invariant
that each slot of the base type sequence of an existential has the same
type symbol as that of its underlying type. Before, what I've now called
a `RefinementTypeRef` was transformed into a `RefinedType` during
rewrapping in the existential, which led to it being wrongly considered as
a lazy element of the base type sequence. The first change above should
also be sufficient to avoid the bug, but I felt it was worth cleaning up
`maybeRewrap` as an extra line of defence.
Finally, I have added another special case to `BaseTypeSeq#apply` to
be able to lazily compute elements that have been wrapped in an existential.
The unit test cases in `TypesTest` rely on these changes. A subsequent commit
will build on this foundation to make a fix to `asSeenFrom`.
ASF was failing to recognize the correspondence between a prefix and
the currently considered class when the prefix has an abstract type
symbol, even if that symbol is bounded by the class.
Distilling the test cases, this led to incorrect typechecking of the
RHS of `G` in:
```
trait T {
type A
trait HasH { type H[U] <: U }
type F[N <: HasH] = N#H[T]
type G[N <: HasH] = F[N]#A // RHS was incorrectly reduced to T.this.A
}
```
In the fuller examples (included as test cases), this meant that
type level functions written as members of `HList` could not be
implemented in terms of each other, e.g. defining `Apply[N]` as
`Drop[N]#Head` had the wrong semantics.
This commit checks whether the prefix has the candidate class
as a base type, rather than checking whether its type symbol has this
as a base class. The latter formulation discarded information about
the instantiation of the abstract type.
Using the example above:
```
scala> val F = typeOf[T].member(TypeName("F")).info
F: $r.intp.global.Type = [N <: T.this.HasH]N#H[T]
scala> F.resultType.typeSymbol.baseClasses // old approach
res14: List[$r.intp.global.Symbol] = List(class Any)
scala> F.resultType.baseClasses // new approach
res13: List[$r.intp.global.Symbol] = List(trait T, class Object, class Any)
```
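The same comparison can be made with the public reflection API rather than compiler internals. A sketch (`T2` and the use of `finalResultType` are my adaptation of the session above):

```scala
import scala.reflect.runtime.universe._

trait T2 {
  trait HasH { type H[U] <: U }
  type F[N <: HasH] = N#H[T2]
}

val F = typeOf[T2].member(TypeName("F")).info

// Asking the type symbol of N#H[T2] (the abstract type H) for base classes
// loses the bound; asking the type itself recovers trait T2 from the bound.
val bases = F.finalResultType.baseClasses.map(_.name.toString)
assert(bases.contains("T2"))
```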
It is worth noting that dotty rejects some of these programs,
as it introduces the rule that:
> // A type T is a legal prefix in a type selection T#A if
> // T is stable or T contains no abstract types except possibly A.
> final def isLegalPrefixFor(selector: Name)(implicit ctx: Context)
However, typechecking the program above in dotty yields:
<trait> trait T() extends Object {
type A
<trait> trait HasH() extends Object {
type H <: [HK$0] => <: HK$0
}
type F = [HK$0] => HK$0#H{HK$0 = T}#Apply
type G = [HK$0] => HK$0#H{HK$0 = T}#Apply#A
}
This is because the equivalent code [1] in dotc's `asSeenFrom` already
looks for a base type of the prefix, rather than a superclass of the
prefix's type symbol.
[1] https://github.com/lampepfl/dotty/blob/d2c96d02fccef3a82b88ee1ff31253b6ef17f900/src/dotty/tools/dotc/core/TypeOps.scala#L62
- clarify the intent of tests
- Consolidate stripExistentialsAndTypeVars with similar logic in
mergePrefixAndArgs
- Refactor special cases in maybeRewrap
The name isn't great, but I'm struggling to come up with
a pithy way to describe the rogue band of types.
I rebased this to move the commits that I understood to be problematic ("Backwards compatibility with normalizing contains" and "Remove normalization from Type#contains") to the end, and am running a community build on HEAD^^ of that branch (Improve RefinementTypeRef#normalize).
Green!

@adriaanm Thanks! I've dropped those commits and left a breadcrumb in scala/scala-dev#207 to pick up the investigation again.
LGTM |
We'll just have to live with manifests not being able to resolve the difference between `scala.List[_]` and `scala.List[Any]` (dealiasing causes rewrapping causes existential extrapolation). They are equivalent types after all (and manifests are deprecated). A type tag will be more precise. This commit reverts scala#1900, which caused this regression. Recently, scala#5263 improved the situation a bit, but showed that the original tweak was not a good idea (too many exceptions). Also, I doubt using `=:=` will improve performance, even if it reduces allocation a bit, as it's far more expensive than `eq`...
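The equivalence claimed here is just covariance at work; a quick check with the reflection API:

```scala
import scala.reflect.runtime.universe._

// List is covariant, so the existential List[_] and List[Any] are mutual
// subtypes: equivalent types that a manifest has no way to tell apart.
val wild = typeOf[List[_]]
val any  = typeOf[List[Any]]
assert(wild <:< any)
assert(any <:< wild)
```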
Fixing SI-6161 / SI-5954 seemed like it would be straightforward, after I had pinpointed a logical error in
`asSeenFrom`: a type seen from a prefix with type arguments can have more base classes than `tp.typeSymbol.baseClasses`, and `asSeenFrom` was only considering the latter when matching the prefix. However, the fix uncovered a number of latent bugs related to existential and refinement types, which I needed to fix first.
- Marked the creation and consumption of lazy base type sequence elements with `[is]IntersectionTypeForLazyBaseTypeElement`. Also changed: the type symbol shared by all parents is used as the type symbol for the `RefinedType`, avoiding the one special case in `BaseTypeSeq#typeSymbol`.
- `mergePrefixAndArgs` needs to be aware of this idiom. I also found it was unreliable (different answers based on the order of the list of types) with existentials.
- Fixed `asSeenFrom` to compile t6161b.scala, without triggering regressions elsewhere.

I've checked t6161b.scala in dotc. It compiles that already, presumably because it is using `pre.baseTypeRef(thiscls).exists` in `toPrefix`.

Our `matchesPrefixAndArgs` still contains more special cases than I would like, related to the inconsistency of the base type sequence of `ThisType` (noted as SD#166), the fact that the base type sequence of `TypeVar` does not contain the bounds of the originating abstract type, and a tricky issue with aliases + refinement types.