With lazy alias relate, we no longer (incompletely) infer alias args when aliases aren't normalized to infer vars. That causes this code to (rightfully, IMO) fail:
```rust
trait Foo {
    type Assoc<'a>;
}

fn foo<'a, T: Foo>(_: <T as Foo>::Assoc<'a>) {}

fn test<'a, T: Foo>() {
    let y: fn(<T as Foo>::Assoc<'a>) = foo;
}
```
I'd like to prevent this from being more of an issue, but nothing comes to mind for a good scheme to weaken this inference in the old solver. Ideas?