Duplicated record names in distinct types #4172
Description
Original bug ID: 4172
Reporter: acone
Assigned to: @pierreweis
Status: closed (set by @alainfrisch on 2014-02-17T15:44:20Z)
Resolution: fixed
Priority: normal
Severity: feature
Category: ~DO NOT USE (was: OCaml general)
Related to: #5525
Bug description
In OCaml, the following causes problems:
type t1 = {a:int; b:int}
type t2 = {a:string; c:string}
let make_t1() = {a=5; b=10}
It's clear to our eyes that make_t1 should return a t1, not a t2, but the compiler assumes that it returns a t2. It does this because I've used a label that's bound in t2 (namely a), and t2 was declared later. So this code fails to type-check.
I understand that the compiler does not try to infer which of t1 or t2 is meant because this requires exponential time. I admit that exponential time is unacceptable, but I feel there are better solutions than simply defaulting to the most recently declared type.
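For reference, the annotation workaround available today can be sketched as follows (the type and field names are taken from the report; the explicit return-type annotation is what resolves the labels, and type-directed label disambiguation, added in OCaml 4.01, later addressed this class of problem directly):

```ocaml
type t1 = {a : int; b : int}
type t2 = {a : string; c : string}

(* An explicit return-type annotation tells the compiler to resolve
   the labels a and b against t1, even though t2 shadows label a. *)
let make_t1 () : t1 = {a = 5; b = 10}

let () =
  let r = make_t1 () in
  Printf.printf "a = %d, b = %d\n" r.a r.b
```

Without the `: t1` annotation, the compiler of the time resolved `a` to the most recently declared type, t2, and rejected the body.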
- (the slightly less crappy solution) warn the user whenever a label in a record type declaration shadows a label in a previous record type declaration.
- (the solution I'd prefer) whenever an expression's type is not uniquely determined by its labels, require the user to state its type explicitly.
For example, the following would be allowed:
type t1 = {a:int; b:int}
type t2 = {a:string; b:string}
let print_t1 (x : t1) = Printf.printf "(%d, %d)" x.a x.b
But the following would produce an error message like "Expression type cannot be determined from its labels. Please state its type explicitly.":
type t1 = {a:int; b:int}
type t2 = {a:string; b:string}
let print_t1 x = Printf.printf "(%d, %d)" x.a x.b
It should not insist, as it currently does, that I wanted x to be a t2.
Additional information
As it stands, the lack of this feature is a reason I sometimes avoid Caml. The standard workaround, prefixing each field name with the type's name, results in ugly, unreadable code.
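A minimal sketch of the prefixing workaround described above (the prefixed field names here are illustrative, not from the report):

```ocaml
(* Give every field a type-specific prefix so labels never collide;
   this is the verbose style the report argues against. *)
type t1 = {t1_a : int; t1_b : int}
type t2 = {t2_a : string; t2_c : string}

let make_t1 () = {t1_a = 5; t1_b = 10}
```

Because no label is shared between t1 and t2, inference needs no annotations, at the cost of noisier field accesses like `r.t1_a`.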
The fix I'm proposing would not affect the computational complexity of type inference. It would simply require explicit typing for the expressions that actually need it.