inheritance: is circle an ellipse?
Jan. 24th, 2011 11:46 am
It's a well-known puzzle from the depths of object-oriented teaching: is Circle a subclass of Ellipse? From school we know it is; but, as we read in books, for programmers it is not. How come? Oh, the "Liskov substitution principle". They use the Liskov principle to scare kids.
Actually we know that a circle is an ellipse with both axes being the same length.
But in programming, an ellipse is something different. It is at least stretchable. And it is also movable. Somehow it is not rotatable, but we can skip that for simplicity.
An ellipse in programming is (almost) always horizontal; it can be moved (by changing the coordinates of its center), and it can be stretched or compressed by changing its height and width. Surely a stretchable ellipse is neither an ellipse nor a circle; it is not even a stretchable circle. A stretchable circle stretches equally in all directions.
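To make that concrete, here is a minimal sketch in OCaml (the class and method names are my own illustration, not anyone's canonical API). Because a stretchable circle's stretch takes one scale factor while a stretchable ellipse's takes two, the circle's object type is not a subtype of the ellipse's, and OCaml's structural object types make the mismatch explicit:

```ocaml
class ellipse (w : float) (h : float) = object
  val mutable width = w
  val mutable height = h
  val mutable cx = 0.0
  val mutable cy = 0.0
  method move dx dy =
    cx <- cx +. dx;
    cy <- cy +. dy
  (* independent scale factors along the two axes *)
  method stretch sx sy =
    width <- width *. sx;
    height <- height *. sy
end

class circle (r : float) = object
  val mutable radius = r
  val mutable cx = 0.0
  val mutable cy = 0.0
  method move dx dy =
    cx <- cx +. dx;
    cy <- cy +. dy
  (* a circle can only scale equally in all directions *)
  method stretch s =
    radius <- radius *. s
end

(* [stretch] has type [float -> float -> unit] in [ellipse] but
   [float -> unit] in [circle], so the coercion below is rejected:

     let e = (new circle 1.0 :> ellipse)   (* type error *)
*)
```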
That's it, no? Questions? Objections?
no subject
Date: 2011-01-24 10:56 pm (UTC)
I do not see the importance of substitutability, but isn't it true that some functors are contravariant?
no subject
Date: 2011-01-25 01:33 am (UTC)
Sure; it's just that if one wants a system where subtypes are subsets and functions are extensional ((f, g : A -> B, and for all x from A, f(x) = g(x)) implies f = g), then such a system cannot be contravariant (assuming I have not made a mistake when "proving" that).
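Here is one way that argument might go, reconstructed under the reading that an extensional function is identified with its graph over a fixed domain (my reconstruction, not the commenter's proof):

```latex
\[
  f \in A \to B \iff \operatorname{dom}(f) = A \ \text{and}\ f(A) \subseteq B
\]
Assume $A' \subsetneq A$ and contravariance: $(A \to B) \subseteq (A' \to B)$.
For any $f \in A \to B$, extensionality identifies $f$ with its graph,
whose domain is $A$; but every $g \in A' \to B$ has domain $A' \neq A$,
so $f \neq g$ for all such $g$, i.e.\ $f \notin A' \to B$, a contradiction.
```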
> I do not see the importance of substitutability
I think it is quite essential in justifying the rules of various mainstream systems of typed lambda calculus (e.g., the parametric polymorphism of ML and similar systems), although it only really becomes explicit when one enriches such a system with subtypes.
So here is a logical fork, depending on whether one likes this class of type systems. If one does, then it probably matters that this class of type systems comes from people who consider substitutability quite central (which still does not mean one has to agree with them, but their arguments should be considered).
Alternatively, one could decide that one does not like this class of type systems and that the inference rules should be different (as I was thinking in the '90s; now I don't have a firm opinion). Then it is quite likely that substitutability would not be involved.
no subject
Date: 2011-01-25 06:58 am (UTC)
After refreshing my memory: the canonical way to think about it in the mainstream school of thought is as follows. One considers a universal set D of all untyped lambda-terms (or of their equivalence classes). Types are just certain subsets of D. When a lambda-term belongs to the subset X, we say that the term is of type X. Subtypes are subsets. A term f is said to be of type A -> B if, whenever a term x is of type A, the term f(x) is of type B. Here we have contravariance and substitutability, and they are the same thing (easy to see), but we don't have extensionality (because two terms can be equivalent on inputs from A and yet behave quite differently on inputs outside of A).
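For a concrete OCaml illustration of that contravariance (my own example, with polymorphic variants playing the role of the subsets): a function defined on the larger input set automatically inhabits the function type over the smaller one.

```ocaml
(* [ `A ] is a subset/subtype of [ `A | `B ]. *)
let f : [ `A | `B ] -> int = function
  | `A -> 0
  | `B -> 1

(* Contravariance: because f is defined on the larger input set,
   it also has the function type over the smaller one. *)
let g : [ `A ] -> int = (f :> [ `A ] -> int)

let () = print_int (g `A)   (* prints 0 *)
```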
Also, the resulting type systems are quite horrible (IMO) in two ways. On the one hand, they are strongly normalizing, and lambda-terms without a normal form are never typable, so recursion and Turing-completeness have to be added by ad hoc methods. On the other hand, the inference is usually too strong: for example, one can write a pathological sequence of well-typed ML programs such that the size of their types is exponential in the size of the program even when the types are shared as directed acyclic graphs (and doubly exponential when printed linearly), and hence the compilation time is also exponential. This "compiler bug" is built in as a design feature right at the heart of ML and of languages with similar type systems (it does not really occur in practice, one has to work to create the pathology, but it is an ugly property of the type system and of the language).
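A milder variant of the pathology fits in a few lines of OCaml (the classic doubling construction; the doubly exponential version needs a slightly more elaborate program in the same style). Each definition applies the previous one twice, so the printed type squares in size at every line; paste this into the toplevel and watch the types grow:

```ocaml
let f0 = fun x -> (x, x)        (* 'a -> 'a * 'a                 *)
let f1 = fun y -> f0 (f0 y)     (* 'a -> ('a * 'a) * ('a * 'a)   *)
let f2 = fun y -> f1 (f1 y)     (* ... 16 occurrences of 'a ...  *)
let f3 = fun y -> f2 (f2 y)     (* ... 256 occurrences of 'a ... *)
(* After n such lines the printed type has 4^n leaves, i.e. it is
   exponential in the length of the program, and the type checker
   has to construct (or share) all of it. *)
```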