Talk:Polymorphism (computer science)/Archive 1

Break out strongly-typed polymorphism?

There is a great deal of material that applies only to strongly typed languages. What are people's thoughts on breaking this out into a separate article ("strongly typed polymorphism" or something like that)?

Subtyping polymorphism, parametric polymorphism

Much of the stuff under "Subtyping polymorphism" (including most of the stuff I just added) does not belong under "Parametric polymorphism", but I don't have time to fix that right now. — Daniel Brockman 16:19, Jun 1, 2004 (UTC)

Ad-hoc polymorphism

I've never come across ad-hoc polymorphism as described in the article (although of course I understand overloading and coercion), only the type described as parametric polymorphism. Seems to me that this could usefully be split into two sections: one on polymorphisms of data types and one on polymorphism of functions (in the broadest sense - to include procedures, members etc.).

A data type is polymorphic in that it has a member whose type may be parametric or ad-hoc (as per the article).

A type which is used to store a stack node would normally have two data members: the data held at a position in the stack and the next node below it. Manipulation of the stack itself doesn't need any knowledge of the data value type. How this type is applied to the stack node depends on the implementation language but falls neatly in the ad-hoc/parametric view explained in the article.
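As an illustrative aside (a sketch in Haskell, not part of the original comment): the stack node described above is naturally a parametrically polymorphic data type, and the manipulation code never inspects the element type.

```haskell
-- A stack node holds the data at this position and the node below it.
-- The element type 'a' is a parameter; push and pop work for any 'a'.
data Stack a = Empty | Node a (Stack a)

push :: a -> Stack a -> Stack a
push x s = Node x s

pop :: Stack a -> Maybe (a, Stack a)
pop Empty      = Nothing
pop (Node x s) = Just (x, s)
```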

I think though that for functions (procedures, members etc.) there are three ways of looking at the issue which makes it easier to see what is going on:

  • Inclusional polymorphism (called subtype polymorphism in the article) is where a function that expects (or returns) one type may in fact take or return any legal subtype.
  • Parametric polymorphism is where the type is applied to the function in the source code (either explicitly or implicitly).
  • Operational polymorphism is where the function assumes that some other function/operator will work on the type and the code is written in terms of that. This is the contract that a template implementation of, for example, min() works in C++ and is also the way that most object-based languages work.
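A sketch of the "operational" form in Haskell (illustrative only): the operations the function assumes become an explicit class constraint, so any type supplying them can be used.

```haskell
-- smaller assumes only that an ordering (<=) exists for 'a'.
-- The constraint 'Ord a' records exactly that assumption.
smaller :: Ord a => a -> a -> a
smaller x y = if x <= y then x else y
```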

C++ is rare in that it allows all three forms. I wouldn't consider overloading a type of polymorphism in the same way that the others are. It is really a way of looking at operational polymorphism - the overload itself is not polymorphic, but the function that makes use of the overload is assuming operational polymorphism on the types that it is using.

--KayEss 02:13, 16 Sep 2004 (UTC)

This isn't entirely consistent with the use of the terms I'm familiar with. I'm familiar with the term "parametric polymorphism" being applied to functions, and meaning roughly what you call "operational polymorphism". An ML "append-to-list" function is often described as parametric polymorphic, for example, since it can append an element to any type of list by being written generically using only operators that work on any type of list. --Delirium 14:53, Sep 16, 2004 (UTC)
The operational polymorphism comes into its own in weakly typed languages (I was particularly thinking of JavaScript/ECMAScript/JScript here). You write a function that takes an argument for which you make certain assumptions about the members available on that object. My guess is that a C++ programmer would probably think about min() in an operationally polymorphic way, although strictly, you're right, it is parametric polymorphism. It was a (badly chosen) example... I guess the use of a min macro would be nearer the mark --KayEss 12:49, 18 Sep 2004 (UTC)

History of the term

I think this article could be improved by adding a little history of the term. It should be noted that it was Christopher Strachey who coined it in 1967, and it is his definition which is shown here. Later, in 1985, Cardelli and Wegner [1] refined that definition, adding the concept of universal polymorphism ("true polymorphism").

It should also be noted that some authors (Booch and Rumbaugh) apply the term polymorphism only to inclusion polymorphism (polymorphism of inheritance), which leads to misunderstandings.

All of this is explained in the comp.object Usenet group's FAQ [2].

I would do it myself, but English is not my first language and I'm a little afraid of producing something unintelligible :-(

Removed sentence

I removed the following recently-added sentence from the end of the very first paragraph:

Polymorphism can be very useful in having many other applications use a common function or library, which is good because a function that does one thing does not have to be rewritten many different times to accommodate many datatypes.

It needs to be reformulated, but I'm not sure how. — Daniel Brockman 20:47, Dec 22, 2004 (UTC)

Whoever added it was right, though: something similar needs to be put there. — Daniel Brockman 20:50, Dec 22, 2004 (UTC)

Maybe: Polymorphism makes coding simpler by allowing the same function call or library to serve conceptually similar operations on different object types.


I don't like polymorphism definitions based heavily on "types". Many OOP languages don't even have formal types.

Well, then they don't really have polymorphism. :) Seriously, though, I take the opposite view to yours, and bristle when people use the word "polymorphism" for anything other than parametric polymorphism, which as far as I can tell makes no sense without types. I think this article is right to discuss all the uses of the word that one finds in different communities of programmers, but I think it could do a better job in terms of overall coherence and explaining how the different meanings of "polymorphism" relate to each other. -- Cjoev 19:41, 29 Mar 2005 (UTC)

Type classes

I have removed the existing discussion of Haskell's type classes (from the subtype polymorphism section), and put a new bit about them in the ad-hoc polymorphism section. I'm not a Haskell expert; those who are may find things wrong with what I've done. There were several things wrong with the old paragraph on type classes that even I could see, including the fact that it was in the subtyping section. Here it is just in case anyone wants it:

"Haskell’s type system implements subtyping polymorphism using type classes. Analoguously to interfaces in Java, a type class declares a set of polymorphic operations that can be performed on any object whose type belongs to the class. This is then used to add constraints to types, limiting the set of objects that belong to them, while reciprocally expanding the set of things that can be done to the objects. For example, a list of ordered elements is typed (Ord a) => [a]. (The class Ord declares ordering relations such as =>, <, etc., and provides default implementations for most of them.) "

-- Cjoev 20:51, 29 Mar 2005 (UTC)

actually no, type classes are a form of ad hoc polymorphism.

Actually, no, they're neither. Typeclasses are a form of restricted parametric polymorphism. 99.247.166.96 (talk) 07:08, 16 March 2008 (UTC)

Polymorphism and types

Here's my argument for stretching polymorphism beyond data types: the notion, I think, is wider. It's been alive for a very long time, even if it was conceived differently. (Abelson and Sussman call it "message-passing", for instance - and that was 1985!) In the end, polymorphism is polymorphism. --VKokielov 04:50, 12 Apr 2005 (UTC)

Proposed Merge from "Theoretical foundations..."

I think TakuyaMurata's proposed merge of the content from Theoretical foundation of polymorphism needs some discussion. (I hope I manage to start a debate rather than a flamewar here, but I may be on shaky ground...) In particular, before we move material from that article into this one we must answer the question: does that article mean the same thing by the word "polymorphism" as this one does? I say the relationship is at best unclear and needs to be clarified if the merger is to make this article better rather than less good. This article lists several different kinds of polymorphism, and I don't think that article is really talking about any of them: It's not about parametric polymorphism, because all the functions in the example have simple (i.e., monomorphic) types; it's not about subtype polymorphism, because there is no issue of one type containing or subsuming another; it's not about ad-hoc polymorphism, and it's not about coercion. Is there some other meaning of polymorphism we haven't covered yet?

Perhaps the abstract exposition in the other article is good even though the extended example forming the bulk of it misses the mark? Maybe, but I find it difficult to understand. I think it's mostly focused on the idea of abstraction rather than polymorphism. I realize the two are related, but their relationship is very deep and subtle and should be explained as carefully and clearly as possible -- if we're not ready to do that we should leave it alone.

Cjoev 18:24, 29 May 2005 (UTC)

Space between all and nothing

I have some problems with the statement at the top of the page, saying that there are two kinds of polymorphism. I find it neither practical, nor really fair: there is a lot of space between all and nothing.

Typically one wants to constrain polymorphism, so that the set of actual types is potentially infinite, yet partly specified. Types may be used to express these partial specifications: this is the role of hierarchies. Hence a third kind of polymorphism: one that allows constraints relevant to a specific context to be expressed. Marc Girod 06:29, 1 Jun 2005 (UTC)

This is a good point. Type classes, which are discussed briefly, are one form of this kind of constraint. Subtype-bounded polymorphism, which comes up all the time in type-theoretic studies of languages but I am ashamed to admit I don't really know the syntax for in any real language without looking it up, should also be mentioned but isn't. I think it should go somewhere under parametric polymorphism. Are there other forms of constraint as well? Cjoev 14:44, 1 Jun 2005 (UTC)

Classes of polymorphism

I think Haskell type classes are an important case of polymorphism when we consider classifications of polymorphism.

First of all, I think they introduce ad-hoc polymorphism (and overloading) to the language, because for each instance of the type class, the programmer introduces a new implementation of the methods in the class.

Second, they extend parametric polymorphism as they can be used as constraints on the type parameters. Thus it's possible to write a function that takes into account the argument types, without introducing new implementations. For example, with a type class Num for numbers, we can define general addition of vectors add :: Num a => [a] -> [a] -> [a]; add a b = zipWith (+) a b. I don't see anything ad-hoc in the implementation of function add but extended parametric polymorphism.
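The add function mentioned above, as a self-contained sketch:

```haskell
-- Parametric over the list structure, but constrained: the elements
-- must belong to the Num type class so that (+) is available.
add :: Num a => [a] -> [a] -> [a]
add a b = zipWith (+) a b
```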

Further, type classes can have subclass-superclass relations. This means a function restricted to instances of Num still works for instances of all subclasses of Num as well. This is included in subtype polymorphism. (Edit: No it's not! Subtype polymorphism is about types and type classes are not types! --TuukkaH 00:43, 12 December 2005 (UTC))

I propose something in the lines of the following:

  1. move the type class section into a separate article about type classes (Done.)
  2. in parametric polymorphism, explain that with a constraint we can take advantage of the subtype or ad-hoc polymorphism defined for the parameters (class or type class) (Done.)
  3. in overloading, clarify that overloading in Haskell is provided by type classes (Done.)
  4. present subtype polymorphism after parametric and ad-hoc, as a combination of the two: writing generic functions that take advantage of overloaded (or overridden) functions of a class or type class.

What do you think? --TuukkaH 16:55:58, 2005-09-07 (UTC)

A few other illustrative cases occurred to me. For a parametrically polymorphic function map :: (a -> b) -> [a] -> [b], we can pass another function id :: (a -> a) to get map id :: [a] -> [a]. To me it seems like a -> a must then be a subtype of a -> b. Does parametric polymorphism without subtyping polymorphism even exist?
To bring type classes in here, filter :: (a -> Bool) -> [a] -> [a] and (==4) :: Num a => a -> Bool give us filter (==4) :: Num a => [a] -> [a]. Thus, Num a => a -> Bool would be a subtype of a -> Bool. --TuukkaH 10:15:44, 2005-09-09 (UTC)
To answer myself, the last two examples went too far and parametric and subtyping polymorphism are independent. A parametric type such as a -> a isn't strictly a type and thus can't be a subtype. a is a type variable and substituting it for a type gives a type. Similarly, a -> b is a type scheme that can be instantiated with some types a and b to get an actual type such as Integer -> String or Integer -> Integer. Now these can be subtypes, but it hasn't got anything to do with parametric polymorphism or, for that matter, type classes. --TuukkaH 00:43, 12 December 2005 (UTC)
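The instantiation point above can be made concrete (an illustrative sketch; the names mapId and keepFours are made up):

```haskell
-- map's type scheme (a -> b) -> [a] -> [b] is instantiated with b = a
-- when we pass id, giving map id :: [a] -> [a].
mapId :: [a] -> [a]
mapId = map id

-- filter (== 4): the Eq and Num constraints from the predicate
-- propagate into the type of the resulting function.
keepFours :: (Eq a, Num a) => [a] -> [a]
keepFours = filter (== 4)
```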

Templates

I'm one of "some" people that think that templates (I'm coming from C++) are a kind of polymorphism and so they deserve a section of their own. Also generic programming and Haskell type classes sound pretty similar, so it would be nice to draw a parallel. If I knew enough about type classes I would do it myself. Cheers. PizzaMargherita 22:11, 25 October 2005 (UTC)

There is an article on templates and template-like things at generic programming. I agree there needs to be some better cross-referencing between these two articles. --Delirium

Audience

I think the problem with this and related pages is that we're all over the map of abstraction -- trying to talk to working programmers, would-be programmers, computer-science theoreticians, maybe even mathematicians and logicians. That's presumably how this page acquired the phrase "more abstract implementations" (that I deleted) -- a phrase that would make any beginning Java or C++ student jump out the window.

We need to focus on one default audience (I suggest the mid-level programmers) and make it clear when we go "up" to more theoretical discussion or "down" to more practical discussion. Otherwise these pages are going to remain unintelligible and largely useless.

And anyone who wants to put something back in about datatype polymorphism needs to cite.

TH 09:16, 10 December 2005 (UTC)

TH, I agree that it's challenging to write for such a wide audience as Wikipedia receives, but I think it's even more of a problem that the people who want to contribute to computing articles have so different backgrounds (programming languages, fields of computer science) and still use same terms. In my view it's good to take examples from real, implemented programming languages, but term definitions and categorizations from theoretical articles.
I reverted your recent edits because it was obvious to me that you have a more specific point of view than this article. I tried to follow you in making the introduction easier to read, what do you think of it now? Besides, you and other readers might rather be looking for polymorphism in object-oriented programming so I put in that too.
Regarding citations, is List<int> enough and did you read the citations we already have in the article? --TuukkaH 22:22, 11 December 2005 (UTC)

Yes,

Thank you for helping correct my excesses.

I still think it needs a lot of work. I will merely nibble for a while.

I took out this paragraph:

" Polymorphism has another view, a dual, called inheritance, which describes computer modules as sharing implementation (specifically, subclasses inheriting the methods of a superclass). "

because:

The use of the word "view" here is not correct English -- someone may say "the castle has another view -- from the back window of the tower, you can see over the ridge into the next valley" -- but it's a solecism to say that "the castle has another view -- from the curve in the street in the village, you can glimpse the castle's turret".

The use of the word "dual" here also is not correct English -- "dual" as a noun has only an obscure definition relating to obsolete grammar, meaning exactly two individuals.

As neither of these words, as used in this article, is to be found in a standard unabridged dictionary, I have attempted to help the original author by recasting what it appears to me that he or she was trying to say.

Yes, I have read the entire article. List<int> is a good citation. But could you tell me -- would you consider List<Object> and List<Animal> to be equally good examples?

TH 07:26, 16 December 2005 (UTC)

That's the point of parametric polymorphism, you can use any type as the value of the parameter. If you're thinking of the plain old Java List that only takes subclasses of Object, I'd still say it is a polymorphic data type, even if only in the sense of subtype polymorphism. In Java, the only non-polymorphic data types would then be of the kind class IntList { int[] store; }. I don't know much C++ but I think List<Animal> wouldn't allow subtypes even though List<Animal *> would (because Dogs and Animals are not of the same size but Animal * and Dog * are). Anyway, what we're really referring to here is what SML introduced with a syntax like datatype 'a list = Nil | Node of ('a * ('a list)).
Nice to see you interested in improving this article and being able to tell what's correct grammar -- English is a second language to me. But to be honest, I still wouldn't think the edits you've made this far should stay in the article. You say you've read the entire article, but I wanted to ask if you've read the four articles that are referred to at the bottom of the page. I admit I haven't studied all the mathematical structure of Cardelli's paper, so anyone's free to tell me if I contradict it, and that's an objective point of reference. If we can add newer articles that relate to Cardelli, even better, and we should refer to them too!
You edited the article to say that type classes in Haskell would be equivalent to interfaces or abstract classes in Java. Whichever meaning of 'equivalent' we take, I'd say that's stretching too far. In terms of this article, type classes are a form of ad-hoc polymorphism and somehow related to parametric polymorphism, whereas Java interfaces and abstract classes are a form of subtyping polymorphism tied into code-reuse and have nothing to do with parametric polymorphism. I could've done the same mistake, though, so I should restrain myself from too bold edits as well ;-)
You're probably right in that the "another view, a dual" shouldn't be there at all, especially if neither of us can understand it. But I think it was in the article because someone wanted to tell that inheritance is in a way the other side of the coin, polymorphism being about type and inheritance being about sharing implementation. Now my problem is that you dropped the implementation part and I can't understand what you wrote about top-down vs. bottom-up. There's also the problem of 'equivalent' which I wouldn't use here either. Could we find an explanation of this relation somewhere?
The disambiguation isn't that important, but to me 'unrelated' sounds like we shouldn't have the link at all, whereas I think we should have it, because polymorphic code is similarly named and is also a topic in computer science where something has many forms. Those would be the relations. --TuukkaH 17:43, 16 December 2005 (UTC)
A disclaimer: I am a theoretician. First of all, in the jargon of logicians, type theorists and category theorists (among others), it's perfectly usual to use "dual" as a noun. There are lots of pairs of things that are said to be "dual to" each other, and in those cases either one is said to be "the dual of" the other. I would never have said that inheritance and polymorphism were dual to each other, however. I think the point of that assertion was that polymorphism is about a single entity (object, class, function, etc) having many types, while inheritance is about the same code being used as the implementation of more than one different type. I suppose that's true but it's hardly essential to the nature of either one and I think it's confusing.
Second, I think type classes are extremely important, and am saddened (not to say outraged) that they've been removed from the article. I think my thoughts on this may belong on a different part of the page, however. For this thread, suffice it to say that they are not the same thing as interfaces or abstract classes. (My personal opinion is that the decision of the Haskell designers to use "class" terminology so much like OO languages causes a lot of confusion; my professional opinion is that the parallels between type classes and, say, Java classes are strained at best (but not nonexistent); it is an objective fact, however, that they are not the same.)
Cjoev 22:59, 20 January 2006 (UTC)
Aha, looking more carefully at the article and discussion I see they are still mentioned here but discussed in a separate article. So never mind the outrage.
Cjoev 23:11, 20 January 2006 (UTC)
I hope you're ok with the move of type classes into an article of their own. They're not a simple concept, particularly in their relation to other kinds of polymorphism. My main reason for moving them out was that I haven't found a reference that would include them as a major type in a classification of polymorphisms. I welcome you to edit the new article if the subject interests you! --TuukkaH 09:00, 21 January 2006 (UTC)
Sure, that makes a lot of sense. (I don't feel any particular need to edit the new Type Classes article at the moment, because I think I actually wrote a fair bit of what got moved there and just about exhausted my knowledge of them.) I'm not sure what you mean by "a major type of polymorphism". If you mean that you can't find a citeable published source that lists them along with parametric and ad-hoc polymorphism, then I guess I'm not surprised. On the other hand, if you talk to theoretically-minded programming language researchers or look at articles published in ACM conferences like POPL and ICFP, you get the strong impression that that community considers them an important phenomenon. They deserve mention and linkage in this article, both under parametric polymorphism (which they resemble syntactically) and under overloading or ad-hoc polymorphism (which is the effect of how they're used). Since they are mentioned in those places, I am fine with the current situation. Cjoev 22:08, 23 January 2006 (UTC)

I left relation of polymorphism and inheritance for the Subtyping section.

I took out this paragraph:

" Seen from a different viewpoint, polymorphism is equivalent to inheritance. Polymorphism is a top-down approach: classes extending into more numerous and detailed subclasses. Inheritance is a bottom-up approach: classes sharing the common features of one or (in the case of multiple inheritance) multiple superclasses. "

which I had earlier tried to improve. Better silence than incorrect statements or gobbledegook. It occurred to me that this paragraph really applies only to inclusion (subtyping) inheritance. At that point the paragraph seemed not worth saving. I'm thinking of adding a sentence soon in the Subtyping Polymorphism section, though there's already a sentence there that nearly equates subtyping polymorphism and inheritance, so what's the big deal?

TH 21:12, 16 December 2005 (UTC)

Why Haskell?

Why does Wikipedia give so many column-inches to Haskell? Nobody is using it! It would be lucky to have 1 out of 1000 open-source projects -- see http://www.cs.berkeley.edu/~flab/languages.html. And it's a declarative language to boot -- not even Turing-complete, so far as I can tell.

Perhaps some Haskellomaniac once scattered some discussion throughout these computer science pages and since then everyone's assumed they're obligated to keep up with the Haskellophiles?

TH 21:33, 16 December 2005 (UTC)

I’m using Darcs all the time, and that’s written in Haskell. Pugs is written in Haskell. So I don’t know what to say about the suggestion that Haskell not be Turing-complete. But to try to answer your question, Haskell is a very clean language, and programs written in Haskell lend themselves to very precise reasoning about semantics. Haskell is very popular in academia, not because it is allegedly useless, but on the contrary because it is very useful as a tool to learn and reason about programming. I guess next you’ll be arguing that Scheme is useless and not Turing-complete. — Daniel Brockman 05:02, 17 December 2005 (UTC)

Is Haskell Turing-complete?

Well, I still don't know the answer. Can Haskell (or those other languages), for example -- I'm trying to pick something very mundane here -- can it read the customer table (or file), read the billing table (or file), read the payments table (or file), collate the data from all three files, compute the net amount due for each customer, write out a statement for each customer, and print accounting summaries for the accountants and the auditors?

Or does it just accept predicates (assertions) and then answer questions based on formal logic and the predicates that it has been given?

TH 22:26, 18 December 2005 (UTC)

This is ridiculous. Haskell is absolutely Turing complete. It's a full featured general purpose programming language. It's not actually fully declarative, it's just a lazily evaluated functional language (and a great one at that!). The fact that not many people are yet using it is no testament to its ability to get things done. I've done paid work in Haskell (I wrote a pipeline scheduler and register allocator for PPC/Altivec that came in at ~1200 lines which was 50% documentation, and that included a parser. I assure you it would have been at *least* 15000 lines of C, and I wouldn't have been able to finish it in the 3 weeks I had.) It has its faults, but is far and away the least broken programming language I've come across. It has lots and lots of great abstractions for making code clear and concise. It has a really expressive type system which is quite good at catching bugs at compile time. (I find that upwards of 95% of what would be bugs in my code in another language are caught by Haskell's type system.) It also performs quite well when you know a little about what you're doing. It's currently coming in second on the computer language shootout, right after gcc -- not that that means much, but it's actually quite practical in terms of performance. I highly recommend learning it, even if it's just to improve the way you write code in other languages. You can expect to see more of it in the future. -- CaleGibbard

Are you asking whether Haskell is Turing-complete or whether it can be used for real-world applications?

You certainly can use Haskell to implement Turing-machines, so it probably is Turing-complete, though I have not seen a formal proof of this.

I have not used Haskell for production code myself, but Haskell in Practice lists some free applications written in Haskell. Libraries and Tools For Haskell lists several libraries that interface with relational databases. — Tobias Bergemann 09:25, 19 December 2005 (UTC)

Re-reading your question above, you apparently want to know how Haskell as a pure functional language can handle side-effects and input/output. See Monads in functional programming and the section on input/output in the Gentle Introduction to Haskell. — Tobias Bergemann 09:32, 19 December 2005 (UTC)
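For the record, the mundane collate-and-total work TH asks about is ordinary Haskell; a minimal sketch (the "name,amount" record format is made up for illustration):

```haskell
-- Extract the amount from a "name,amount" record and sum the lot:
-- the kind of plain billing arithmetic the question asks about.
amount :: String -> Int
amount line = read (drop 1 (dropWhile (/= ',') line))

netTotal :: [String] -> Int
netTotal = sum . map amount
```

In a real program the records would come from readFile and the statements would be written with putStr, both sequenced in the IO monad that the pages linked above describe.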

Revert

I have removed the following sentence:

In general, explicit coercion is not needed if there is a routine that receives an object of a given type and returns an equivalent object of another type. A routine call could be automatically inserted at compile- or run-time if implicit coercion is permitted.

Everything here seems either redundant or false. -- Cjoev 00:11, 28 March 2006 (UTC)

Page move

Since polymorphic code is also a computer-science-related polymorphism, and there's a convenient term for this other kind of polymorphism, I've moved this page from Polymorphism (computer science) to Type polymorphism. (I mention this mainly because my summary, ""Polymorphism (computer science)" is redundant.", was inadvertently confusing due to a typo: I meant ambiguous, not redundant.) If anyone disagrees, go ahead and move it back (although do be sure to refix all the double-redirects!). — Simetrical (talk • contribs) 19:00, 19 May 2006 (UTC)

Subtyping polymorphism and OOP

I think it should be noted that subtyping polymorphism in object-oriented languages (in particular C++) is also possible without using late binding, e.g.:

#include <iostream>
using std::cout;

class base {
public:
 void foo () { cout << "foo"; }
};

class derived : public base {
public:
 void bar () { cout << "bar"; }
};

void func (base * b)
{
 b->foo();
}

int main ()
{
 base b;
 derived d;

 func (&b);
 func (&d);
}

We're not doing late binding there even though we are doing subtyping polymorphism with func().

Am I wrong ?

190.49.68.218 15:54, 12 February 2007 (UTC)

Unclear definitions of parametric and ad-hoc polymorphism

The definitions of parametric and ad-hoc polymorphism shown here are not quite clear. Benjamin Pierce's book Types and Programming Languages does a pretty good job of describing these (p. 340 - 341). Both kinds of polymorphism allow a single piece of code to work with multiple types. The difference is that parametric polymorphism works the same regardless of the type, while ad-hoc polymorphism behaves differently depending on the type that is used. Java and C# generics are forms of parametric polymorphism. The Java instanceof operator and the C# is operator work as a weak form of ad-hoc polymorphism, allowing one piece of code to behave differently depending on the type. C++ templates can act as either parametric or ad-hoc polymorphism. By contrast, ML allows only parametric polymorphism.

Krazdon 17:53, 22 April 2007 (UTC)
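Pierce's distinction can be sketched in Haskell (illustrative only; the class Describe is made up): a parametric function behaves identically at every type, while an ad-hoc, class-based one picks a different implementation per type.

```haskell
-- Parametric: len never looks at the elements, so it behaves
-- identically for every element type.
len :: [a] -> Int
len = foldr (\_ n -> n + 1) 0

-- Ad-hoc via a type class: each instance supplies its own code.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = "bool: " ++ show b

instance Describe Int where
  describe n = "int: " ++ show n
```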
The thing is that C++ templates look like parametric polymorphism, but they are really something like syntactic shortcuts for creating several different classes, as they must be compiled on demand for each actual type parameter used with the template. Generic classes or functions are not actually part of the C++ type system. Contrast this with Java, which actually compiles a generic class as a generic class, not once for each parameter that uses it. The distinction is subtle but important, because it elucidates which one really has a type system with parametric polymorphism. As a demonstration, try using a C++ template on a new type without recompiling the template. It will not work. Read the last section of http://www.cplusplus.com/doc/tutorial/templates.html to see how template implementations must be kept with the definitions to support this on-demand compilation. For more on polymorphism and the distinction between parametric and ad-hoc, see Cardelli's 1985 paper listed in the references of this article. Gendut
I suppose it depends to what extent we're talking about the interface visible to a programmer, versus the compilation strategy used on the backend. That the C++ compiler actually compiles what appears to be parametric polymorphism down to separate bits of code for each type isn't really that interesting from a user's point of view, except that as you note some clunky design makes them have to be aware of where their classes are defined. But in principle the two aren't really that connected—an ML implementation could, if it wished, compile down to specialized functions instead of polymorphic functions and the user would be none the wiser except for an increase in code size. There might even sometimes be efficiency reasons to do so selectively. --Delirium (talk) 05:53, 3 December 2007 (UTC)

Dynamic typing vs. dynamic polymorphism

Changed the sentence "[...] sometimes referred to as dynamic polymorphism or dynamic typing [...]" since dynamic typing has absolutely nothing to do with dynamic polymorphism. For instance, languages like C++, Java or C# do support dynamic polymorphism, but have a static typing system. Theoretically, there could also be dynamically typed languages with parametric polymorphism, even though that wouldn't make much sense (AFAIK).

--80.108.109.173 14:12, 5 May 2007 (UTC)

Haskell's type classes and ad-hoc polymorphism

I read the "Ad-hoc polymorphism" section as saying that (a) Haskell's type classes are a form of ad-hoc polymorphism, and (b) ad-hoc polymorphism does not involve late binding (presumably, by that it means dynamic dispatch). This implies that Haskell's type classes are not a form of dynamic dispatch - which is not true. When used in the form of type constraints, they work by passing around dictionaries and doing dynamic dispatch using those dictionaries. So either this section is misleading or it's actually incorrect - I'm not sure which. Either way, an edit is needed. — greenrd 00:50, 18 May 2007 (UTC)

Opening paragraph, confusion

With all due respect, the revisions by 141.162.101.50 to the opening paragraphs only make this article more confusing. I recommend reverting to the previous opening. Clconway 15:47, 18 July 2007 (UTC)

Clconway, I agree, but why did you revert my revisions as well? Megacz 20:03, 18 July 2007 (UTC)
Megacz, I assume you are referring to the following passage:
Some languages which are statically typed and lack both ad-hoc polymorphism and type classes have longer function names such as print_int, print_string, etc. (see C, Objective Caml, but not Haskell)
There are a couple of things wrong with this sentence. First, C is not statically typed in the same sense as OCaml. Second, type classes are a form of ad-hoc polymorphism, not a distinct concept (recall Wadler's paper on type classes is titled "How to make ad-hoc polymorphism less ad hoc"). Third, "but not Haskell" is confusing and reeks of language evangelism/POV. The point of "(see C, Objective Caml)" is to give a few examples of languages that suffer from the problem, not to list languages that don't suffer from it. Clconway 22:38, 18 July 2007 (UTC)
Thanks, Clconway! To address your concerns, I have:
  1. added Strachey's definition of ad-hoc polymorphism (which Wadler cites)
  2. changed "statically" to "not dynamically" (i.e., no run-time type information to use for dispatch)
  3. modified the wording to mention type classes as a form of ad-hoc polymorphism (rather than alternative)
  4. removed all references to specific languages in the evangelistic sentence.
Megacz 18:00, 19 July 2007 (UTC)
Suits me. I've just done a quick edit to integrate your new text with the paragraphs following. I removed a bit of material there that seemed unnecessarily POV-ish. I'm going to hold off on reverting the opening graphs until others chime in. Also, it looks like the Strachey cite is now duplicated in the References section. Clconway 19:54, 19 July 2007 (UTC)
I'm removing the current intro, below, which is redundant.
The Greek meaning of the words "poly" and "morph" together imply that "a single entity can take on multiple forms". In the field of computer science, there are two fundamentally different types of polymorphism; subtype polymorphism, and parametric polymorphism. A concrete example can be found in the C++ language. In C++, subtype polymorphism is realized via the inheritance mechanism, and is also known as runtime polymorphism or dynamic polymorphism. In C++, parametric polymorphism is realized via the template mechanism and is also known as compile time polymorphism or static polymorphism. If one does not know what 'multiple form' an entity will take on until runtime, then inheritance is the only mechanism of choice. If, however, one or more specific examples of the possible "multiple forms" is known at compile time, template polymorphism can be used as well as inheritance polymorphism.
The C++ Standard Template Library (STL), for example, makes heavy use of parametric polymorphic behavior. One of the greatest accomplishments of the STL has been to provide for loose coupling between data structures and algorithms that use them, by parameterizing the data accessor constructs known as iterators[clarification needed] . One would not be entirely incorrect to think of an iterator as an extension of the pointer concept. The key difference being that the iterator's type (or types; remember, single entity taking on multiple forms, even simultaneously) is determined at compile time.
Clconway 15:29, 1 November 2007 (UTC)
Sounds good to me. Still somewhat confusing, but that wasn't an improvement. The fundamental problem seems to be that several sub-areas of CS have somewhat different definitions of what "polymorphism" means, and even those who roughly agree on a definition emphasize different parts (e.g. type-system theory vs. language-feature practicalities), making an overview that satisfies everyone and yet is still readable a bit tricky. --Delirium 21:50, 1 November 2007 (UTC)