CS1 error on Firehole Composites

  Hello, I'm Qwerfjkl (bot). I have automatically detected that this edit performed by you, on the page Firehole Composites, may have introduced referencing errors. They are as follows:

  • A "missing periodical" error. References show this error when the name of the magazine or journal is not given. Please edit the article to add the name of the magazine/journal to the reference, or use a different citation template. (Fix | Ask for help)

Please check this page and fix the errors highlighted. If you think this is a false positive, you can report it to my operator. Thanks, Qwerfjkl (bot) (talk) 03:53, 18 May 2023 (UTC)

ArbCom 2023 Elections voter message

Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:46, 28 November 2023 (UTC)

derivatives of Lagrange polynomials

To be honest, while I somewhat cleaned up the formulas about higher derivatives of Lagrange polynomials that someone else added to Lagrange polynomial, I haven't thought too extensively about the subject. I reverted your change though, as from what I can tell all of the problems near nodes boil down to the inclusion of terms like   which shouldn't really be too big a problem to just simplify to 1 where applicable in a practical implementation. I don't think your proposed alternative of converting to monomial basis is a good general idea: it's likely to be numerically unstable and give poor practical results. If you want to improve this section, maybe you can find some good sources discussing the topic? (The current section is entirely unsourced and I'm not 100% convinced it is exactly correct.) I imagine some paper or another has gone into more detail about both abstract formulas and concrete numerical implementation. –jacobolus (t) 03:05, 17 January 2024 (UTC)

Those derivative formulas get very expensive for higher-order derivatives, and they will also always be inaccurate close to a node.
Power basis form is fine for many practical applications of Lagrange interpolation (such as the finite element method) where there are a fairly small number of nodes and the domain is the unit interval or the interval  .
So your reversion is definitely not helpful for readers of this article. Jrheller1 (talk) 17:46, 18 January 2024 (UTC)
First of all, your addition is unsourced. Wikipedia should only make claims backed by sources. (See Wikipedia:Original Research and Wikipedia:Verifiability.) I'm fairly convinced sources could be found explicitly supporting the previous version, but if not it should be modified/removed. Secondly, the recommendation to convert to monomial basis still seems like a poor choice to me; Wikipedia should especially not make this kind of claim without a source.
However, I agree these derivative formulas get practically ridiculous.
In my opinion we should rewrite this section (I didn't initially add it), following Berrut & Trefethen (2004) who describe how to make a matrix for 1st or 2nd derivatives, with matrix multiplication taking values at the nodes to derivatives at the nodes. The derivatives at the nodes can then be thrown into the barycentric formula to compute derivatives at some arbitrary other point(s). My understanding is that this method gives pretty good results; maybe there's a paper analyzing the numerical stability etc. (Aside: It logically seems to me like the 2nd derivative matrix should just be the square of the 1st derivative matrix, and the matrix for an arbitrary other derivative should be some higher power. But I haven't played around with it.) –jacobolus (t) 18:08, 18 January 2024 (UTC)
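For concreteness, the differentiation-matrix idea above can be sketched in a few lines. This is my own minimal illustration of the standard construction from barycentric weights (D[i,j] = (w_j/w_i)/(x_i − x_j) off the diagonal, with the diagonal set by the negative-sum trick); the node set and test polynomial are arbitrary choices, not anything prescribed by Berrut & Trefethen:

```python
import numpy as np

def barycentric_weights(x):
    # w_i = 1 / prod_{j != i} (x_i - x_j)
    n = len(x)
    w = np.ones(n)
    for i in range(n):
        w[i] = 1.0 / np.prod(np.delete(x[i] - x, i))
    return w

def diff_matrix(x):
    # First-derivative matrix D: (D @ f(x)) gives the derivative of the
    # interpolant at the nodes. Diagonal = negative sum of the row's
    # off-diagonal entries, so that D annihilates constants.
    w = barycentric_weights(x)
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])
    return D

x = np.linspace(0.0, 1.0, 6)   # 6 nodes: exact for polynomials up to degree 5
D = diff_matrix(x)
f = x**3 - 2 * x               # sample polynomial values at the nodes
df_exact = 3 * x**2 - 2
print(np.max(np.abs(D @ f - df_exact)))  # essentially zero (exact up to rounding)
```

Regarding the aside: since D maps exact polynomial values at the nodes to exact derivative values, D @ D does give exact second derivatives of the interpolant at the nodes, though a direct formula may behave better numerically.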
"Elementary numerical analysis" by Conte and de Boor does not even discuss derivatives of Lagrange polynomials. So they are clearly assuming that people will convert to power basis form to compute the derivatives.
Have you implemented the derivative formulas given in this article in computer code, along with conversion of Lagrange polynomials to power basis form and computation of derivatives using the power basis form? I have done this, and it is clear that the power basis method is very accurate when the nodes are uniformly spaced points on the unit interval. This also shows that the formulas given in the article are correct at points not very close to a node. Jrheller1 (talk) 18:42, 18 January 2024 (UTC)
You should generally avoid polynomial interpolation of (more than a trivial number of) uniformly spaced nodes, because the result is very badly behaved; see Runge's phenomenon. (And converting to monomial basis is among the worst possible things you can do after that. See Trefethen (2011) "Six Myths of Polynomial Interpolation and Quadrature" citing Wilkinson (1984) "The perfidious polynomial") Indeed you should avoid equispaced data in general if you can help it: it's a provably bad point distribution; see Platte, Trefethen & Kuijlaars (2011) "Impossibility of fast stable approximation of analytic functions from equispaced samples". If you need to approximate a function and all you can get is equispaced data, there are a variety of methods you can try. One recent one is Huybrechs & Trefethen (2023) "AAA interpolation of equispaced data", which greedily builds a collection of nodes to interpolate exactly, one at a time, at each step fitting an optimized rational function minimizing error at the remaining nodes. –jacobolus (t) 19:12, 18 January 2024 (UTC)
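The bad behavior at equispaced nodes is easy to demonstrate numerically. A minimal sketch, assuming the standard Runge function 1/(1 + 25x²) on [-1, 1] and a degree-20 interpolant (the barycentric evaluator here is just for stable evaluation; the huge error is a property of the interpolant itself, not the evaluation):

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 21                                   # 21 equispaced nodes -> degree-20 interpolant
equi = np.linspace(-1.0, 1.0, n)
p = BarycentricInterpolator(equi, runge(equi))

xt = np.linspace(-1.0, 1.0, 4000)
err = np.max(np.abs(p(xt) - runge(xt)))
print(err)   # large: the interpolant oscillates wildly near the endpoints
```

Swapping the equispaced nodes for Chebyshev points in the same script makes the error drop to a small fraction and shrink as n grows.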
To answer your other points: (1) I have not implemented the derivative formulas given in this article. As I said, I didn't add that section (though I did clean up the LaTeX a bit). And (2) "they are clearly assuming that people will convert to power basis form to compute the derivatives" – speculation about what authors might have been assuming is, for better or worse, not a good enough source for claims in Wikipedia. –jacobolus (t) 19:15, 18 January 2024 (UTC)
Polynomial interpolation at equally spaced points in the polynomial domain is an excellent method of approximating any smooth function very accurately, even for fairly high polynomial degrees and functions with large higher-order derivatives like the Runge function.
You just need to split up the domain of the function you are approximating. For example, splitting up the domain   of the Runge function into 20 equal length intervals and using a degree 5 polynomial interpolant over each interval results in a maximum approximation error of only  .
This is why the finite element method works so well. The finite element method is based on polynomial interpolation of a smooth function (the PDE solution) at equally spaced points in the polynomial domain. But the function domain is split up into many smaller regions (the cells of the finite element mesh). A good finite element mesh will also have smaller cells in regions where the PDE solution has larger higher-order derivatives in order to minimize the error of the approximate solution. Jrheller1 (talk) 22:40, 18 January 2024 (UTC)
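The piecewise approach described above is straightforward to check. A small sketch, assuming the standard Runge function on [-1, 1] (the original comment's interval and error figure are elided, so this only illustrates the method, not the quoted numbers): split the domain into 20 equal subintervals and build a degree-5 interpolant through 6 equispaced nodes on each.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

# Piecewise interpolation: 20 equal subintervals, degree-5 interpolant on each.
edges = np.linspace(-1.0, 1.0, 21)
max_err = 0.0
for a, b in zip(edges[:-1], edges[1:]):
    nodes = np.linspace(a, b, 6)                 # 6 equispaced nodes per subinterval
    p = BarycentricInterpolator(nodes, runge(nodes))
    xt = np.linspace(a, b, 200)
    max_err = max(max_err, np.max(np.abs(p(xt) - runge(xt))))
print(max_err)   # small: low degree per piece avoids the Runge phenomenon
```

At degree 5 per piece the equispaced nodes cause no trouble, consistent with both sides of the discussion.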
"20 equal length intervals and using a degree 5 polynomial interpolant over each interval" – this is not a very good approximation compared to a single degree 100 interpolant through Chebyshev nodes, which gets you about 10 digits of precision. If you just need 6 digits, you can do with a degree ~60 polynomial. (You need a 188-degree polynomial to get full double precision, ~16 digits.) Note that for this even function all of the odd-degree terms have coefficient zero, so that's 50, 30, or 94 non-zero coefficients, respectively – fewer terms than your 20-little-chunks approximation, yet reaching full double precision. Anyway, sure, if you limit yourself to 5th degree polynomials you can pretty well use whatever basis you want and more or less whatever set of nodes you want. This would fall into the "trivially small" category I mentioned before. But you should not pass off advice that only works out in the degree ~5 case as generically relevant, or people who follow it will end up shooting themselves in the foot when they try to apply it to a higher-degree situation. –jacobolus (t) 23:53, 18 January 2024 (UTC)
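The single high-degree Chebyshev interpolant is just as easy to verify. A sketch assuming the standard Runge function on [-1, 1], using the Chebyshev extreme points cos(πk/n); the roughly 10-digit accuracy at degree 100 follows from the Bernstein-ellipse convergence rate set by the poles at ±i/5:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 100                                     # polynomial degree
x = np.cos(np.pi * np.arange(n + 1) / n)    # Chebyshev extreme points on [-1, 1]
p = BarycentricInterpolator(x, runge(x))

xt = np.linspace(-1.0, 1.0, 5000)
err = np.max(np.abs(p(xt) - runge(xt)))
print(err)   # on the order of 1e-9: roughly 10 digits, as claimed above
```

Note the contrast with the equispaced case at the same degree, where the error would be astronomically large instead.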
That is amazing accuracy. I didn't know it was possible to get that kind of accuracy approximating the Runge function using a single degree 100 polynomial.
For my purposes (mainly curve and surface interpolation and the finite element method), degree six is a "large" degree for polynomial interpolation.
I will add some text saying that converting to power basis form to compute derivatives is fine for relatively small polynomial degrees, but for larger degrees some more sophisticated method is necessary. Then maybe you could clarify what that method is, since I don't know anything about that. Does that sound OK? Jrheller1 (talk) 05:14, 19 January 2024 (UTC)
Unless you can cite a clear source, I don't think any of that discussion can really stay in Wikipedia (in theory, the same standard applies to any other topic). I'll leave you to work on the article for a bit if you want, but when I get a chance I'll come back and rewrite this section based on what I can find directly stated in published sources, which will involve wiping out any unsourced material, including the formulas that are currently there. –jacobolus (t) 06:05, 19 January 2024 (UTC)
"I didn't know it was possible to get that kind of accuracy approximating the Runge function using a single degree 100 polynomial." By the way, you can do much (much) better for such functions with a rational approximation instead of a polynomial. Obviously Runge's function in particular is directly a low-degree rational function (so that's not so impressive to approximate), but the same also applies to arbitrary other functions of a pretty broad class. Polynomials necessarily force all of their poles to be at infinity, which makes it very hard to approximate any function that has a pole near the interval. A rational function can put poles anywhere in the extended complex plane. Even for functions with other kinds of singularities, like branch cuts, rational functions can make much better approximations than polynomials. The downside is that rational functions are trickier and not quite as uniform to deal with as polynomials. –jacobolus (t) 06:13, 19 January 2024 (UTC)