From Wikipedia, the free encyclopedia
Mathematics desk
< August 20 << Jul | August | Sep >> August 22 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 21

What's new in numerical methods?

I've studied this subject a little, primarily from the Numerical Recipes books which were written in the 1980s-90s or so (I guess there are newer editions). Has much changed in the field since then? Thanks. 2602:24A:DE47:BB20:50DE:F402:42A6:A17D ( talk) 09:02, 21 August 2020 (UTC) reply

This page for a course at Stanford on Numerical Linear Algebra (given by none other than the late Gene Golub) gives some useful links. Whereas flesh-and-blood computers could rarely make more than one significant error per minute, thanks to the marvels of microelectronics present-day computers can easily produce several megaflops per second. Since we critically rely on their results being correct, there is an increased emphasis on correctness guarantees, which has many aspects. I have not kept abreast of the field, but my (superficial) impression is that almost everything in play today has firm roots in techniques that were already well known to the cognoscenti in the 80s.  -- Lambiam 13:17, 21 August 2020 (UTC) reply
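One concrete aspect of such correctness guarantees is validated numerics via interval arithmetic, where every computed quantity carries a rigorous enclosure of the true value. A minimal sketch (my own illustration, not something from the course linked above), assuming Python with mpmath installed:

from mpmath import iv

iv.dps = 15                  # precision of the interval endpoints
tenth = iv.mpf('0.1')        # rigorous enclosure of 0.1, which is not exactly representable in binary
total = iv.mpf(0)
for _ in range(10):
    total += tenth           # interval addition: rounding error is absorbed into the enclosure
print(total)                 # an interval guaranteed to contain the exact value 1

Ordinary floating-point summation of 0.1 ten times returns something slightly different from 1; the interval result does not pretend to be exact, but it provably brackets the correct answer.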
Officially not much has changed, if you stick to the mathematical literature. But if your background is in theoretical or mathematical physics, you may have learned about methods that are much more powerful than what can be found in the mathematical literature, precisely because these methods cannot be rigorously proven to work. Mathematics as practiced by mathematicians proceeds via theorem and proof, and this yields very poor results when it comes to discovering powerful numerical methods. This is because such results are then very general, and typically the more general a tool is, the less powerful it is for tackling specific problems. Another reason is that a mathematical fact is one thing, while a proof of that fact is something else entirely, and such a proof often simply does not exist.
A good example of where non-rigorous methods have "proven" their value is perturbation theory. The theorem-and-proof approach to perturbation theory makes it rather useless for practical computations: you are only allowed to study the limit of small perturbations, and can only make qualitative statements about large perturbations. In physics we ignore the mathematical objections and have no problem with resumming a perturbative series to all orders, or even taking the limit of an infinitely strong perturbation. While there are rigorous mathematical theorems that deal with this, they are usually not valid for the practical cases in which the method is used in physics. But "not valid" doesn't mean the results are wrong; it only means the theorems don't cover such cases and it's not known how to extend them.
Non-rigorous methods based on perturbation theory have become much more popular in recent years due to the use of computer algebra systems. Instead of letting your computer crunch numbers with a standard numerical algorithm, you can first perform a high-order perturbative expansion and a resummation of it, which may involve hundreds of pages of equations processed by a computer algebra system, and then apply that to your model. This way one can obtain results that are impossible to obtain using the best rigorously proven algorithms available.
Carl Bender has given a series of lectures about this topic (see here); it gives you a good overview of the basic ideas behind these methods, but being an introductory course for students, it only treats a few of the easy-to-apply resummation methods in detail, mainly Padé resummation. Count Iblis ( talk) 15:30, 21 August 2020 (UTC) reply
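As a concrete toy illustration of this kind of resummation (a sketch of my own, not taken from Bender's lectures): consider Z(g) = π^(-1/2) ∫ exp(-x^2 - g x^4) dx. Its perturbative expansion in the coupling g has coefficients c_n = (-1)^n Γ(2n + 1/2) / (n! Γ(1/2)), which grow factorially, so the series has zero radius of convergence; yet a diagonal Padé approximant built from those same coefficients gives a good approximation at g = 0.5, where the raw partial sums are off by many orders of magnitude. In Python with mpmath:

from mpmath import mp, mpf, gamma, factorial, quad, exp, sqrt, pi, inf, pade, polyval

mp.dps = 30                      # working precision in decimal digits
g = mpf('0.5')                   # a coupling far outside the (zero) radius of convergence
N = 20                           # order of the perturbative expansion

# Asymptotic (divergent) Taylor coefficients of Z(g) in powers of g
c = [(-1)**n * gamma(2*n + mpf(1)/2) / (factorial(n) * gamma(mpf(1)/2))
     for n in range(N + 1)]

partial_sum = sum(cn * g**n for n, cn in enumerate(c))    # hopelessly divergent at g = 0.5

# Diagonal [10/10] Pade approximant built from the same coefficients
p, q = pade(c, N // 2, N // 2)
pade_value = polyval(p[::-1], g) / polyval(q[::-1], g)

exact = quad(lambda x: exp(-x**2 - g*x**4), [-inf, inf]) / sqrt(pi)

print('raw partial sum :', partial_sum)
print('Pade resummation:', pade_value)
print('exact integral  :', exact)

The Padé value should agree with the numerically integrated Z(0.5) to several digits, while the truncated series is useless; this is the simplest instance of the resummation idea, and Borel-Padé and related methods covered in Bender's lectures push it considerably further.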
I emphatically disagree with Count Iblis that not much has changed if you look at the mathematical literature, specifically because nVidia has developed mixed-precision computation as the next wave of speedups. Also, new numerical methods based on known physical laws are devised for numerical weather simulation all the time. New multiplication algorithms have also been discovered since the '80s and '90s.-- Jasper Deng (talk) 18:45, 22 August 2020 (UTC) reply
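To make the mixed-precision point concrete (a sketch of my own, not based on any particular vendor library): the classic pattern is to do the expensive factorization in cheap low precision and then recover full double-precision accuracy by iterative refinement, with residuals computed in double precision. Assuming Python with NumPy and SciPy:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
b = rng.standard_normal(n)

# Expensive step done once in cheap float32: the LU factorization
lu, piv = lu_factor(A.astype(np.float32))

# Initial single-precision solve, then refinement with float64 residuals
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                  # residual in double precision
    dx = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    x += dx

print('relative residual:', np.linalg.norm(A @ x - b) / np.linalg.norm(b))

On GPUs the low-precision factorization can be pushed down to half precision on tensor cores, which is where the speedups mentioned above come from; the refinement loop is what restores double-precision accuracy.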
Do you have a reference for recent mathematical advances in numerical weather prediction? The intrinsically chaotic nature of the atmospheric equations means that the uncertainty of the input data is a major limiting factor. High-precision arithmetic is essential to numerical computation, yet is generally not directly considered a sub-field of numerical analysis.  -- Lambiam 09:26, 26 August 2020 (UTC) reply
