Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
Just ran across Skewes' number while deleting a nonsense page, and I'm just amazed by it. What is the practical benefit of theorising such a large number? Reading about Ramsey theory, I can understand (slightly) that Graham's number, because it arises in Ramsey theory, helps us in predicting sequences of some sort of events, although I'm not sure what kind of events. I gather that the two numbers are somehow related (more than by just being very very very big numbers), but I can't see how Skewes' benefits anything "in the real world" [no slam on higher mathematics intended]. Understand, by the way, that I was a history major in college, so I'm (1) altogether unfamiliar with higher mathematics, and (2) accustomed to being asked about the utility of my field of study. Nyttend ( talk) 12:20, 10 July 2009 (UTC)
One practical area where theories about extremely large numbers can be useful is program verification. Say you have a computer program that defines three functions: 1) f(n) appears to compute something complicated and it's hard to tell quite what it's doing. 2) g(n) = f(n) + 1, and 3) h(n) = g(n) - f(n). You'd like to use equational reasoning to prove that h(n)=1 for all n, regardless of what f is. The problem is this reasoning can fail if f never returns. For example you could give the recursive "definition"
f(n) = f(n) + 1, and subtracting f(n) from both sides, you get 1 = 0, not a good basis for sound proofs of anything ;). Of course, if you try to treat that definition as an executable program and actually run it, it will simply recurse forever, never giving you an opportunity to show that it's wrong. So your "proof" that h(n)=1 is only valid if you can also prove that f actually terminates and returns a value for each n (i.e., it is a total function). If f doesn't always terminate, incorrectly assuming that it does can likewise imply 1 = 0, and the implication may be much less obvious than in the blatant recursive example above, so it could silently corrupt the results of some fancy automated theorem prover trying to reason about the program. For example, if f(n) is defined as the smallest counterexample to Goldbach's conjecture that is greater than n, and it works by searching upwards from n, then determining whether even f(0) halts is a famous unsolved math problem.
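As a concrete sketch of the setup above (Python is my choice here, not anything from the original discussion), the pathological "definition" and the three functions look like this:

```python
import sys

sys.setrecursionlimit(1000)  # keep the inevitable failure quick

def f(n):
    # The bogus "definition" f(n) = f(n) + 1: read as an equation it
    # implies 1 = 0; run as a program it simply never returns.
    return f(n) + 1

def g(n):
    return f(n) + 1

def h(n):
    # Equational reasoning says h(n) = 1 -- but only if f terminates.
    return g(n) - f(n)
```

Calling h(0) never produces 1: a real Python interpreter raises RecursionError once the stack limit is hit, but on an idealized machine it would just loop forever, and no equational "proof" about h is ever tested against an actual value.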
So here's where the big numbers come in. In general, deciding whether some arbitrary function terminates is the halting problem, and it is unsolvable (there is provably no algorithm that can do it, as has even been proved in verse). You are OK only if f turns out to be one of the functions whose termination you can prove. And the termination might take an extremely large number of steps: for example, f might compute the Ackermann function, or even a Goodstein sequence, while still being provably total. Numbers like Skewes' and Graham's are pretty big by most everyday standards, but at least you can write down formulas for computing them. The Goodstein sequence grows so fast that you can only prove nonconstructively that it eventually finishes--its length, even for fairly small starting values, makes Graham's number look tiny.
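Both functions mentioned above are short enough to write down; here is a minimal Python sketch (function names and the step cap are my own choices):

```python
import sys

sys.setrecursionlimit(100_000)

def ackermann(m, n):
    # Provably total but not primitive recursive: it terminates for
    # every input, yet the number of steps explodes almost immediately.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def bump_base(n, b):
    # Rewrite n in hereditary base-b notation (exponents too), then
    # replace every b with b + 1 -- the key step of a Goodstein sequence.
    if n == 0:
        return 0
    total, e = 0, 0
    while n > 0:
        d = n % b
        if d:
            total += d * (b + 1) ** bump_base(e, b)
        n //= b
        e += 1
    return total

def goodstein(seed, max_steps=50):
    # Goodstein sequence: bump the base, subtract 1, repeat.
    # It provably reaches 0 for every seed, but except for tiny seeds
    # only after unimaginably many steps.
    terms, n, b = [seed], seed, 2
    while n > 0 and len(terms) <= max_steps:
        n = bump_base(n, b) - 1
        b += 1
        terms.append(n)
    return terms
```

ackermann(3, 3) is already 61, and ackermann(4, 2) has 19,729 digits. goodstein(3) happens to die quickly, giving [3, 3, 3, 2, 1, 0], but goodstein(4) starts 4, 26, 41, ... and outlives any feasible max_steps, even though it provably hits 0 eventually.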
So you've got a situation where you can make a valid and useful deduction (e.g. h(n)=1) about a piece of software only if you can prove that for every n there's a number t such that f(n) finishes computing in under t steps, where t might be unimaginably enormous. But you don't need to compute t or care how large it is; all you have to do is prove that t exists, to ensure that your reasoning about some other part of the program is actually sound. That, then, is a practical use of theories involving enormous numbers.
Harvey Friedman has written some "Lecture notes on enormous integers" [1]. The math is fairly technical, but you might be able to get a sense of the topic just from the English descriptions. 208.70.31.206 ( talk) 05:23, 11 July 2009 (UTC)
Hello, I am looking at a solution to the problem "Suppose f is analytic and non-vanishing on an open set U. Prove that log |f| is harmonic on U." The solution here is to show that Log(f(z)) is holomorphic on some neighborhood of each point of U; then log |f(z)| is its real part and hence harmonic. But I do not understand the log function very well, especially where it is holomorphic. So the basic idea makes sense, but the details of why the composite is holomorphic do not. I've looked through my undergrad book and it does not seem to say where log is holomorphic (we can assume the principal branch). I also looked at the Complex logarithm article, and it does not help me understand much either. It says, under the section Logarithms of holomorphic functions, that
If f is a holomorphic function on a connected open subset U of ℂ, then a branch of log f on U is a continuous function g on U such that e^g(z) = f(z) for all z in U. Such a function g is necessarily holomorphic with g′(z) = f′(z)/f(z) for all z in U.
What I don't get is that f(z) could be 0 at some point, and then this makes no sense for two reasons: e^z is never 0, and the derivative as shown would have a 0 in the denominator. Is the article wrong, or do I not understand? Maybe a better question is, "Is the article wrong?" I think I do not understand either way. Any help would be much appreciated! StatisticsMan ( talk) 15:02, 10 July 2009 (UTC)
This is probably not a good answer (since it uses ideas beyond undergraduate curricula), but this is my solution: if f is analytic on some open set U, then log |f| is subharmonic there (f is allowed to vanish). If, in addition, f is nonvanishing, then 1/f is analytic, so log |1/f| = -log |f| is subharmonic as well. Hence, log |f| is harmonic. I guess my point is that it is possible to approach the problem from real analysis. (This is a very good problem, so I couldn't resist.) -- Taku ( talk) 22:29, 10 July 2009 (UTC)
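One cheap sanity check (my own addition, not part of the thread): pick a concrete analytic, nonvanishing f and verify numerically that log |f| has vanishing Laplacian. Here f(z) = z^2 + 5, whose zeros are at z = ±i√5, so it is nonvanishing near the origin:

```python
import math

def u(x, y):
    # u = log |f| for f(z) = z^2 + 5, analytic and nonvanishing
    # on a neighborhood of 0.
    z = complex(x, y)
    return math.log(abs(z * z + 5))

def laplacian(g, x, y, h=1e-3):
    # Standard five-point finite-difference estimate of g_xx + g_yy.
    return (g(x + h, y) + g(x - h, y)
            + g(x, y + h) + g(x, y - h)
            - 4 * g(x, y)) / (h * h)
```

laplacian(u, 0.3, 0.2) comes out numerically indistinguishable from zero, consistent with log |f| being harmonic; a non-harmonic control such as x^2 gives its exact Laplacian, 2.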
The article Quaternion algebra claims:
One illustration of the strength of this analogy concerns unit groups in an order of a rational quaternion algebra: it is infinite if the quaternion algebra splits at ∞ and it is finite otherwise, just as the unit group of an order in a quadratic ring is infinite in the real quadratic case and finite otherwise.
Is this correct? I would expect the unit group of a rational quaternion algebra to be infinite in both cases, since a splitting quadratic field can be embedded in the quaternion algebra in infinitely many ways. -- Roentgenium111 ( talk) 15:59, 10 July 2009 (UTC)
What is the purpose of non-base-10 math? I can understand the uses of hex or binary, but are there practical uses for base 5, or base 28, etc.? Googlemeister ( talk) 19:54, 10 July 2009 (UTC)
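Any integer base works the same way positionally: repeated division by the base peels off digits, least significant first. A small illustrative converter (a sketch of mine; nothing beyond Python's built-in integers is assumed):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(n, b):
    # Convert a nonnegative integer to a digit string in base b
    # (2 <= b <= 36, limited only by the digit alphabet above).
    if not 2 <= b <= 36:
        raise ValueError("base out of range")
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, b)
        out.append(DIGITS[r])
    return "".join(reversed(out))
```

For example, to_base(255, 16) is "ff", and to_base(30, 28) is "12" (one 28 plus 2) -- the arithmetic is identical in every base; only the grouping changes.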
That's a number, right? Am I out of my mind? I was discussing 0.999... repeating, and someone said that 0.333... is a limit. I told him that 0.333... isn't a limit, it's a number. He then told me he has a BS in mathematics and asked for my credentials... am I going crazy? I know it can be expressed as a limit (as well as an infinite series), but is 0.333... itself a limit? Thanks-- 12.48.220.130 ( talk) 20:04, 10 July 2009 (UTC)
But how can it be a limit? The mathematical limit article says that something has to be the limit of a function in order to be a limit. And if it were a limit, wouldn't that mean that 2 or pi would also be limits?-- 12.48.220.130 ( talk) 20:47, 10 July 2009 (UTC)
OK, but if I showed a bunch of mathematicians the symbol "2", the first thing that would come to their minds is "oh, that's a number", not "oh, that's a limit".-- 12.48.220.130 ( talk) 21:35, 10 July 2009 (UTC)
But that's when I think it gets messy. Because if you say "oh, 0.333... is just shorthand for a limit", then what's to stop someone from saying that pi is shorthand for a limit, or that two is shorthand for a limit as well? Does it really matter that 0.333... is a repeating decimal in deciding whether to call it a limit or not?-- 12.48.220.130 ( talk) 13:33, 12 July 2009 (UTC)
But any number can, by definition, be written as an infinite series. Just because you choose to define 0.333... as an infinite series doesn't mean someone can't just as arbitrarily define 2 or pi as an infinite series. That's just the number system we use. In base 10, 0.333... is an infinite series. But in other bases, such as base 4, 0.333... = 1. And numbers that are infinite series in base 4 might not be infinite series in base 10. So defining numbers by their decimal representation is flawed, especially considering that fractions came first.-- 12.48.220.130 ( talk) 22:28, 14 July 2009 (UTC)
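The base-4 claim above is easy to check exactly with rational arithmetic: in any base b, the repeating expansion 0.ddd... is the geometric series d/b + d/b^2 + ..., whose limit is d/(b - 1). A quick sketch (my own, using only the standard library):

```python
from fractions import Fraction

def partial(digit, base, terms):
    # Partial sums of 0.ddd... in the given base, as exact fractions.
    return sum(Fraction(digit, base ** k) for k in range(1, terms + 1))

def limit(digit, base):
    # Closed form of the geometric series: d / (b - 1).
    return Fraction(digit, base - 1)
```

limit(3, 4) is exactly 1 (so 0.333... in base 4 equals 1), limit(3, 10) is exactly 1/3, and the partial sums approach those limits: partial(3, 4, n) falls short of 1 by precisely 1/4^n.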
OK, I've got two points on the surface of the earth, identified only by their lat/long coordinates in DMS. I want to know the "straight-line" distance between them, in miles or km.
First, I have to convert everything to one unit, logically degrees, but seconds might actually make the endgame easier, since I recall 1 second of lat is close to 1 mile. Further, I recall that longitudinal distance has to be reduced by the cosine of the latitude ... but I can't work out the rest of it.
BUT maybe I don't have to think much harder than that, i.e. Pythagoras is Close Enough. For "small" distances, the corrections for a spherical or even ellipsoidal surface won't make a noticeable difference from a plane. For example, if the planar distance is 1000 km and the spherical distance is 999 or 1001, that's 1/10 of 1%. At what point does "small" become significant? -- DaHorsesMouth ( talk) 22:20, 10 July 2009 (UTC)
1. 1 nautical mile is about 1 minute of arc at the equator, not 1 second.
2. The simplest approach to finding the spherical distance is to convert the lat/lon to rectangular coordinates with the center of the earth at the origin. You get two 3-dimensional vectors, whose dot product gives you the angle between them. Since you know the earth's radius, the angle gives you the distance. 208.70.31.206 ( talk) 03:17, 11 July 2009 (UTC)
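That recipe can be sketched in a few lines of Python (function names and the 6371 km mean-radius figure are my own choices; the real Earth is an ellipsoid, so this is accurate to roughly 0.5%):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius of the Earth

def dms_to_degrees(d, m, s):
    # Degrees/minutes/seconds -> decimal degrees.
    return d + m / 60 + s / 3600

def unit_vector(lat_deg, lon_deg):
    # Lat/lon on the unit sphere -> rectangular (x, y, z) coordinates.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def great_circle_km(lat1, lon1, lat2, lon2):
    # Dot product of the two unit vectors = cosine of the central
    # angle; arc length = radius * angle.
    a, b = unit_vector(lat1, lon1), unit_vector(lat2, lon2)
    dot = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    return EARTH_RADIUS_KM * math.acos(dot)
```

As a check, great_circle_km(0, 0, 0, 90) spans a quarter of the equator, about 10,007.5 km. For nearly antipodal points the acos formulation loses precision, which is why production code usually prefers the haversine or Vincenty formulas.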
Proves once again that knowing what something is properly called is the single best starting point for learning about it. Thanks to all! Issue is resolved.