Mathematics desk
< September 26 | << Aug | September | Oct >> | September 28 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
I was wondering if there was a defined method for this sort of thing. I would guess that it would be something like this:
If you had a 3x3x3 matrix A with these slices (looking at it straight on, going from front to back):
And these slices looking at it from the right, from back to front (same matrix)
You would do something like this to multiply it by itself:
I don't know if this works (I just extended the rows and columns to 3x3 matrices), or if you need to multiply even more matrices, but I was wondering if you guys had any thoughts? Aacehm ( talk) 00:40, 27 September 2011 (UTC)
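One way to make the question concrete (the original slices were not preserved in this archive, so the array below is a made-up example): treat each front-to-back slice as an ordinary 3x3 matrix and square it with the usual matrix product. This slice-by-slice rule is only one of several possible definitions of "multiplying a 3x3x3 array by itself", sketched here for illustration.

```python
# A hypothetical 3x3x3 array, stored as a list of three 3x3 "front-to-back"
# slices (the poster's actual slices did not survive in the archive).
A = [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],   # front slice
    [[2, 0, 0], [0, 2, 0], [0, 0, 2]],   # middle slice
    [[0, 1, 0], [1, 0, 0], [0, 0, 3]],   # back slice
]

def matmul(X, Y):
    """Ordinary matrix product of two 3x3 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# One natural reading of "multiply it by itself": square each 2-D slice
# independently, treating every layer as its own matrix.
A_squared = [matmul(S, S) for S in A]
```

Other readings exist (for instance, contracting over the depth axis instead), which is exactly why there is no single standard "3-D matrix product".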
Suppose I have a function that I know is holomorphic almost everywhere, and I know its power series expansion about some point, but this series doesn't converge everywhere, so the function has singularities. Is there an easy way to judge (a) whether the function is an injection, and (b) whether it has a branch point? The reason I'm interested is that I'd like to try it for a global-optimization procedure (inverting the series, which obviously will work properly only if the function is injective; I think the branch point stuff could mess with it as well): as far as I know, Newton-type methods are local-optimization ones, not global. Thanks. -- Leon ( talk) 06:06, 27 September 2011 (UTC)
I apologize if this is too trivial a question, but is an arbitrary polynomial invertible as a function if the domain is suitably restricted? It seems to me it is, but I can't quite justify it. Can someone explain why? Thanks - Shahab ( talk) 07:27, 27 September 2011 (UTC)
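One way to see it in action (the polynomial and interval below are arbitrary choices for illustration): a nonconstant real polynomial is strictly monotonic between consecutive critical points, so restricted to such an interval it is injective and has an inverse, which can even be computed numerically by bisection.

```python
def p(x):
    # Example polynomial: p(x) = x^3 - 3x.  Its derivative 3x^2 - 3 is
    # positive for x > 1, so p is strictly increasing on [1, 3] and
    # therefore injective there.
    return x**3 - 3*x

def p_inverse(y, lo=1.0, hi=3.0, tol=1e-12):
    """Invert p on [lo, hi] by bisection, assuming p is increasing there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = p_inverse(2.0)   # p(2) = 8 - 6 = 2, so this should return ~2
```

On the whole real line p is not injective (p(0) = p(sqrt(3)) = 0), which is why the restriction of the domain matters.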
Can someone give me an example of an ultrafilter which is not principal? — Preceding unsigned comment added by 93.173.34.90 ( talk) 10:37, 27 September 2011 (UTC)
In a sense, nobody can show you such an example. The set of all cofinite subsets of an infinite set is a filter. Now look at some infinite subset whose complement is also infinite. Decide whether to add it or its complement to the filter. Once you've included it, all of its supersets are included and all subsets of its complement are excluded, and all of its intersections with sets already included are included, and all unions of its complement with sets already excluded are excluded, etc. Next, look at some infinite subset whose complement is also infinite and that has not yet been included or excluded, and make the same decision. And keep going....."forever". That last step is where Zorn's lemma or the axiom of choice gets cited. Michael Hardy ( talk) 17:32, 27 September 2011 (UTC)
I see. Thank you Both! — Preceding unsigned comment added by 93.173.34.90 ( talk) 17:41, 27 September 2011 (UTC)
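The point that nobody can exhibit such an example can be checked by brute force in the finite case: on a finite set every ultrafilter is principal, so nonprincipal ones only live on infinite sets (where the construction above needs Zorn's lemma). A small sketch, enumerating all families of subsets of a three-element set (the set {0, 1, 2} is an arbitrary choice):

```python
from itertools import combinations

def powerset(universe):
    """All subsets of universe, as frozensets."""
    elems = list(universe)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def is_ultrafilter(F, universe):
    """Check the ultrafilter axioms for a family F of subsets of universe."""
    U = frozenset(universe)
    if frozenset() in F:                       # proper: no empty set
        return False
    for A in F:
        for B in powerset(universe):
            if A <= B and B not in F:          # closed under supersets
                return False
        for B in F:
            if (A & B) not in F:               # closed under intersections
                return False
    for A in powerset(universe):               # maximal: A or its complement
        if A not in F and (U - A) not in F:
            return False
    return True

universe = {0, 1, 2}
subsets = powerset(universe)
ultras = []
for r in range(len(subsets) + 1):
    for F in combinations(subsets, r):
        if is_ultrafilter(frozenset(F), universe):
            ultras.append(frozenset(F))

# Every ultrafilter found contains a singleton {x}, i.e. it is the
# principal ultrafilter of all sets containing x.
```

The enumeration finds exactly three ultrafilters, one per point of the universe, which is why the cofinite-filter construction in the answer above has to start from an infinite set.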
What is the highest possible order of an explicit Runge-Kutta method? -- 84.62.204.7 ( talk) 20:27, 27 September 2011 (UTC)
What is the highest order of a known explicit Runge-Kutta method? -- 84.62.204.7 ( talk) 12:58, 28 September 2011 (UTC)
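For reference while the order question stands, the classical fourth-order method is the usual benchmark; the sketch below (not an answer about the maximal attainable order, which is governed by the Runge-Kutta order conditions) applies it to y' = y, whose exact solution at t = 1 is e.

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order explicit Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e; with only 10 steps RK4 is already
# accurate to a few parts in a million.
approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 10)
```

RK4 is special in that its order equals its stage count; explicit methods of higher order need strictly more stages than their order.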
Hi wikipedians: I'm no mathematician, and I came across a formula in a paper I'm reading that I can't make sense of. Could someone help me with this? It says that for small values of p:
(1-p)^N ≈ e^(-Np)
Why is this? Any help would be appreciated. I don't have a digital copy of the paper or I would post a link. Thanks! Registrar ( talk) 21:12, 27 September 2011 (UTC)
Thanks both of you! The theory behind the first explanation isn't perfectly clear to me, but I can see from graphing that it works. The second explanation makes perfect sense. So thanks very much. Registrar ( talk) 21:37, 27 September 2011 (UTC)
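The standard reasoning can be checked numerically (the values of p and N below are chosen for illustration): ln(1 - p) ≈ -p for small p, so (1 - p)^N = exp(N ln(1 - p)) ≈ exp(-Np).

```python
import math

# For small p, ln(1 - p) ≈ -p, so (1 - p)^N ≈ exp(-N * p).
p, N = 0.01, 50
exact = (1 - p) ** N          # the left-hand side
approx = math.exp(-N * p)     # the approximation

# The error shrinks as p gets smaller (for fixed N * p):
exact2 = (1 - 0.001) ** 500
approx2 = math.exp(-0.5)
```

Note that the approximation is good when p is small even if Np is not; what matters is Np^2 being small.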
The approximation is actually better than either explanation suggests, because
or in other words,
Under what circumstances is this equality true: ∑_{n=0}^∞ (a_n + b_n) = ∑_{n=0}^∞ a_n + ∑_{n=0}^∞ b_n? Do the series ∑ a_n and ∑ b_n have to be absolutely convergent or just convergent? Widener ( talk) 21:30, 27 September 2011 (UTC)
If the limits as N → ∞ of both partial sums exist (these limits are, by definition, the sums to infinity), then, because the limit of a sum is the sum of the limits, the sum to infinity of the termwise sum equals the sum of the two individual sums to infinity. So ordinary convergence of both series is enough; absolute convergence is not needed. Count Iblis ( talk) 22:47, 27 September 2011 (UTC)
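A quick numerical illustration with two geometric series (my own example, not one from the thread): ∑ 1/2^n = 2 and ∑ 1/3^n = 3/2, and summing the two series termwise gives the same total 7/2.

```python
# Numerically: sum 1/2^n = 2 and sum 1/3^n = 3/2, and the termwise sum
# of the two series converges to 2 + 3/2 = 7/2.
N = 60  # enough terms that the geometric tails are negligible
s_a = sum(1 / 2**n for n in range(N))
s_b = sum(1 / 3**n for n in range(N))
s_ab = sum(1 / 2**n + 1 / 3**n for n in range(N))
```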
You can also use this in the case of divergent summations. Suppose e.g. that ∑ c_n is convergent and we write c_n = a_n + b_n, but both ∑ a_n and ∑ b_n are divergent. Then define the functions f(z) = ∑ a_n z^n, g(z) = ∑ b_n z^n, and h(z) = ∑ c_n z^n.
If f(z) can be analytically continued to the entire complex plane, then h(z) = f(z) + g(z) and you can put z = 1 in here, despite the series for f(z) and g(z) not converging there. If f(z) and g(z) have poles at z = 1, then you can evaluate h(1) by computing the limit of f(z) + g(z) as z → 1. Count Iblis ( talk) 23:16, 27 September 2011 (UTC)
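A concrete instance of this pole cancellation (my own choice of terms, not one from the thread): take a_n = 1 and b_n = (1/2)^n - 1, so c_n = (1/2)^n and ∑ c_n = 2 while ∑ a_n and ∑ b_n both diverge. Then f(z) = 1/(1-z) and g(z) = 1/(1-z/2) - 1/(1-z) each blow up at z = 1, but their sum is regular there:

```python
def f(z):
    # f(z) = sum of 1 * z^n = 1/(1 - z): the series diverges at z = 1
    return 1 / (1 - z)

def g(z):
    # g(z) = sum of ((1/2)^n - 1) * z^n = 1/(1 - z/2) - 1/(1 - z)
    return 1 / (1 - z / 2) - 1 / (1 - z)

def h(z):
    # h = f + g termwise: sum of (1/2)^n z^n = 1/(1 - z/2), regular at z = 1
    return f(z) + g(z)

value = h(0.999)   # approaches h(1) = 1/(1 - 1/2) = 2 as z -> 1
```

The poles of f and g at z = 1 cancel exactly, which is why the limit of f(z) + g(z) recovers the sum of the convergent series ∑ c_n.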