Mathematics desk
< February 14 | << Jan | February | Mar >> | Current desk >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
In my physics class, we were working with integration to calculate the electric field at certain points due to hypothetical charged objects. For instance, given a hoop of uniform charge density, we'd be required to find the electric field at a point P (usually on the x or y axis). We'd do this by breaking the object into infinitesimal pieces of charge and integrating the contributions to the field from each piece at the point P. However, we were taught to integrate the components of the field vector separately, because simply integrating the magnitudes would not give us the resultant field vector at that point: that would produce a vector in the positive direction with the magnitude of the integral, but a field is a vector with both magnitude and direction, which ordinary scalar integration does not capture.
This got me thinking (and here's where the math component of my question comes in): is there a vector analogue of integration, obeying the rules of vector addition, subtraction, and so on, that yields such an integral vector? In the case of the electric field, say we had a positively charged circular hoop centred at the origin and we were finding the field at our point P. Each infinitesimal electric field vector could be given by a parametric equation in the angle around the hoop, and if we could somehow integrate all of those vectors, we'd get the resultant integral vector. Is there such a thing, and if so, what does the notation look like (and how could we model this example)? If I didn't make sense somewhere, I'd be more than happy to clarify. Thank you! 70.54.113.74 ( talk) 07:45, 15 February 2016 (UTC)
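The component-wise procedure described above can be sketched numerically. This is a minimal illustration, not anything from the original thread: the function name, the choice of an axial field point, and the discretization into n elements are all my own assumptions. Each charge element's vector contribution is resolved into components and the components are summed separately, which is exactly the "integrate the components" recipe.

```python
import math

def hoop_field(Q, R, z, n=100000):
    """Sum the field of a uniformly charged hoop (radius R, total charge Q,
    lying in the xy-plane) at the axial point (0, 0, z), accumulating each
    element's contribution component by component."""
    k = 8.9875517923e9          # Coulomb constant, N m^2 / C^2
    dq = Q / n                  # charge of one infinitesimal element
    Ex = Ey = Ez = 0.0
    for i in range(n):
        theta = 2 * math.pi * i / n
        # position of the charge element on the hoop
        sx, sy = R * math.cos(theta), R * math.sin(theta)
        # vector from the element to the field point (0, 0, z)
        rx, ry, rz = -sx, -sy, z
        r = math.sqrt(rx * rx + ry * ry + rz * rz)
        mag = k * dq / r**2
        # accumulate the components separately
        Ex += mag * rx / r
        Ey += mag * ry / r
        Ez += mag * rz / r
    return Ex, Ey, Ez
```

For a point on the axis the transverse components cancel by symmetry and the sum reproduces the textbook result E_z = kQz / (R² + z²)^(3/2), which is one way to check the component-wise approach.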
Do tensor products in category theory and in linear algebra refer to the same thing? -- Ancheta Wis (talk | contribs) 08:18, 15 February 2016 (UTC)
“The ordinary tensor product makes vector spaces... into monoidal categories. Monoidal categories can be seen as a generalization of these and other examples.”
Let f : Rⁿ → R be a smooth function, and let x be a point at which the gradient of f is zero. What is the vector along which the function decreases the most? Strictly speaking, what is ? Clearly, this vector lies in the space spanned by the eigenvectors of the Hessian matrix of f with negative eigenvalues (as RDBury explained to me in my last question). But what is it exactly? Is it just the eigenvector corresponding to the most negative eigenvalue? Or maybe some linear combination of the eigenvectors? עברית ( talk) 19:30, 15 February 2016 (UTC)
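A small numerical sketch of the situation being asked about (function name and the example function are my own; this is an illustration, not the answer given in the thread). At a critical point the second-order change along a unit vector v is (1/2)·vᵀHv, so the fastest local decrease is along the eigenvector of the Hessian H with the most negative eigenvalue:

```python
import numpy as np

def steepest_descent_direction(hess):
    """At a point with zero gradient, f(x + tv) - f(x) ~ (t^2 / 2) v^T H v,
    so the fastest local decrease is along the eigenvector of the Hessian
    with the most negative eigenvalue.  Returns None at a local minimum."""
    vals, vecs = np.linalg.eigh(hess)   # eigh returns ascending eigenvalues
    if vals[0] >= 0:
        return None                     # no second-order descent direction
    return vecs[:, 0]                   # unit eigenvector, smallest eigenvalue

# Example: f(x, y) = x^2 - 3*y^2 has a saddle at the origin, with Hessian
H = np.array([[2.0, 0.0],
              [0.0, -6.0]])
v = steepest_descent_direction(H)       # points along the y-axis (up to sign)
```

Along ±v the quadratic term is (1/2)(−6)t², the most negative available, which matches the intuition that a single eigenvector (not a combination) gives the steepest decrease when the smallest eigenvalue is simple.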
GNFS (the general number field sieve) is the fastest known way to find prime factors of numbers with 200 or so digits. However, go to the following URL:
http://www.worldofnumbers.com/topic1.htm
Search for the entry dated December 3, 2014. You'll see a 251-digit composite number that they say is very difficult to factor even with GNFS. Do you know of any proposed way to find prime factors of numbers this large? Georgia guy ( talk) 22:00, 15 February 2016 (UTC)
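For contrast with GNFS (which is far too involved to sketch here), a minimal Pollard's rho implementation shows what elementary general-purpose factoring looks like; this is my own illustrative sketch, not a method from the thread. Rho succeeds quickly only when the number has a small prime factor, which is exactly why a 251-digit number with two large factors is out of reach for anything but sieve-class algorithms:

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho: find a nontrivial factor of composite n.  Fast when n
    has a small prime factor; hopeless for numbers whose smallest factor is
    itself enormous, which is the regime where GNFS is needed."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)      # constant in the iteration x -> x^2 + c
        d = 1
        while d == 1:
            x = (x * x + c) % n         # tortoise: one step
            y = (y * y + c) % n         # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                      # retry with new parameters on failure
            return d

f = pollard_rho(8051)                   # 8051 = 83 * 97, so f is 83 or 97
```

The running time grows roughly with the square root of the smallest prime factor, so even a 30-digit factor is already impractical for this method.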