This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The amplitude of the phasor addition result should be stated unambiguously, that is, as the positive square root of the right-hand side. And the phase shift of the phasor addition result should not use arctan but atan2, to get the quadrant right:
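To make the quadrant point concrete, here is a small Python sketch (the two phasors and their values are made up for illustration), showing that atan2 recovers the correct phase of the sum while a bare arctan does not:

```python
import math

# Two hypothetical phasors A1∠θ1 and A2∠θ2, chosen so that the sum lands
# in the second quadrant (real part negative).
A1, th1 = 1.0, 0.0
A2, th2 = 2.0, 3 * math.pi / 4

x = A1 * math.cos(th1) + A2 * math.cos(th2)  # real part of the sum
y = A1 * math.sin(th1) + A2 * math.sin(th2)  # imaginary part of the sum

A3 = math.sqrt(x**2 + y**2)    # amplitude: the positive square root
th3_atan2 = math.atan2(y, x)   # correct phase, in the second quadrant
th3_arctan = math.atan(y / x)  # wrong: arctan cannot see the sign of x
```

Since x < 0 and y > 0 here, the two phase results differ by exactly π.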
Phasors are used for a lot more than just AC circuit analysis. I think the exact definition may actually differ depending on the field you are working in, such as signal processing, where phasors most definitely do have a frequency. This is probably the best general definition of this mathematical tool: [1]
Is this ever used in regular contexts besides power calculations? I imagine it is. - Omegatron 21:23, August 15, 2005 (UTC)
Here is the rule:
Doesn't that imply:
which is not true. What am I missing?
-- Bob K 09:38, 21 August 2007 (UTC)
Bob, consider that a phasor is just the complex amplitude of the frequency-domain representation of the sinusoid. Multiplying two phasors is essentially multiplication in the frequency domain, which results in convolution, not multiplication, in the time domain. That's why we say that 'the product (or ratio) of two phasors is not itself a phasor'. Alfred Centauri 13:01, 21 August 2007 (UTC)
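A numeric sketch of this point (the example values are mine, not from the discussion): the product of the two time-domain sinusoids contains a DC term plus a term at twice the frequency, so no single phasor at the original frequency can represent it:

```python
import cmath
import math

w = 2 * math.pi * 50       # an arbitrary illustration frequency
P1 = cmath.rect(3.0, 0.4)  # phasor A1∠θ1
P2 = cmath.rect(2.0, -1.1) # phasor A2∠θ2

def sinusoid(P, t):
    # the real signal that phasor P stands for: A*cos(w t + θ)
    return (P * cmath.exp(1j * w * t)).real

t = 0.0123
product = sinusoid(P1, t) * sinusoid(P2, t)

# Product identity: (A1 A2 / 2) [cos(θ1-θ2) + cos(2 w t + θ1 + θ2)];
# a constant plus a component at 2w, not a sinusoid at w.
expected = 0.5 * abs(P1) * abs(P2) * (
    math.cos(cmath.phase(P1) - cmath.phase(P2))
    + math.cos(2 * w * t + cmath.phase(P1) + cmath.phase(P2))
)
```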
Thanks, but the same source provides this definition:
which simply means to me that is a shorthand notation. The "equals" sign should mean we can directly substitute the full mathematical notation for the shorthand:
Similarly,
I'm just doing the obvious application of the definition. If you still disagree, I would appreciate it if someone would show a trigonometric example (without phasors) of multiplication, and then illustrate how phasors simplify the math and arrive at the same answer, as the article implies, or did before I removed this statement:
FWIW, what we seem to be talking about here is the analytic representation of a signal. The analytic representations of and are and And the product of the analytic signals is which represents the real signal: which is not the product of and Therefore we have to be careful not to mislead people (and ourselves) about the multiplication of analytic signals. And I think the same goes for phasors. We really need an example of the multiplication property in action. When and why would someone want to use it?
-- Bob K 14:44, 21 August 2007 (UTC)
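The caution above is easy to check numerically (the specific signals below are my own illustration): the real part of the product of two analytic signals is not the product of the two real signals:

```python
import cmath
import math

w, t = 1.0, 0.7
z1 = cmath.exp(1j * (w * t + 0.3))  # analytic representation of cos(w t + 0.3)
z2 = cmath.exp(1j * (w * t - 0.9))  # analytic representation of cos(w t - 0.9)

lhs = (z1 * z2).real     # real signal represented by the product: cos(2 w t - 0.6)
rhs = z1.real * z2.real  # the actual product of the two real signals
```

Here lhs is a clean sinusoid at 2w, while the true product rhs also carries a DC offset, so the two disagree.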
If http://en.wikibooks.org/wiki/Circuit_Theory/Phasor_Arithmetic is "confused", then I am inclined to remove the link to it, because I don't know how to fix it. But that link was intended to clarify this statement:
After reading what you said, I think what's missing is a statement that the product of a phasor and a complex impedance is another phasor. But the product of two phasors (or the square of one) is not another phasor.
While we're on a roll, what do you think of this excerpt:
-- Bob K 16:34, 21 August 2007 (UTC)
Good idea! I also moved the trig stuff to a more appropriate article.
-- Bob K 21:54, 21 August 2007 (UTC)
I removed this statement from the Circuit Laws section because it isn't quite correct and isn't needed anyhow to justify the use of phasors. The problem is that phasors are, in general, complex numbers. DC circuits do not have complex voltages or currents. So, while phasors generalize DC circuit analysis to AC circuits, we can't really go back the other way unless we want to admit complex DC sources. Alfred Centauri 01:32, 27 February 2006 (UTC)
But my point is precisely that equating DC with a zero frequency sinusoid, as you have done above, is not quite correct. Consider the following AC circuit:
A voltage source of 1 + j0 Vrms in series with a 1 ohm resistor and a current source of 0 - j1 Arms. The average power associated with the resistor is 1 W and is independent of frequency, right? But wait; recall that the time domain voltage source and current source functions are given by:
Setting the frequency to zero we get:
With a 'DC' current of 0 A, the power associated with the resistor is 0 W, but this result conflicts with the result above. Clearly, in the context of AC circuit analysis, it is not quite correct to say that DC is just zero frequency. Alfred Centauri 22:25, 28 March 2006 (UTC)
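A numeric sketch of the example above (treating the series loop current as fixed by the current source, so the resistor power is the time average of i_s(t)² R): the average power is 1 W for every nonzero frequency, but collapses to 0 W when ω is set to zero before averaging:

```python
import math

def avg_resistor_power(w, R=1.0, N=100_000):
    # time average of i_s(t)^2 * R for i_s(t) = sqrt(2) * cos(w t - pi/2)
    T = 2 * math.pi / w if w else 1.0  # any window works when w = 0
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * T / N          # midpoint rule over one period
        i = math.sqrt(2) * math.cos(w * t - math.pi / 2)
        total += i * i * R
    return total / N
```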
Here's something else to consider. The rms voltage of a DC source is simply the DC voltage value. The rms voltage of an AC voltage source is the peak voltage over the square root of 2. Since this result is independent of frequency, it seems reasonable to believe that the rms voltage of a zero frequency cosine is equal to the rms value for any non-zero frequency cosine. However, if we insert a frequency of zero into a cosine, the result is the constant peak value, not the constant rms value. Once again, it does not appear to be entirely correct to say that a DC source is a sinusoidal source of zero frequency. Alfred Centauri 23:36, 28 March 2006 (UTC)
Not true! Look at the expression for i_s(t). That is a real source, my friend, regardless of frequency. It is the phasor representation of this source that is complex. Further, look at the 2nd example I give. No complex sources there, right? Alfred Centauri 02:16, 29 March 2006 (UTC)
Phasor analysis can be and is used for power calculations. Although you are correct that the product of two phasors is not a phasor, this fact does not imply your assertion. After all, the fact that impedance, being the ratio of two phasors, is not itself a phasor does not make impedance any less useful a concept. Similarly, the complex power given by the product of the voltage phasor and the conjugate of the current phasor (both rms), while not a phasor, is nonetheless a valid and useful quantity whose real part is the time-average power (not the 'rms' power!) and whose imaginary part is the peak reactive power.
Your statement that "the rms value of a zero-frequency current at π/2 will be 0" is not even wrong. There is no such thing as an rms value at π/2. The rms value of a unit-amplitude sinusoid - regardless of frequency or phase - is 1/√2. Alfred Centauri 00:29, 7 November 2006 (UTC)
There's no room for disagreement here. By definition, the rms value of a sinusoid is:
Note that this result holds in the limit as the period T goes to infinity. Thus, your assertion that the phase is important in determining the rms value is obviously false - even in the case where the frequency is zero (infinite period). If this isn't clear to you yet, then think about the time-average power delivered to a resistor by a sinusoidal voltage source with an arbitrarily large period (T = 15 billion years, for example) compared to that delivered by a true DC voltage source. Alfred Centauri 05:50, 7 November 2006 (UTC)
While is true for finite t, it does not hold as t goes to infinity (0 * infinity can be any number). However, to take the rms value of a zero-frequency sinusoid, we must in fact integrate over all time. It is for this reason that the rms value of the zero frequency sinusoid on the left is while the rms value of the zero frequency sinusoid on the right is . Alfred Centauri 06:16, 7 November 2006 (UTC)
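The limiting argument can be sketched numerically (the window length and phases below are arbitrary choices of mine): for any fixed nonzero frequency, the rms over a long enough window approaches 1/√2 regardless of phase:

```python
import math

def rms(w, phase, T, N=200_000):
    # midpoint-rule estimate of the rms of cos(w t + phase) over [0, T]
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * T / N
        total += math.cos(w * t + phase) ** 2
    return math.sqrt(total / N)
```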
Well, I think that phasors in electronics are an application of those concepts....
Gabriel
Electronics phasors don't behave like vectors, and they don't obey the rules of vectors studied in physics (statics & dynamics).
Nauman —Preceding unsigned comment added by 202.83.173.10 ( talk) 11:55, 24 October 2007 (UTC)
Nauman is right. Electronics phasors don't behave the same way as physics phasors; they are different. For reference, see Fundamentals of Electric Circuits by Sergio Franco, chapter 10, AC Response. —Preceding unsigned comment added by 202.83.164.243 ( talk) 14:53, 11 November 2007 (UTC)
I removed the following text from the intro:
(Important note: The phasor approach is for "steady state" calculations involving sinusoidal waves. It cannot be used when transients are involved. For calculations involving transients, the Laplace transform approach is often used instead.)
I don't believe this statement would be necessary even if it were true, but the fact is that phasors can be used to analyze transients. The bottom line is that the complex coefficients of a Fourier series or of the inverse Fourier integral are phasors.
It is usually stated that phasor analysis assumes sinusoidal steady state at one frequency and this is true as far as it goes. However, it is quite straightforward to extend phasor analysis to a circuit with multiple frequency sinusoidal excitation. When one does this, it is clear this extension is nothing other than frequency domain analysis. In other words, phasor analysis is frequency domain analysis at a fixed frequency. Alfred Centauri 23:22, 27 April 2007 (UTC)
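A sketch of that extension (the circuit values and source amplitudes below are hypothetical): solve a first-order RC low-pass one frequency at a time with phasors, superpose the results, and check that the output satisfies the time-domain ODE RC·v_out' + v_out = v_in:

```python
import cmath
import math

R, C = 1e3, 1e-6  # hypothetical RC low-pass filter
# excitation as (amplitude, angular frequency, phase) triples
sources = [(5.0, 2 * math.pi * 50, 0.0), (2.0, 2 * math.pi * 400, 1.0)]

def v_in(t):
    return sum(A * math.cos(w * t + th) for A, w, th in sources)

def v_out(t):
    # one phasor computation per frequency, then superposition
    total = 0.0
    for A, w, th in sources:
        H = 1 / (1 + 1j * w * R * C)  # divider transfer function at this w
        P = cmath.rect(A, th) * H     # output phasor at this w
        total += (P * cmath.exp(1j * w * t)).real
    return total

# finite-difference check of RC*v_out' + v_out = v_in at one instant
t, h = 0.004, 1e-7
dv = (v_out(t + h) - v_out(t - h)) / (2 * h)
residual = R * C * dv + v_out(t) - v_in(t)
```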
This article ( Phasor (electronics)) describes essentially the same thing as Phasor (physics). I believe there is no reason to maintain two different articles. The main difference between them is that this article describes the concept of phasors from the viewpoint of an engineer who uses it in a specific domain, while the other article is more general, but lacks some details that this one has. —The preceding unsigned comment was added by 129.177.44.96 ( talk) 13:32, 2 May 2007 (UTC).
I agree they describe the same thing. The physics article takes a vector approach, while the electronics article is based on complex numbers. The electronics article is far more practical, while the physics article is far more theoretical, and IMO, less useful. Concatenating the electronics article to the physics article as is would probably be a good idea. Neither article is very long, though the electronics article could do without the power engineering section - it doesn't add much to the concept of phasors.
All I can say is that the Phasor (electronics) page helped me pass ECE 203. The more general physics page wouldn't have helped nearly as much.
THEY ARE THE SAME KNOWLEDGE. THEY ARE THE SAME MATERIAL. THEY ARE SYNONYMOUS. To make people happy, just put the same concepts in both places.
Would it be okay to call "Phasor (Physics)" "Phasor" and rename "Phasor (electronics)" to "Application of phasors in electronics" or something of the like? All redundant material introducing abstract phasors could be deleted from the latter, and it could be considered building on the former. —1st year EE student —Preceding unsigned comment added by 128.54.192.216 ( talk) 16:07, 27 September 2007 (UTC)
Definitely merge them. Mpassman ( talk) 18:18, 17 November 2007 (UTC)
I like the section on Phasor arithmetic, and I suggest noting that linearity is also an important component of the technique. If, for example, the differential equation in the example were not linear, then phasors would be for naught. gaussmarkov 0:28, 6 September 2007 (UTC)
I think it should be better explained why the Re{} operator is usually dropped before some complicated algebra and then reapplied at the end. e.g. how the differential operator can be moved inside the Re{} operator. - Roger 23:14, 1 November 2007 (UTC)
Orthogonality (or something a little more elusive) is the reason for wanting to do operations in the complex domain, as I will try to explain. Linearity, loosely defined here as affecting the Re and Im components independently, is the property that allows certain mathematical operations to be moved inside the Re operator without changing the net effect. Such operations include differentiation, integration, time delay, multiplication by a (real-valued) scalar, and addition. No orthogonality is required to do these things.
If we limited our phasor arithmetic to those kinds of operations, the Im part we chose would not matter, and there would be no benefit at all. The benefit comes from choosing a waveform orthogonal to the Re component, such that:
which has the useful property:
whereas:
We see that in the complex domain, a time delay can be represented more simply as multiplication by a complex scalar (which is why impedances are complex).
So it appears that the motivation for working in the complex domain is to simplify the effect of time-delays / phase-shifts caused by linear devices with "memory".
-- Bob K 15:27, 12 November 2007 (UTC)
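The point above about time delays can be checked directly (the phasor, frequency, and delay below are arbitrary illustration values): delaying the real signal by τ is the same as multiplying its phasor by e^(-jωτ):

```python
import cmath

w = 2.0                   # angular frequency
tau = 0.35                # delay in seconds
P = cmath.rect(1.5, 0.2)  # phasor A∠θ for A*cos(w t + θ)

def signal(phasor, t):
    # the real signal represented by a phasor at frequency w
    return (phasor * cmath.exp(1j * w * t)).real

P_delayed = P * cmath.exp(-1j * w * tau)  # the delay as a complex scalar
t = 1.1
```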
The word "phasor" has two meanings. One meaning includes the e^(jωt) factor, and the other excludes it. Until recently these were relegated to different articles, one called "Phasor (physics)" and the other called "Phasor (engineering)" (I think). But there was fairly strong sentiment to merge the articles, which resulted in this one. The new introduction appears to be heading back toward the "Phasor (engineering)" article.
-- Bob K ( talk) 17:29, 17 December 2007 (UTC)
OK, I looked back. It was "Phasor (electronics)". I'm sure we can come up with a suitable introduction. It really is the same concept in both disciplines. But the physicists are less inclined to solve circuit equations.
-- Bob K ( talk) 23:13, 17 December 2007 (UTC)
I think that there should be a separate article dedicated to phasor diagrams, describing their construction and meaning.
-- User:Vanished user 8ij3r8jwefi 17:06, 21 May 2008 (UTC)
Sinusoids are easily expressed in terms of phasors, which are more convenient to work with than sine and cosine functions. A phasor is a complex number that represents the amplitude and phase of a sinusoid. Phasors provide a simple means of analyzing linear circuits. —Preceding unsigned comment added by 210.48.147.2 ( talk) 07:00, 27 July 2008 (UTC)
Does anyone have images of phasors in relation to FM and PM modulation? That would be a good section to add. —Preceding unsigned comment added by Daviddoria ( talk • contribs) 17:27, 14 September 2008 (UTC)
The section called "Definition" shows only cosine functions, yet the term "sine wave" is used to refer to expressions that involve only cos(...) terms. I think this customary paradox of mathematical terminology should be mentioned, even though the definition in the article refers only to the complex-number representation.
Tashiro ( talk) 17:43, 1 February 2009 (UTC)
The Wikipedia article on sine waves defines them by using the sin() function. It says that cosine waves are "sinusoidal". I understand that this is essentially a quibble, but people may have last walked out of a class on this 40 years ago, and which facts are remembered after that period of time is rather random. It would only require one sentence to clarify this. Tashiro ( talk) 06:50, 9 February 2009 (UTC)
I made "sine waves" an internal link. The linked article already contains this clarification:
A cosine wave is said to be "sinusoidal", because cos(x) = sin(x + π/2), which is also a sine wave with a phase-shift of π/2.
-- Bob K ( talk) 22:04, 10 February 2009 (UTC)
I teach a course on linear circuits, including a section on Phasors. The illustration titled "Graph of a phasor as a rotating vector" is both helpful and awkward at the same time. Some students find it harder to understand a typical phasor diagram (stationary, drawn on paper) after looking at the spinning phasor illustrated. I have two suggestions.
1.) In the top half of the illustration, some students cannot fathom that in the present illustration the time axis itself is sinking down the screen as the vector rotates. Instead, fix the time axis in place and let the dot move both vertically and horizontally. One could show exactly one cycle of a (stationary) sinusoid so that as the phasor rotates through one revolution a dot on the sinusoid sweeps upwards following the function, then rolls over to the bottom and starts upwards again with the next rotation of the phasor. A dashed line to show the dot's projection on the time axis would also help. Be sure the sinusoid has some initial phase offset, for example,
y(t) = cos(t + 2π/3)
Label the horizontal y(t) axis with "+1" on the right and "-1" on the left. Label the vertical axis with a "t" at or near the top and with a few tick marks labeled in radians from 0 to 2π.
2.) On the bottom half of the illustration create a strobe-light effect to illustrate a stationary view of the phasor, as would be drawn on paper. Show the rotating phasor dimmed to grey and dark blue most of the time. When the dot in the top half of the illustration rolls over to the t = 0 spot, show the bottom half with a bright white background and bright blue lines (as is presently shown). Then as the phasor continues to rotate, return to the dimmed view in the bottom half of the illustration. What I have in mind is the effect of a strobe light that flashes at time t = 0 and every period (T, 2T, 3T, etc.) thereafter, thus illustrating how we interpret a phasor diagram drawn on paper. Students need to understand that what we draw statically on a page of paper represents the angular relations at time t = 0.
(I sure wish I had the tools to put such an illustration together and just show it!)
Dfdeboer ( talk) 22:58, 18 November 2009 (UTC)
The Phasor Addition section took me a little bit of time to make sense of. The step-by-step calculation is trivial for the first few steps (combining terms inside the Re operator, etc.), and then out of nowhere: Boom! A3 and theta3 are introduced with a fairly complex definition for each. It eventually made sense to me when I drew out the phasor diagram and looked at the x/y components, but something should help fill that gap. -- Gradenko ( talk) 07:44, 23 January 2011 (UTC)
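The gap described above is just rectangular (x/y component) addition; a short sketch with made-up values shows the step:

```python
import cmath
import math

A1, th1 = 2.0, 0.5  # first phasor A1∠θ1
A2, th2 = 1.0, 2.0  # second phasor A2∠θ2

# sum the horizontal and vertical components separately
x = A1 * math.cos(th1) + A2 * math.cos(th2)
y = A1 * math.sin(th1) + A2 * math.sin(th2)

A3 = math.hypot(x, y)   # amplitude of the sum
th3 = math.atan2(y, x)  # phase of the sum, quadrant-correct
```

The result agrees with adding the two phasors directly as complex numbers.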
I am not sure how to fix them, so it would be great if somebody else could fix the syntax so that the equations start showing up. Cheers. Vignesh.Mukund ( talk) 11:20, 10 February 2014 (UTC)
The article had quite a poorly developed lead. I've added more essentials, including the origin as it is commonly attributed in (US) EE textbooks, but upon glancing at some Google Books snippets from Kline's biography of Steinmetz, I think the origin story was more complex. Alas, I don't have immediate access to Kline's book, but it's something to keep in mind (and go to) if counterclaims appear. 86.121.137.79 ( talk) 22:11, 13 January 2015 (UTC)
Phasors are used almost everywhere in power engineering, and their usefulness there should be described and linked. There is also no significant mention of lead and lag, which is important when comparing voltages and currents. A picture would help illustrate lead and lag, for example. It should also be mentioned that phase angles are relative to each other and that you can set a reference point (which is usually the voltage source). — Preceding unsigned comment added by 204.98.7.11 ( talk) 06:15, 7 December 2016 (UTC)
This article is very good for those of us who have studied phasors. However, it is useless to a newcomer. They will leave with less knowledge... why am I explaining this? Try to fix it. Over and out. — Preceding unsigned comment added by Longinus876 ( talk • contribs) 15:41, 15 December 2017 (UTC)
In the first paragraph of this article, it is stated that phasors are also called sinors (in older textbooks), and even a reference was cited to this. So, it seems like this is true.
However, in the book Fundamentals of Electric Circuits by Charles Alexander and Matthew Sadiku, phasors and sinors are defined differently. In the case of phasors, they are defined as usual, that is as Ae^(jθ); but sinors are defined as Ae^(jθ)e^(jωt) = Ae^(j(θ+ωt)).
I think this should be cleared up, so I'd like to confirm whether whoever cited that sinors are the same as phasors actually didn't confuse the two terms. Perhaps they could share a screenshot/photo of the relevant page of the cited textbook.
Alej27 ( talk) 07:01, 31 March 2019 (UTC)
I was recently editing the article on Fourier transforms, and I found a citation to a source with a ton of information on phasors. It's about the difference between i and j as representations of the complex component of phasors, as i is the usual standard in theoretical physics and j is the usual standard in electrical engineering, and all of the sign conventions that relate to that. It seems to be of sufficient relevance and importance to warrant inclusion, but this article doesn't mention anything of the sort. I don't have sufficient interest to want to invest the time to add the info to this article, but I figured I'd mention it here so that someone with an interest could. The source is "Sign Conventions in Electromagnetic (EM) Waves" (PDF). 74.96.192.195 ( talk) 08:54, 8 October 2021 (UTC)
This article is rated C-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | |||||||||||||||||||||||||||||||
|
The amplitude of the phasor addition result should be stated unambiguously - that is the positive square-root of the right-hand side. And the phase-shift of the phasor addition result should not make use of arctan but of atan2 to get the quadrant right:
Phasors are used for a lot more than just AC circuit analysis. I think actually the exact definition may differ depending on the field you are worrying about, such as the Signal Processing, where phasors most definitely do have a frequency. This is probably the best general definition of this mathematical tool: [1]
Is this ever used in regular contexts besides power calculations? I imagine it is. - Omegatron 21:23, August 15, 2005 (UTC)
Here is
the rule:
Doesn't that imply:
which is not true. What am I missing?
-- Bob K 09:38, 21 August 2007 (UTC)
b, consider that a phasor is just the amplitude of the frequency domain representation of the sinusoid. Multiplying two phasors is essentially multiplication in the frequency domain which results in convolution, not multiplication, in the time domain. That's why we say that 'the product (or ratio) of two phasors is not itself a phasor'. Alfred Centauri 13:01, 21 August 2007 (UTC)
Thanks, but the same source provides this definition:
which simply means to me that is a shorthand notation. The "equals" sign should mean we can directly substitute the full mathematical notation for the shorthand:
Similarly,
I'm just doing the obvious application of the definition. If you still disagree, I would appreciate it if someone would show a trigonometric example (without phasors) of multiplication, and then illustrate how phasors simplify the math and arrive at the same answer, as the article implies, or did before I removed this statement:
FWIW, what we seem to be talking about here is the
analytic representation of a signal, . The analytic representations of and are and And the product of the analytic signals is which represents the real signal: which is not the product of and Therefore we have to be careful not to mislead people (and ourselves) about the multiplication of analytic signals. And I think the same goes for phasors. We really need an example of the multiplication property in action. When and why would someone want to use it?
-- Bob K 14:44, 21 August 2007 (UTC)
If
http://en.wikibooks.org/wiki/Circuit_Theory/Phasor_Arithmetic is "confused", then I am inclined to remove the link to it, because I don't know how to fix it. But that link was intended to clarify this statement:
After reading what you said, I think what's missing is a statement that multiplication between a phasor and a complex impedance is another phasor. But multiplication of two phasors (or squaring one) does not produce another phasor.
While we're on a roll, what do you think of this excerpt:
-- Bob K 16:34, 21 August 2007 (UTC)
Good idea! I also moved the trig stuff to a more appropriate article.
-- Bob K 21:54, 21 August 2007 (UTC)
I removed this statement from the Circuit Laws section for the reasons that it isn't quite correct and isn't needed anyhow to justify the use of phasors. The problem is that phasors are complex numbers in general. DC circuits do not have complex voltages or currents. So, while phasors generalize DC circuit analysis to AC circuits, we can't really go back the other way unless we want to admit complex DC sources. Alfred Centauri 01:32, 27 February 2006 (UTC)
But my point is precisely that equating DC with a zero frequency sinusoid, as you have done above, is not quite correct. Consider the following AC circuit:
A voltage source of 1 + j0 Vrms in series with a 1 ohm resistor and a current source of 0 - j1 Arms. The average power associated with the resistor is 1W and is independent of frequency, right? But wait; recall that the time domain voltage source and current source functions are given by:
Setting the frequency to zero we get:
With a 'DC' current of 0A, the power associated with the resistor is 0W but this result conflicts with the result above. Clearly, in the context of AC circuit analysis, it is not quite correct to say that DC is just zero frequency. Alfred Centauri 22:25, 28 March 2006 (UTC)
Here's something else to consider. The rms voltage of a DC source is simply the DC voltage value. The rms voltage of an AC voltage source is the peak voltage over the square root of 2. Since this result is independent of frequency, it seems reasonable to believe that the rms voltage of a zero frequency cosine is equal to the rms value for any non-zero frequency cosine. However, if we insert a frequency of zero into a cosine, the result is the constant peak value, not the constant rms value. Once again, it does not appear to be entirely correct to say that a DC source is a sinusoidal source of zero frequency. Alfred Centauri 23:36, 28 March 2006 (UTC)
Not true! Look at the expression for i_s(t). That is a real source my friend regardless of frequency. It is the phasor representation of this source that is complex. Further, look at the 2nd example I give. No complex sources there, right? Alfred Centauri 02:16, 29 March 2006 (UTC)
Phasor analysis can and is used for power calculations. Although your statement is correct that the product of two phasors is not a phasor, this fact does not imply your assertion. After all, the fact that impedance, being the ratio of two phasors, is not a phasor does not make impedance any less useful a concept. Similarly, the complex power given by the product of the voltage and conjugate current (rms) phasors, while not being a phasor, is nonetheless a valid and useful quantity whose real part is the time average power (not the 'rms' power!) and whose imaginary part is the peak reactive power.
Your statement that "the rms value of a zero-frequency current at will be 0." is not even wrong. There is no such thing as an rms value at pi/2. The rms value of a unit amplitude sinusoid - regardless of frequency or phase - is . Alfred Centauri 00:29, 7 November 2006 (UTC)
There's no room for disagreement here. By definition, the rms value of a sinusoid is:
Note that this result holds in the limit as the period T goes to infinity. Thus, your assertion that the phase is important in determining the rms value is obviously false - even in the case where the frequency is zero (infinite period). If this isn't clear to you yet, then think about the time average power deliverd to a resistor by a sinsoidal voltage source with an arbitrarily large period (T = 15 billion years for example) compared to that delivered by a true DC voltage source. Alfred Centauri 05:50, 7 November 2006 (UTC)
While is true for finite t, it does not hold if t goes to infinity (0 * infinity can by any number). However, to take the rms value of a zero frequency sinusoid, we must in fact integrate over all time. It is for this reason that the rms value of the zero frequency sinusoid on the left is while the rms value of the zero frequency sinusoid on the right is . Alfred Centauri 06:16, 7 November 2006 (UTC)
Well, I think that the phasors for electronics is an application of those concepts....
Gabriel
The electronics phasors dont behave like the vectors , and they dont obey the rules of vectors studied in Physics (Statics & Dynamics)
Nauman —Preceding unsigned comment added by 202.83.173.10 ( talk) 11:55, 24 October 2007 (UTC)
Nauman is right ,Electronics phasors dont behave the same way as the physics phasors , they are different , for reference see Fundamentals of Electric Circuits by , Sergio Franco , chapter 10 , AC Response . —Preceding unsigned comment added by 202.83.164.243 ( talk) 14:53, 11 November 2007 (UTC)
I removed the following text from the intro:
(Important note: The phasor approach is for "steady state" calculations involving sinusoidal waves. It cannot be used when transients are involved. For calculations involving transients, the Laplace transform approach is often used instead.)
I don't believe that this statement is necessary even if it were true but, that fact is, phasors can be used to analyze transients. The bottom line is that the complex coefficents of a Fourier series or the inverse Fourier integral are phasors.
It is usually stated that phasor analysis assumes sinusoidal steady state at one frequency and this is true as far as it goes. However, it is quite straightforward to extend phasor analysis to a circuit with multiple frequency sinusoidal excitation. When one does this, it is clear this extension is nothing other than frequency domain analysis. In other words, phasor analysis is frequency domain analysis at a fixed frequency. Alfred Centauri 23:22, 27 April 2007 (UTC)
This article ( Phasor (electronics)) describes essentially the same thing as Phasor (physics). I believe there is no reason to maintain two different articles. The main difference between them is that this article describes the concept of phasors from the viewpoint of an engineer who uses it in a specific domain, while the other article is more general, but lacks some details that this one has. —The preceding unsigned comment was added by 129.177.44.96 ( talk) 13:32, 2 May 2007 (UTC).
I agree they describe the same thing. The physics article takes a vector approach, while the electronics article is based on complex numbers. The electronics article is far more practical, while the physics article is far more theoretical, and IMO, less useful. Concatenating the electronics article to the physics article as is would probably be a good idea. Neither article is very long, though the electronics article could do without the power engineering section - it doesn't add much to the concept of phasors.
All I can say is that the Phasor (electronics) page helped me pass ECE 203. The more general physics page wouldn't have helped nearly as much.
THEY ARE THE SAME KNOWLEDGE. THEY ARE THE SAME MATERIAL, THEY ARE SYNOMINOUS. To make people happy just put the same concepts at both places.
Would it be okay to call "Phasor (Physics)" "Phasor" and rename "Phasor (electronics)" to "Application of phasors in electronics" or something of the like? All redundant material introducing abstract phasors could be deleted from the latter, and it could be considered building on the former. —1st year EE student —Preceding
unsigned comment added by
128.54.192.216 (
talk) 16:07, 27 September 2007 (UTC)
Definately merge them. Mpassman ( talk) 18:18, 17 November 2007 (UTC)
I like the section on Phasor arithmetic and I suggest noting that the role of linearity is also an important component of the technique. If, for example, the differential equation in the example were not linear then phasors would be for naught. gaussmarkov 0:28, 6 September 2007 (UTC)
I think it should be better explained why the Re{} operator is usually dropped before some complicated algebra and then reapplied at the end. e.g. how the differential operator can be moved inside the Re{} operator. - Roger 23:14, 1 November 2007 (UTC)
Orthogonality (or something a little more elusive) is the reason for wanting to do operations in the complex domain, as I will try to explain. Linearity, loosely defined as operations that affect the Re and Im components independently, is a characteristic of certain mathematical operations that may be moved inside the Re operator without changing the net effect. Such operations include differentiation, integration, time-delay, multiplication by a (real-valued) scalar, and addition. No orthogonality is required to do these things.
If we limited our phasor arithmetic to those kinds of operations, the Im part we chose would not matter, and there would be no benefit at all. The benefit comes from choosing a waveform orthogonal to the Re component, such that:
:<math>e^{i\omega t} = \cos(\omega t) + i\sin(\omega t),</math>
which has the useful property:
:<math>e^{i\omega (t-\tau)} = e^{-i\omega \tau}\cdot e^{i\omega t},</math>
whereas:
:<math>\cos(\omega (t-\tau)) \ne k\cdot \cos(\omega t)</math> for any single scalar <math>k.</math>
We see that in the complex domain, a time-delay can be represented more simply by just the product with a complex scalar (which is why impedances are complex).
So it appears that the motivation for working in the complex domain is to simplify the effect of time-delays / phase-shifts caused by linear devices with "memory".
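The point about time-delays can be illustrated numerically. A minimal sketch (Python; the frequency, delay, and sample instants are arbitrary values for illustration): delaying e^(iωt) by τ is exactly multiplication by the complex scalar e^(−iωτ), whereas no single real scalar maps cos(ωt) to cos(ω(t−τ)):

```python
import cmath
import math

w = 2 * math.pi * 50   # arbitrary angular frequency
tau = 1e-3             # arbitrary time delay
t = 0.7e-3             # arbitrary sample instant

# Complex domain: delaying e^{jwt} by tau equals multiplying by e^{-jw*tau}.
delayed = cmath.exp(1j * w * (t - tau))
scaled = cmath.exp(-1j * w * tau) * cmath.exp(1j * w * t)
assert abs(delayed - scaled) < 1e-12

# Real domain: cos(w(t - tau)) is NOT a fixed real multiple of cos(wt);
# the ratio depends on t, so no single real scalar can represent the delay.
r1 = math.cos(w * (0.2e-3 - tau)) / math.cos(w * 0.2e-3)
r2 = math.cos(w * (0.7e-3 - tau)) / math.cos(w * 0.7e-3)
assert abs(r1 - r2) > 1e-6
```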
-- Bob K 15:27, 12 November 2007 (UTC)
The word "phasor" has two meanings. One meaning includes the <math>e^{i\omega t}</math> factor, and the other excludes it. Until recently these were relegated to different articles, one called "Phasor (physics)" and the other called "Phasor (engineering)" (I think). But there was fairly strong sentiment to merge the articles, which resulted in this one. The new introduction appears to be heading back to the "Phasor (engineering)" article.
-- Bob K ( talk) 17:29, 17 December 2007 (UTC)
OK, I looked back. It was "Phasor (electronics)". I'm sure we can come up with a suitable introduction. It really is the same concept in both disciplines. But the physicists are less inclined to solve circuit equations.
-- Bob K ( talk) 23:13, 17 December 2007 (UTC)
I think there should be a separate article dedicated to phasor diagrams, describing their construction and meaning.
-- User:Vanished user 8ij3r8jwefi 17:06, 21 May 2008 (UTC)
Sinusoids are easily expressed in terms of phasors, which are more convenient to work with than sine and cosine functions. A phasor is a complex number that represents the amplitude and phase of a sinusoid. Phasors provide a simple means of analyzing linear circuits. —Preceding unsigned comment added by 210.48.147.2 ( talk) 07:00, 27 July 2008 (UTC)
Does anyone have images of phasors in relation to FM and PM modulation? That would be a good section to add. —Preceding unsigned comment added by Daviddoria ( talk • contribs) 17:27, 14 September 2008 (UTC)
The section called "Definition" shows only cosine functions, yet the term "sine wave" is used to refer to expressions that involve only cos(...). I think this customary paradox of mathematical terminology should be mentioned, even though the definition in the article refers only to the complex-number representation.
Tashiro ( talk) 17:43, 1 February 2009 (UTC)
The Wikipedia article on sine waves defines them by using the sin() function. It says that cosine waves are "sinusoidal". I understand that this is essentially a quibble, but some readers may have last studied this in class 40 years ago, and which facts are remembered after that period of time is rather random. It would only require one sentence to clarify this. Tashiro ( talk) 06:50, 9 February 2009 (UTC)
I made "sine waves" an internal link. The
linked article already contains this clarification:
A cosine wave is said to be "sinusoidal", because <math>\cos(\omega t) = \sin(\omega t + \pi/2),</math> which is also a sine wave with a phase-shift of π/2.
-- Bob K ( talk) 22:04, 10 February 2009 (UTC)
I teach a course on linear circuits, including a section on Phasors. The illustration titled "Graph of a phasor as a rotating vector" is both helpful and awkward at the same time. Some students find it harder to understand a typical phasor diagram (stationary, drawn on paper) after looking at the spinning phasor illustrated. I have two suggestions.
1.) In the top half of the illustration, some students cannot fathom that in the present illustration the time axis itself is sinking down the screen as the vector rotates. Instead, fix the time axis in place and let the dot move both vertically and horizontally. One could show exactly one cycle of a (stationary) sinusoid so that as the phasor rotates through one revolution a dot on the sinusoid sweeps upwards following the function, then rolls over to the bottom and starts upwards again with the next rotation of the phasor. A dashed line to show the dot's projection on the time axis would also help. Be sure the sinusoid has some initial phase offset, for example,
y(t) = cos(t + 2pi/3)
Label the horizontal y(t) axis with "+1" on the right and "-1" on the left. Label the vertical axis with a "t" at or near the top and with a few tic marks labeled in radians from 0 to 2pi.
2.) On the bottom half of the illustration create a strobe-light effect to illustrate a stationary view of the phasor, as would be drawn on paper. Show the rotating phasor dimmed to grey and dark blue most of the time. When the dot in the top half of the illustration rolls over to the t = 0 spot, show the bottom half with a bright white background and bright blue lines (as is presently shown). Then as the phasor continues to rotate, return to the dimmed view in the bottom half of the illustration. What I have in mind is the effect of a strobe light that flashes at time t = 0 and every period (T, 2T, 3T, etc.) thereafter, thus illustrating how we interpret a phasor diagram drawn on paper. Students need to understand that what we draw statically on a page of paper represents the angular relations at time t = 0.
(I sure wish I had the tools to put such an illustration together and just show it!)
Dfdeboer ( talk) 22:58, 18 November 2009 (UTC)
The Phasor Addition section took me a little bit of time to make sense of. The step-by-step calculation is trivial for the first few steps (combining terms in the Re operator, etc.), and then out of nowhere: Boom! A3 and theta3 are introduced with a fairly complex definition for each. It eventually made sense to me when I drew out the phasor diagram and looked at the x/y components, but something should help fill that gap. -- Gradenko ( talk) 07:44, 23 January 2011 (UTC)
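For what it's worth, A3 and theta3 are just the polar form of the sum of the two phasors treated as complex numbers, which is exactly the phasor-diagram picture of adding x/y components. A minimal sketch (Python; the amplitudes and phases are made-up values), using atan2 so the phase lands in the correct quadrant:

```python
import cmath
import math

# Two sinusoids A1*cos(wt + th1) and A2*cos(wt + th2); values are made up.
A1, th1 = 3.0, math.radians(30)
A2, th2 = 4.0, math.radians(-45)

# Represent each sinusoid as a phasor (a complex number) and add.
p3 = cmath.rect(A1, th1) + cmath.rect(A2, th2)

# Amplitude is the positive square root of |p3|^2; the phase uses
# atan2 to get the quadrant right.
A3 = abs(p3)
th3 = math.atan2(p3.imag, p3.real)

# Cross-check against the time domain at a few instants.
w = 1.0
for t in (0.0, 0.3, 1.1):
    direct = A1 * math.cos(w * t + th1) + A2 * math.cos(w * t + th2)
    via_phasor = A3 * math.cos(w * t + th3)
    assert abs(direct - via_phasor) < 1e-9
```

With the made-up values above this gives A3 ≈ 5.59 and th3 ≈ −13.8°, the polar coordinates of the summed vector on the phasor diagram.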
I am not sure how to fix them, so it would be great if somebody else could fix the syntax so that the equations start showing up. Cheers. Vignesh.Mukund ( talk) 11:20, 10 February 2014 (UTC)
The article had quite a poorly developed lead. I've added more essentials, including the origin as it is commonly attributed in (US) EE textbooks, but upon glancing at some Google Books snippets from Kline's biography of Steinmetz, I think the origin story was more complex. Alas, I don't have immediate access to Kline's book, but it's something to keep in mind (and go to) if counterclaims appear. 86.121.137.79 ( talk) 22:11, 13 January 2015 (UTC)
Phasors are used almost everywhere in power engineering, and their usefulness should be linked. There is also no significant mention of lead and lag, which is important when we compare voltages and currents. A picture would help to illustrate lead and lag, for example. Phase angles should also be mentioned as being relative to each other, and that you can set a reference point (which is usually the voltage source). — Preceding unsigned comment added by 204.98.7.11 ( talk) 06:15, 7 December 2016 (UTC)
This article is very good for those of us who have studied phasors. However, it is useless to a newcomer. They will leave with less knowledge... why am I explaining this? Try to fix it. Over and out. — Preceding unsigned comment added by Longinus876 ( talk • contribs) 15:41, 15 December 2017 (UTC)
In the first paragraph of this article, it is stated that phasors are also called sinors (in older textbooks), and even a reference was cited to this. So, it seems like this is true.
However, in the book Fundamentals of Electric Circuits by Charles Alexander and Matthew Sadiku, phasors and sinors are defined differently. In the case of phasors, they are defined as usual, that is as Ae^(jθ); but sinors are defined as Ae^(jθ)e^(jωt) = Ae^(j(θ+ωt)).
I think this should be cleared up, so I'd like to confirm that whoever cited that sinors are the same as phasors didn't actually confuse the two terms. Perhaps they could share a screenshot/photo of the relevant page of the cited textbook.
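For what it's worth, the distinction as stated in Alexander and Sadiku is easy to demonstrate numerically. A minimal sketch (Python; A, θ, and ω are made-up values): the phasor Ae^(jθ) is time-independent, the sinor Ae^(j(θ+ωt)) rotates with t, and the real part of the sinor recovers the sinusoid:

```python
import cmath
import math

A, theta, w = 2.0, math.radians(60), 100.0  # made-up amplitude, phase, frequency

phasor = A * cmath.exp(1j * theta)  # phasor: A e^{j theta}, no time dependence

for t in (0.0, 0.01, 0.02):
    sinor = phasor * cmath.exp(1j * w * t)  # sinor: A e^{j(theta + wt)}, rotates with t
    # The real part of the sinor is the original sinusoid A cos(wt + theta).
    assert abs(sinor.real - A * math.cos(w * t + theta)) < 1e-12
```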
Alej27 ( talk) 07:01, 31 March 2019 (UTC)
I was recently editing the article on Fourier transforms, and I found a citation to a source with a ton of information on phasors. It's about the difference between i and j as representations of the complex component of phasors, as i is the usual standard in theoretical physics and j is the usual standard in electrical engineering, and all of the sign conventions that relate to that. It seems to be of sufficient relevance and importance to warrant being included in this article, but this article doesn't mention anything of the sort. I don't have sufficient interest to want to invest the time to add the info to this article, but I figured I'd mention it here so that someone with an interest could. The source is "Sign Conventions in Electromagnetic (EM) Waves" (PDF). 74.96.192.195 ( talk) 08:54, 8 October 2021 (UTC)