This article is rated C-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
This article links to one or more target anchors that no longer exist. Please help fix the broken anchors. You can remove this template after fixing the problems.
Need to put an example for the discrete time case to make things a little clearer.
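For instance, a discrete-time example might look like the following sketch (the 3-tap moving-average filter and the test signal are my own illustrative choices, not taken from the article): the output of a DT LTI system is the convolution sum y[n] = Σ_k h[k]·x[n−k].

```python
# Discrete-time LTI example: a 3-tap moving-average filter.
# Its impulse response is h = [1/3, 1/3, 1/3], and the output is the
# convolution sum y[n] = sum_k h[k] * x[n-k].

def convolve(h, x):
    """Direct-form convolution sum for finite-length sequences."""
    y = [0.0] * (len(h) + len(x) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

h = [1/3, 1/3, 1/3]          # impulse response (moving average)
x = [3.0, 0.0, 0.0, 3.0]     # an arbitrary input signal
print(convolve(h, x))        # each output sample averages three inputs
```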
but i'm glad the article is there. this should be written so that it can be understood by someone who doesn't already understand it. to begin with, i think there needs to be a better introduction as to what $\mathbb{H}$ is. what are the fundamental properties of this LTI operator $\mathbb{H}$? first exactly what does it mean for $\mathbb{H}$ to be linear (the additive superposition property) and then what does it mean for $\mathbb{H}$ to be time-invariant. then from that derive the more general superposition property, then introduce the dirac delta impulse as an input and define the output of $\mathbb{H}$ to be the impulse response. since the article is LTI, there is no need to introduce the more general time-varying notation. all that does is obfuscate.
Mark, i hope you don't mind if i whack at this a bit in the near future. i gotta figure out how to draw a png image and upload it. r b-j 04:53, 28 Apr 2005 (UTC)
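Since the comment above sketches a roadmap (linearity first, then time invariance), a small numerical probe of the two properties may help future editors see what each one means in isolation. The recursive filter and test signals below are illustrative choices of mine, not anything from the article:

```python
# Numerically probing the two defining properties discussed above, using a
# simple first-order recursive filter (zero initial state) as the system.

def system(x):
    """y[n] = 0.5*y[n-1] + x[n] -- a linear, time-invariant recursion."""
    y, prev = [], 0.0
    for v in x:
        prev = 0.5 * prev + v
        y.append(prev)
    return y

def shift(x, k):
    """Delay x by k samples (zero-padded on the left, same length)."""
    return [0.0] * k + x[:len(x) - k]

x1 = [1.0, 2.0, 0.0, -1.0, 0.0, 0.0]
x2 = [0.0, 1.0, 1.0, 0.0, 2.0, 0.0]

# Linearity: H{a*x1 + b*x2} == a*H{x1} + b*H{x2}
lhs = system([3*a - 2*b for a, b in zip(x1, x2)])
rhs = [3*a - 2*b for a, b in zip(system(x1), system(x2))]
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))

# Time invariance: H{x1 delayed by 2} == H{x1} delayed by 2
assert all(abs(u - v) < 1e-12
           for u, v in zip(system(shift(x1, 2)), shift(system(x1), 2)))
print("linearity and time-invariance hold on these test signals")
```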
It appears that it is assumed that the LTI system can be represented by a convolution. However, in Zemanian's book on distributions a result due to Schwartz and its proof are presented. The result has to do with sufficient conditions under which an LTI transformation can be represented by a convolution. I guess that continuity of the LTI transformation is one of the conditions. The result appears to be quite deep. Some other proofs in the literature may not be real proofs. This result is not as simple as one might think.
Yaacov
I deleted the word "integral" from my comment. The convolution of distributions has a definition that does not appear to rely on integration.
Yaacov
I added: "Some other proofs in the literature may not be real proofs. This result is not as simple as one might think."
Yaacov
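For reference, the representation theorem Yaacov alludes to can be sketched as follows (hypotheses paraphrased from memory of the Schwartz/Zemanian treatment, so the exact statement should be checked against the book):

```latex
% Sketch of the convolution-representation theorem for LTI maps on
% distributions (hypotheses paraphrased; verify against Zemanian/Schwartz).
\newtheorem{theorem}{Theorem}
\begin{theorem}
Let $L : \mathcal{D}(\mathbb{R}) \to \mathcal{D}'(\mathbb{R})$ be linear,
continuous, and translation-invariant, i.e.
$L(\tau_a \varphi) = \tau_a (L\varphi)$ for every shift
$(\tau_a \varphi)(t) = \varphi(t-a)$. Then there exists a distribution
$h \in \mathcal{D}'(\mathbb{R})$ such that
\[
  L\varphi = h * \varphi \qquad \text{for all } \varphi \in \mathcal{D}(\mathbb{R}).
\]
\end{theorem}
```

In particular, continuity is an essential hypothesis: linearity and time invariance alone do not force a convolution representation, which is Yaacov's point.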
Is the bb font at all common for an operator? I've never seen that before, and I think bb font should be reserved for sets like the reals and complexes. A clumsy but informative notation that one of my professors uses is this:
$y(t_o) = \mathbb{H}\{x(t_i)\}$,
where $\mathbb{H}$ is the operator, $t_o$ is the output variable, and $t_i$ is the input variable. To say that a system is TI,
$y(t_o - \tau) = \mathbb{H}\{x(t_i - \tau)\}$.
I'm not sure if it's a good idea to use it here though... --
Jpkotta
06:49, 12 February 2006 (UTC)
Should discrete time be folded in with continuous time, or should there be two halves of the article?
By folded, I mean:
By two halves, I mean
I vote for the two halves option, because then it would be easier to split into two articles in the future. -- Jpkotta 06:46, 12 February 2006 (UTC)
I made a big update to the article, and most of it was to add a "mirror image" of the CT stuff for DT. There is a bit more to go, but it's almost done. -- Jpkotta 22:25, 21 April 2006 (UTC)
This page has the equation
$y(t) = \int_{-\infty}^{\infty} h(t-\tau)\, x(\tau)\, d\tau,$
which looks an awful lot like the application of a Green function,
$u(x) = \int G(x,s)\, f(s)\, ds;$
however this page doesn't even mention Green functions. Can someone explain when the two approaches can be applied? (My hunch right now is that Green functions can be used for linear systems that are not necessarily time-invariant.) —Ben FrantzDale 03:24, 17 November 2006 (UTC)
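One standard way to phrase the relationship being asked about, offered here as a sketch: a general linear system has a two-argument kernel (the Green-function form), and time invariance is exactly what collapses it to a convolution, confirming the commenter's hunch.

```latex
% Green-function (general linear) form: a two-argument kernel h(t, \tau).
y(t) = \int_{-\infty}^{\infty} h(t,\tau)\, x(\tau)\, d\tau
% Time invariance (delaying the input delays the output by the same amount)
% forces h(t,\tau) = h(t-\tau, 0), reducing the integral to a convolution:
y(t) = \int_{-\infty}^{\infty} h(t-\tau)\, x(\tau)\, d\tau = (h * x)(t)
```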
The first example starts out describing the delay operator then describes the difference operator. 203.173.167.211 23:03, 3 February 2007 (UTC)
Generally, it is not true for a linear operator L (such that $L(a x_1 + b x_2) = a L(x_1) + b L(x_2)$) that
$L\big(\sum_{k} x_k\big) = \sum_{k} L(x_k)$
over arbitrary index sets (i.e. infinite sums). This is used heavily in LTI analysis.
The result does not follow from induction. So why should it be true for linear systems? I think linearity itself is not a strong enough condition to warrant the infinite-sum result. Are there deeper maths behind systems analysis that provide this result? (For example, restriction of linear systems to duals of certain maps is a sufficiently strong condition to imply this result.) 18.243.2.126 ( talk) 01:42, 13 February 2008 (UTC)
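The commenter is right that induction only gives finite sums. The missing ingredient is continuity of the operator, which is what licenses exchanging L with the limit; a sketch of the standard argument:

```latex
% Finite additivity follows from linearity by induction. The infinite case
% needs L to be continuous with respect to the convergence of the series:
L\Big(\sum_{k=1}^{\infty} x_k\Big)
  = L\Big(\lim_{N\to\infty} \sum_{k=1}^{N} x_k\Big)
  = \lim_{N\to\infty} L\Big(\sum_{k=1}^{N} x_k\Big)  % uses continuity of L
  = \lim_{N\to\infty} \sum_{k=1}^{N} L(x_k)
  = \sum_{k=1}^{\infty} L(x_k)
```

This is the same point made by the Schwartz representation result discussed earlier on this page: continuity is an extra hypothesis, not a consequence of linearity.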
I've removed the recently-added footnote, because I'm not sure what it says is relevant. The explanation of the delay operator is purely an example of "it's easier to write", as by substitution, $z^{-1} x[n] = x[n-1]$. The differentiation explanation is irrelevant, because when using z, we're in discrete time, and so would never differentiate w.r.t. continuous time. Oli Filth( talk) 21:28, 10 April 2008 (UTC)
Is this nonsense? (quoting the article's Eq.1):
-- Bob K ( talk) 00:52, 11 June 2008 (UTC)
In the Overview section, this paragraph abruptly appears:
For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform $A_s e^{st}$ for some complex amplitude $A_s$ and complex frequency $s$, the output will be some complex constant times the input, say $B_s e^{st}$ for some new complex amplitude $B_s$. The ratio $B_s/A_s$ is the transfer function at frequency $s$.
But complex amplitude and complex frequency are not really the inputs and outputs of the system. It's like saying that radios transmit and receive analytic signals. For the sake of those who don't already know the subject, wouldn't it be better to stick closer to reality than to mathematical abstractions?
-- Bob K ( talk) 11:32, 18 June 2008 (UTC)
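For readers who would like the eigenfunction claim made concrete, it can be checked numerically. Here is a sketch in discrete time; the impulse response and frequency below are arbitrary illustrative values of mine:

```python
# A numerical check of the eigenfunction claim, in discrete time: drive an
# FIR system with x[n] = e^{jwn} and verify that the output equals
# H(e^{jw}) * x[n], where H(e^{jw}) = sum_k h[k] e^{-jwk}.
import cmath

h = [0.5, 0.3, 0.2]   # illustrative impulse response (not from the article)
w = 0.7               # illustrative frequency in radians/sample

# Transfer function evaluated at frequency w:
H = sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

for n in range(5):
    x_n = cmath.exp(1j * w * n)
    # Convolution sum: the exponential input extends over all n, so this
    # finite sum is exact for every output index.
    y_n = sum(h[k] * cmath.exp(1j * w * (n - k)) for k in range(len(h)))
    assert abs(y_n - H * x_n) < 1e-12   # output = eigenvalue * input
print("eigenvalue (transfer function at w):", H)
```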
The next paragraph after the one I quoted does mention that real signals are a subset. Perhaps it could be said in a more accessible way to more readers, but it's much better than nothing. This probably isn't the right place to explain complex frequency, but it would be an improvement if complex frequency was an internal link to an understandable article on that subject. Do we have such an article?
-- Bob K ( talk) 14:12, 18 June 2008 (UTC)
BTW, the article Phasor (sine waves) is an example of a more accessible approach, in my opinion, because it puts the complex representation into a context that more people can relate to. It motivates the introduction of complex amplitudes by using them to reduce a "real" problem to an elegant, easily solvable, equation:
And then it shows the additional steps to extract the "real" solution from the complex result. So from that perspective, the concept of complex amplitude is just an intermediate and temporary step in a longer process. It's actually the subset, not the superset. It is only one of the tools needed to understand all LTI systems (i.e., including realizable ones).
-- Bob K ( talk) 15:18, 18 June 2008 (UTC)
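A sketch of the workflow Bob K describes, for concreteness: solve a real problem via a complex amplitude, then extract the real solution at the end. The circuit (a driven RC low-pass, RC·v' + v = A·cos(wt)) and all component values are illustrative assumptions of mine, not taken from either article:

```python
# Phasor-method sketch: replace the real drive A*cos(wt) by A*e^{jwt},
# solve algebraically for the complex amplitude V, then recover the real
# steady-state solution as Re{V e^{jwt}}.
import cmath, math

R, C, A, w = 1e3, 1e-6, 5.0, 2000.0      # ohms, farads, volts, rad/s
V = A / (1 + 1j * w * R * C)             # complex amplitude (phasor)

def v_out(t):
    """Real steady-state output: the 'extract the real part' step."""
    return (V * cmath.exp(1j * w * t)).real

# Check that the recovered real solution satisfies the ODE RC*v' + v = A*cos(wt).
for t in [0.0, 1e-4, 3e-4]:
    dv = (1j * w * V * cmath.exp(1j * w * t)).real   # exact derivative of v_out
    assert abs(R * C * dv + v_out(t) - A * math.cos(w * t)) < 1e-9
print("phasor solution satisfies RC*v' + v = A*cos(wt)")
```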
In the section LTI_system_theory#Time_invariance_and_linear_transformation we say:
If the linear operator $\mathcal{H}$ is also time-invariant, then $h(t,\tau) = h(t+\delta,\, \tau+\delta)$. For the choice $\delta = -\tau$ it follows that $h(t,\tau) = h(t-\tau,\, 0)$.
But we can't make that choice, because $\tau$ is the variable of integration, and $\delta$ is a constant time offset.
The "proof" has been fudged so as to time-reverse the weighting function (as it was defined above) so that it looks like an impulse response. But an impulse response is a weighting function whose value at time $\tau$ is the weight applied to the value of the input at time $t-\tau$, where $t$ is the time of the desired output value. That means it is a time-reversed version of our definition of $h$.
So one way to fix the problem would be to start with the definition:
$y(t) = \int_{-\infty}^{\infty} x(t-\tau)\, h(\tau)\, d\tau$ (Eq.2)
But the $x(t-\tau)$ will probably confuse people. So an alternative is to show that if the system is defined by $y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau$, the impulse response is $h(t)$. Then we could point out that defining $h$ as an impulse response leads to Eq.2 (and vice versa).
-- Bob K ( talk) 03:16, 19 June 2008 (UTC)
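A sketch of one standard derivation that reaches Bob K's Eq.2 without the problematic substitution: write the input as a superposition of shifted impulses, then apply linearity and time invariance in that order (notation here is mine, not necessarily the article's):

```latex
% Write x as a superposition of shifted impulses (sifting property):
x(t) = \int_{-\infty}^{\infty} x(\tau)\, \delta(t-\tau)\, d\tau
% Linearity lets the operator act inside the integral on each shifted
% impulse, with x(\tau) as a constant weight; time invariance then gives
% \mathcal{H}\{\delta(\cdot-\tau)\}(t) = h(t-\tau), where h is the
% response to \delta itself. Hence
y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau
     = \int_{-\infty}^{\infty} x(t-\tau)\, h(\tau)\, d\tau
       \qquad \text{(Eq.\,2, after a change of variables)}
```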
In section LTI_system_theory#Time_invariance_and_linear_transformation we say:
If the linear operator $\mathcal{H}$ is also time-invariant, then $h(t,\tau) = h(t+\delta,\, \tau+\delta)$. If we let $\delta = -\tau$, then it follows that $h(t,\tau) = h(t-\tau,\, 0)$.
But we can't "let $\delta = -\tau$" because $\tau$ is the variable of integration, and $\delta$ is a constant offset.
-- Bob K ( talk) 14:06, 19 June 2008 (UTC)
I read: "It can be shown that, given this superposition property, the scaling property follows for any rational scalar." Please correct me if I am wrong (I am not an expert), but I think it would be better to write "given this superposition property, the scaling property obviously follows for any rational scalar" (or any equivalent wording). Indeed, it seems to me that (using notations from the superposition property explanation) we simply have to take c2=0 to get the scaling property. The current wording lets the reader think that the proof is not obvious, IMHO.-- OlivierMiR ( talk) 13:10, 27 January 2009 (UTC)
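A possible resolution of this comment, sketched for reference: if the article states superposition with coefficients ($H\{c_1 x_1 + c_2 x_2\} = c_1 H\{x_1\} + c_2 H\{x_2\}$), then taking $c_2 = 0$ does settle it, as OlivierMiR says. But if the property in question is bare additivity, the rational-scalar claim needs the standard (short but non-trivial) argument:

```latex
% From additivity H{x_1 + x_2} = H{x_1} + H{x_2} alone, induction gives
H\{n x\} = n\, H\{x\} \qquad (n \in \mathbb{N}).
% Applying this with x replaced by x/m:
H\{x\} = H\{m \cdot (x/m)\} = m\, H\{x/m\}
  \;\Longrightarrow\; H\{x/m\} = \tfrac{1}{m}\, H\{x\}.
% Combining the two (and using H{x} + H{-x} = H{0} = 0 for the sign):
H\{\tfrac{n}{m}\, x\} = \tfrac{n}{m}\, H\{x\} \qquad (n, m \in \mathbb{N}).
```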
The included LTI.png image:
shows a transformation from "Time domain" to "Frequency domain." However, clearly the $s$ (or "Laplace") domain is shown, which is a generalization of the frequency domain. Either each $s$ should be changed to a $j\omega$, or the "Frequency domain" label should be changed to something like "Laplace domain" or "$s$ domain." — TedPavlic ( talk) 22:55, 28 January 2009 (UTC)
Another problem in the picture is the ridiculous notation of the convolution as $x(t) * h(t)$. It should read: $(x * h)(t)$, as it is the value at $t$ of the convolution of the functions $x$ and $h$. Madyno ( talk) 19:13, 1 July 2020 (UTC)
Zero state response also discusses the analysis of linear systems, but does not make a specific restriction of the problems to time-invariant system. There is, as yet, no top-level article on Linear system theory that deals with both this and the more general case of time-variant linear systems. Is there a possible route for refactoring/merging this material? -- The Anome ( talk) 02:54, 17 February 2010 (UTC)
Regarding the article's paragraph claiming that the ideal low-pass (sinc) filter is unstable:
It is really hard to imagine how a filter with a perfect LPF frequency response (as a sinc filter has) could be classified as unstable. Any sinusoidal input, for instance, is a bounded input with an obviously bounded output (either the same sinusoid or zero, depending on its frequency). However it indeed appears that the L1 criterion mentioned here would be violated by the sinc impulse response, which is stated to be an absolute test for stability/instability.
But never mind: I believe I see the problem. The sinc function extends to infinity in both negative and positive time, so it cannot possibly be implemented as a causal filter, and this section is about causal filters (otherwise the concept of instability breaks down, inasmuch as stable impulse responses with right half-plane zeros, if reversed in time, describe unstable systems). So I don't think the example is applicable.
And in any case, it could only be confusing to an average WP reader who is trying to LEARN about systems (since it's confusing to ME and I sort of thought I knew all about filter theory!). If the claim is true in some sense, then it stands more as a paradox or riddle than as useful information. Could someone remove this and put in a better example? And possibly (but here I'm not certain) restate the L1 criterion with a statement that it only applies to causal systems, or whatever qualifications are missing which make this result paradoxical or (I think) simply wrong? Interferometrist ( talk) 12:14, 3 March 2011 (UTC)
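The paradox the commenter describes can be illustrated numerically: partial L1 sums of |sinc| grow without bound (roughly like log N), so the ideal-LPF impulse response is not absolutely integrable and fails the BIBO criterion, even though every pure sinusoid produces a bounded output. The Riemann-sum step size below is an illustrative choice:

```python
# Numerical illustration of the L1 (absolute-integrability) point: partial
# sums of |sinc| over growing windows keep increasing, so the ideal low-pass
# impulse response has no finite L1 norm and is not BIBO stable, even though
# any single sinusoidal input yields a bounded output.
import math

def sinc(t):
    """Normalized sinc: sin(pi*t)/(pi*t), with sinc(0) = 1."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def l1_partial(N, step=0.5):
    """Riemann sum of |sinc| over [-N, N] (step is an illustrative choice)."""
    k = int(N / step)
    return step * sum(abs(sinc(i * step)) for i in range(-k, k + 1))

for N in [10, 100, 1000]:
    print(N, round(l1_partial(N), 3))   # keeps growing with N: no finite L1 norm
```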
I worked for about 30 years in this subject and never once heard it referred to as LTI systems theory. Is the name just the invention of one person? What strange things can happen in Wikipedia. JFB80 ( talk) 05:48, 24 January 2016 (UTC)
I am not able to find a proper mathematical definition of what a system in general is (a function that maps functions (called input signals here) to functions (called output signals here)). There is this article here about LTI systems and an article about time-invariant systems, but there is no article about systems in general. I propose that there should be a separate article discussing systems in general, defining some fundamental properties (time-invariance, linearity, ...), and showing examples of such systems. Fvultier ( talk) 18:14, 12 July 2017 (UTC)
The result of the move request was: consensus to move the page to the proposed title at this time, per the discussion below. Dekimasu よ! 06:11, 7 August 2018 (UTC)
Linear time-invariant theory → Linear time-invariant system – The current title doesn't make sense. Neither does the similar phrase in the lead. Dicklyon ( talk) 05:59, 31 July 2018 (UTC)
{{u|Mark viking}} ( Talk) 10:04, 31 July 2018 (UTC)
{{u|Mark viking}} ( Talk) 17:13, 31 July 2018 (UTC)
@ Interferometrist:, thank you for your edits. The definition in the lede as currently written uses the titular terms themselves in the definition. It's like saying "a seed stitch is a stitch that uses a seed stitch technique to stitch". Also, WP:LEAD requires that unfamiliar terms be defined when used in the lede. Since the terms "linear" and "time-invariant" are part of the title of the subject article, it is reasonable to assume that folks coming to this article may be unfamiliar with those terms. Also, WP:LEAD advises that formulae be avoided in the lead when possible. My attempt at a rewrite was intended to address those issues. Can we rewrite the lede to be more accessible, without using the subject terms in the definition? Sparkie82 ( t• c) 01:35, 11 September 2020 (UTC)
I propose a more understandable introduction to this topic as follows:
Hi @ Interferometrist:! Given the recent revert, can you please give some examples of the following statement?:
And secondly, if the previous statement is true, does it imply that the phrase "a non-linear resistor" is wrong because such a device would not be called a resistor?
Thanks in advance. -- Alej27 ( talk) 01:16, 3 July 2021 (UTC)
I proposed that Time-invariant system be removed as a separate topic (it isn't) and its current contents be merged into this article. It may already be sufficiently dealt with, but if anyone sees content on the other page that would help here, please go ahead and edit that in. Interferometrist ( talk) 20:51, 5 July 2021 (UTC)
I proposed that Time-variant system be removed as a separate topic (it isn't. It isn't even a term used in practice) and its current contents be merged into this article. It may already be sufficiently dealt with, but if anyone sees content on the other page that would help here, please go ahead and edit that in. Interferometrist ( talk) 14:49, 6 July 2021 (UTC)