This is the talk page of a redirect that targets the page Computability theory. Because this page is not frequently watched, present and future discussions, edit requests and requested moves should take place at Talk:Computability theory.
I believe the Entscheidungsproblem was first proved unsolvable by Church, and months later by Turing. Goedel's theorems don't really talk about algorithms, so they don't directly apply. Of course, Goedel's trick of Goedel numbering and the Barber paradox is really the key to the Entscheidungsproblem and halting problem, everything else is details. --AxelBoldt
The context-free languages need a stack and a non-deterministic finite-state machine; deterministic won't do. --AxelBoldt
The languages that are accepted by a Turing machine are exactly those that are generated by formal grammars.
This is wrong. I doubt it is all formal grammars. I expect that it is context sensitive grammars the author was thinking of. Regular grammars are FSM equivalent, as I recall. -- BenBaker
No, the context sensitive grammars generate exactly the languages accepted by linear bounded nondeterministic Turing machines (Turing machines with only a linear amount of memory). The set of languages accepted by all possible Turing machines is exactly equal to the set of languages generated by all possible formal grammars. The sentence in the article is correct. See Chomsky hierarchy for more details. -- LC
I moved the following question from the article. I don't know the answer. (Isn't it 3 parameters?)
Hi Cwitty, I'm not sure what form Turing's definition took. But your change seems unhelpful, because it puts too much weight on the word "arbitrarily".
Chaitin's constant, for example, can be approximated from below with unlimited accuracy, simply by simulating all machines and adding in a component for those which halt. It is not computable, because there is no way of telling how close the approximation is at any point.
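The approximation-from-below procedure described here can be sketched concretely. The following Python toy is my own illustration, not part of the discussion: step-bounded Collatz trajectories stand in for running programs on a universal machine, and each "program" n seen to halt within the budget contributes weight 2^-n. The approximations only ever grow from below as the budget increases, yet nothing in the procedure tells you how close any approximation is to the limit:

```python
from fractions import Fraction

def halts_within(n, steps):
    """Step-bounded simulation: does the Collatz trajectory of n reach 1
    within `steps` steps? (A toy stand-in for running program n on a
    universal machine under a step budget.)"""
    for _ in range(steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return n == 1

def omega_approx(num_programs, steps):
    """Monotone lower approximation: add weight 2^-n for each 'program' n
    observed to halt within the step budget."""
    total = Fraction(0)
    for n in range(1, num_programs + 1):
        if halts_within(n, steps):
            total += Fraction(1, 2 ** n)
    return total

# The approximations never decrease as the step budget grows:
assert omega_approx(20, 50) >= omega_approx(20, 5)
```

Of course the Collatz stand-in is decidable in a way a real universal machine is not; the point is only the shape of the procedure — simulate everything, add a component for each machine seen to halt.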
I'm pretty sure the "incorrect" definition you changed is actually equivalent to the "correct" one. -- Pde 06:54, 8 Nov 2003 (UTC)
I didn't edit the page, but I believe it should read "given ANY statement in [FOL]...", not "given A statement in [FOL]"...
also, it might be helpful to add that Gödel's general recursive functions are also equivalent to the notion of Turing-computable functions and lambda-definable functions.
I think the article should be rewritten to focus more on computable functions instead of Turing machines. This should be done in most articles in the computability category. Of course Turing machines are easier to grasp than the abstract concept of computable function, but by focussing on the abstract concept we can eliminate the redundant discussion of lambda calculus, recursive functions and the Church-Turing thesis in each computability article. I guess in many cases the proofs and theorems will get shorter too. We can always add a concrete example using Turing machines if the article gets too abstract. MathMartin 23:59, 9 Nov 2004 (UTC)
As far as I remember (and correct me if I am wrong), this article used to have a lot of content. Now it is just 8 lines. Computability theory deserves more than just a short definition and two links. This is an ENCYCLOPEDIA, not a DICTIONARY. Who destroyed the old content?! ---- David November 13, 2005
It's become trendy in recent years, due to the influence of Robert I. Soare, to use the term "computability theory" to refer to what's traditionally called "recursion theory", and in fact recursion theory is a redirect here. However this article is not much about recursion theory (Turing degrees, effective descriptive set theory, that sort of thing), but more about models of in-principle-physically-realizable computation. Recursion theory is a branch of math logic, not of theory of computation.
So what do we do about this? I see there's a merge template to merge to computation. For most of the content in the current article that's probably fine. But there should also be an article about what math-logicians call computability theory or recursion theory. Perhaps we need a dab page under the name computability theory. -- Trovatore 04:26, 26 November 2005 (UTC)
To tell the truth I'm not sure the articles should be split at all. What's clear is that, whatever article is called recursion theory (or where recursion theory redirects) has to have a very different focus from the current article here; this current article seems to be mainly about things of Turing degree 0. In the end perhaps we should have just one article, but it shouldn't spend so much time talking about Turing degree 0 in fifteen different ways, but should move quickly into the real meat of the topic, which starts with oracles. -- Trovatore 06:05, 26 November 2005 (UTC)
The current article isn't really suited as the main article for all of Theory of Computation, as when I wrote it I was considering only computability theory and not the other major branch, complexity theory. So do we want separate articles on computability and complexity, or do we want to expand this article greatly to include some discussion of complexity theory as well? Clearly any article called "Theory of Computation" should at least mention P =? NP. —The preceding unsigned comment was added by Readams ( talk • contribs) 17:34, 26 November 2005.
For the moment, I've removed the following:
First, it's not clear just what connection between the incompleteness theorems and the halting problem is being claimed, but whatever it is, I'd want to see a ref that Chaitin was the first one to find it. Chaitin, like Wolfram, is a bit of a self-promoter, and is good enough at self-promotion that a lot of people give him credit for things that might not have been quite as original as they think. In particular Chaitin's Omega was by no means the first non-computable number to be defined. -- Trovatore 04:06, 2 December 2005 (UTC)
(Also see [1])
The author probably didn't read the enlightening discussion in the new edition of the Cinderella book.
It would take too much effort from me to explain it completely, but I am going to remove those uninformed assumptions that the tape of a Turing machine is an infinitely long resource. In fact, at any moment, the Turing machine has a finite instantaneous description (ID), and thus the infinity talked of is potential infinity (much like the infinity of Z). Furthermore, the TM is a mathematical model. Every TM denotes a particular computation. The author probably doesn't recognize that. The potential infinity there is an idealization. It does not mean that physical machines cannot be TMs!
If you followed that naive logic, then, just because you can define finite state machines that would not fit into this universe, desktop PCs would not even be finite state machines.
Think about that for a while. What do I mean by "Every TM denotes a particular computation"?
I am hoping it is clear now, and that this bit about desktop PCs not being TMs is goodbye Kansas. Why on earth do you think we are studying Turing machines then?
The standard reason for the potentially infinite tape of the TM is to model the fact that an ongoing computation may extend the tape (allocate more memory). This is usually thought to be a property of desktop PCs as well. With some effort, it is possible to extend the PC with as much memory as you want. There are no hard limits (except physical limits which apply to EVERYTHING including FSMs). Did you ever connect two PCs with a network? Fine. Then you know what this means.
For the record, all of this can be shown rigorously.
-- Exa 15:41, 22 January 2006 (UTC)
Some time ago I proposed splitting out recursion theory as a separate article and renaming this content to computability theory (computer science). I now think that was a mistake. I felt that non-computer-science topics were being given short shrift in the article. It now seems to me that all research topics are given short shrift.
The exposition of the elementary material is too slow, and there's not enough outline of more advanced material. The tasks of defining what it means to be computable, and establishing that there are things that aren't computable, should be described briefly, with links to computable function and Turing machine for more details. Then there should be a series of sections, one for each identifiable theme within computability theory, with a blurb on each and a link to a more-detailed main article. Themes that come to mind are complexity theory (P vs NP, PSPACE, hierarchies, etc), structure of the Turing degrees, randomness (the sort of stuff Rod Downey and his former students do), and effective descriptive set theory; computer scientists may have others they want to add. -- Trovatore 16:09, 8 February 2006 (UTC)
I'd move the first part of this article to automata theory if this wasn't for the fact that this would leave little of the article behind. — Ruud 22:55, 8 February 2006 (UTC)
Please do not neglect the layman (obviously "interested", a modicum of intelligence/maturity also presumed) - not only should they not go away empty-handed, they should also not come to a dead end. In other words, if one comes to the article knowing nothing of computability, one should learn something about it, and also be told where to find more information if one gets stuck. At the same time I am in favour of articles covering advanced topics and current research, though I understand that Wikipedia is not intended for publishing original research. So how does one accommodate these requirements? Are there useful templates: e.g. from now on secondary school / undergraduate maths / postgraduate mathematical maturity required. Or: this presumes understanding of article X - which might get awkward if that article keeps changing level! -- PJTraill 00:23, 20 February 2006 (UTC)
All physical systems are finite state machines (more precisely, finite state Markov processes), the number of states being given by the exponential of the entropy (in units of Boltzmann's constant) of the system (which is, in fact, how entropy is defined); and the state transition function being that generated by the system's Hamiltonian.
The Bekenstein Bound, in turn, places a hard limit on the size of the state space and on the entropy of anything occupying a given region of space (and even the space, itself), no matter what its composition.
More on that, below.
The finitude of the state space is, in fact, a centrally important feature of statistical and thermal physics, associated with the notions of recurrence times and ergodicity.
No machine, at the present time, uses more than a tiny fraction of the (finite) state space available to it, nor will any in the foreseeable future, at least not until the full development of nanotechnology enables the employment of the atomic and subatomic structure of the physical system in question.
All systems occupying a finite volume -- including space, itself, and the fields pervading it -- no matter what their composition, will be subject to the Bekenstein Bound.
The Bekenstein Bound, more precisely, provides a hard limit on the size of the state space associated with anything in any region of space bounded by a surface of area A. It is the exponential of A, itself, with A given in units of 1/4 of the Planck area. The hard limit on the system's entropy is kA, where k is Boltzmann's constant and A is, again, given in units of 1/4 the Planck area.
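As a worked illustration (mine, not from the discussion), taking the standard Bekenstein-Hawking form S/k = A / (4 l_P^2) — the same statement as above with the factor of 1/4 Planck area made explicit — one can compute just how large, yet finite, the bound is for an everyday region:

```python
import math

# Physical constants (SI, CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_area   = planck_length ** 2

def max_entropy_over_k(radius_m):
    """Bekenstein-Hawking bound S/k = A / (4 l_P^2) for a sphere of the
    given radius: the dimensionless entropy of a black hole filling it."""
    area = 4 * math.pi * radius_m ** 2
    return area / (4 * planck_area)

# A 1-metre sphere: the bound is enormous, but finite.
s_over_k = max_entropy_over_k(1.0)
# The state count is e^(S/k); report its base-10 logarithm instead,
# since the number itself overflows any float.
log10_states = s_over_k / math.log(10)
print(f"S/k ~ {s_over_k:.3e}, log10(#states) ~ {log10_states:.3e}")
```

For a 1-metre sphere this gives S/k on the order of 10^70, so the number of states is roughly 10^(10^69) — beyond astronomical, but a finite state space nonetheless, which is the point being made here.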
It turns out that for a spherical region, the limit is exactly that which is reached the moment enough information and matter is crammed into the region to produce a black hole of the same size as the region, itself. The theoretical maximum entropy of the region is then realized as none other than what has already been asserted since the 1970s to be the entropy of the black hole, itself.
(See article on Black hole thermodynamics).
So, to make a long story short, the foregoing stands as a decisive counterpoint to the idea of endlessly cramming more and more into a machine. If you cram too much storage, fields, light, electricity or matter (or even sound, pressure, tension, stress or energy) into a given machine, you'll eventually reach the point when the whole thing will collapse under its own gravity and become a black hole. At that point, the theoretical maximum will be reached for the region enclosed by the black hole.
Anything more crammed in will just make the black hole larger.
(But you'll never see it get larger, unless you fall in, nor will you see anything that's too close to the event horizon at all, since time's nearly frozen to a standstill on the event horizon and everything close to it is red shifted into invisibility).
Eventually, as per Hawking, the black hole will radiate all of its energy away (and all of its information contents) as thermal radiation and evaporate. (See Hawking Radiation and Black hole information paradox, the latter having led in just the past few years to a much deeper understanding of what an event horizon is, what it is not, and its close relation to the notion of Rindler horizons).
None of the foregoing, it's worth pointing out, has any bearing on whether the universe, itself, is more than an FSM. That does not follow, even given the fact that all the finite volumes within the universe have finite state spaces.
That is, it does not follow, unless the Universe, itself, is finite. It doesn't even need to follow that the Universe has any well-defined state space at all, in the first place, finite or infinite. As Smolin has pointed out, since the underlying state space (as per Quantum Gravity) is a certain space of 3-manifolds, and since the classification problem for 3-manifolds may very well be unsolvable, it's quite possible that there may not be any effectively definable state space for the Universe at all. (The theory of 3-manifolds is closely related to knot theory, since the complement of a knot is a 3-manifold).
But the one thing that can be said is that the state space associated with the sky -- that is, the past light cone of a given spacetime point -- is most likely finite. That's because the past light cone has the structure of a hypersphere. That's true, regardless of whether the Universe is finite with positive curvature, infinite and spatially flat or infinite with negative curvature. In all cases, the sky is a hypersphere that has the Big Bang at its antipode. The Big Bang is the one and only event that is directly in the line of sight in every direction from every place in the Universe at every time (though it is somewhat obscured by the cosmic microwave background or CMB).
Now, the classical non-trivial example of an infinite state machine (ISM) is that recognizing the Dyck language, given by the grammar
S -> 1; S -> u S v S
(where 1 denotes the empty word), which has as its minimal state diagram
--> [0] --u--> [1] --u--> [2] --u--> [3] ...
    [0] <--v-- [1] <--v-- [2] <--v-- [3] ...
with an infinite number of states -- easily recognized as the state diagram of a particular 1-counter machine.
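As an illustration (mine, not part of the original discussion), that 1-counter machine takes only a few lines of Python: the counter value is exactly the state index in the diagram above, so the recognizer's text is finite even though the machine it describes has unboundedly many reachable states:

```python
def accepts_dyck(word):
    """Recognize the one-sided Dyck language over {u, v}: 'u' moves right
    in the infinite state diagram (increment the counter), 'v' moves left
    (decrement). Reject if the counter would go negative; accept iff the
    run ends back at state [0]."""
    state = 0
    for symbol in word:
        if symbol == 'u':
            state += 1
        elif symbol == 'v':
            if state == 0:
                return False   # no transition to the left of state [0]
            state -= 1
        else:
            return False       # symbol outside the alphabet {u, v}
    return state == 0

print(accepts_dyck("uuvv"), accepts_dyck("vu"))  # → True False
```

The word u^n forces the machine into state [n], and states [m] and [n] with m != n are distinguished by the suffix v^m, so no finite-state machine can do this job.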
In theory, a quantum system like the electromagnetic field is supposed to provide a similar infinity of states, each frequency mode admitting a set of states containing 0, 1, 2, 3 or more quanta of that frequency (see article on Quantum Electrodynamics). However, with the Bekenstein Bound in place, this will invariably break down at a high enough energy (as does the rest of quantum field theory). The manner of the breakdown, though, is still unknown. But it must exist.
Other than the theoretical construct of the quantum field (which, as per the above, breaks down), there is nothing that can embody even the deceptively simple automaton depicted by the state diagram above, since it has an infinite number of states. The same goes, all the more so, for the other varieties of infinite state automata, including pushdown automata and Turing machines, never mind any instance of a universal Turing machine.
There are, of course, instances of each of these classes where the number of states that can be reached from the start state is finite. Then, since the minimal state diagram is finite, they are equally representable as finite state machines. But in the general case -- like that for the Dyck language above -- the number of states accessible from the start state is infinite, the minimal state diagram is infinite and it is simply not representable as a finite state machine.
The utility of Turing Machines or any of the other varieties of infinite state machine formalisms (push down automata, counter automata, etc.) is that they provide a more efficient means of representing finite state machines and the physical systems they may embody. Obviously, the more powerful a formalism, the greater utility it will have -- even for those things that could theoretically be represented by lesser means.
Greater utility also means greater relevance, not the other way around!
Oracles provide a still more powerful mechanism, and despite the fact that they are strictly more powerful than TMs, they are no less of an idealization than any other infinite state automaton formalism, and no less physically relevant. But theorists haven't yet made a great deal of use of them. There is, invariably, a notion of resource bounded oracles for each variety of oracle, and this is bound to be just as relevant for finite state machines and physical systems as is that of the resource bounded TM.
The primary mode of application of any of the varieties of ISM is through the notion of "resource-boundedness". So, it is a seriously misleading notion to even assert that the exclusion of any ISM from the realm of physical reality also means their irrelevance!
The whole foundation of complexity theory, in fact, is based on this notion of resource boundedness. And that lies at the center of computer science. So, of course, an unphysical model is relevant to physical devices! But neither the 1-counter machine, push down automaton, Turing Machine, nor any variety of oracle is fully realizable within the physical world as such.
What about Universal Turing Machines? Shouldn't they be included as well? 70.111.251.203 14:42, 7 March 2006 (UTC)
Found use of first person. While text is clear and easy to understand, it is still bad style. 210.50.201.22 05:35, 26 May 2006 (UTC)
If you really want to try to rewrite it without the first person, and make it even remotely close to as easy to understand, be my guest. This is a very common style for proofs. -- Readams 15:59, 6 June 2006 (UTC)
How is the following statement confusing? "Actually, these are computer programs written in general-purpose programming languages which are Turing Machines (the GP languages are, therefore, called Turing-complete), while the actual computers executing the programs are FSMs." I have included it in the article even before looking into the discussion, to disambiguate the popular confusion: "Poll: Are PCs Turing Machines?" This is the actual and vast confusion. What I see at this page proves my doubts. Please explain what is so confusing in my sentence? Perhaps it is rather wrong or inappropriate? -- Javalenok 09:02, 13 June 2006 (UTC)
This discussion is copied from the user page.
You changed Church-Turing thesis to claim that it had been disproven. This is certainly not the case. Please feel free to discuss your changes on the talk page of the article. CMummert · talk 10:49, 12 April 2007 (UTC)
I don't understand this idea. What does it mean to decide a language? Unfree ( talk) 16:32, 30 January 2008 (UTC)
For anyone keeping this article on their watchlist: there is currently an attempt at merging a number of articles on (non-)recursive/(un)decidable/(un)computable sets/languages/decision problems into a single, thorough article. The experiment currently sits at Recursive languages and sets. Comments are very welcome. Pichpich ( talk) 00:43, 21 February 2008 (UTC)
This is a duplicate post from Talk:Algorithm#A_paradoxical_situation. To keep the discussion in one place, please comment there if you are interested. — Carl ( CBM · talk) 20:43, 19 March 2008 (UTC)
There is a discussion currently at Talk:Recursion theory#Reorganize the Computability articles about reorganising the computability articles to get some sort of hierarchy. In particular the thinking is that Computability should be an introduction rather than a disambiguation page. Dmcq ( talk) 11:26, 14 August 2009 (UTC)
![]() | This is the
talk page of a
redirect that targets the page: • Computability theory Because this page is not frequently watched, present and future discussions, edit requests and requested moves should take place at: • Talk:Computability theory |
I believe the Entscheidungsproblem was first proved unsolvable by Church, and months later by Turing. Goedel's theorems don't really talk about algorithms, so they don't directly apply. Of course, Goedel's trick of Goedel numbering and the Barber paradox is really the key to the Entscheidungsproblem and halting problem, everything else is details. --AxelBoldt
The context-free languages need a stack and a non-deterministic finite-state machine; deterministic won't do. --AxelBoldt
The languages that are accepted by a Turing machine are exactly those that are generated by formal grammars.
This is wrong. I doubt it is all formal grammars. I expect that it is context sensitive grammars the author was thinking of. Regular grammars are FSM equivalent, as I recall. -- BenBaker
No, the context sensitive grammars generate exactly the languages accepted by linear bounded nondeterministic Turing machines (Turing machines with only a linear amount of memory). The set of languages accepted by all possible Turing machines is exactly equal to the set of languages generated by all possible formal grammars. The sentence in the article is correct. See Chomsky hierarchy for more details. -- LC
I moved the following question from the article. I don't know the answer. (¿Isn't it 3 parameters?)
Hi Cwitty, I'm not sure what form Turing's definition took. But your change seems unhelpful, because it puts too much weight on the word "arbitrarily".
Chaitin's constant, for example, can be approximated from below with unlimited accuracy, simply by simulating all machines and adding in a component for those which halt. It is not computable, because there is no way of telling how close the approximation is at any point.
I'm pretty sure the "incorrect" defition you changed is actually equivalent to the "correct" one. -- Pde 06:54, 8 Nov 2003 (UTC)
I didn't edit the page, but I believe it should read "given ANY statement in [FOL]...", not "given A statement in [FOL]"...
also, it might be helpful to add that godel's general recursive functions are also equivalent to the notion of Turing-computable functions and lambda-definable functions.
I think the article should be rewritten to focus more on computable functions instead of Turing machines. This should be done in most articles in the computability category. Of course Turing machines are easier to grasp then the abstract concept of computable function, but by focussing on the abstract concept we can eliminate the redundant discussion of lamda calculus, recursive function and Church-Turing thesis in each computability article. I guess in many cases the proofs and theorems will get shorter too. We can always add a concrete example using Turing machines if the article gets to abstract MathMartin 23:59, 9 Nov 2004 (UTC)
As far as I remember (and correct me if I am wrong), this article used to have a lot of content. Now it is just 8 lines. Computability theory deserves more than just a short definition and two links. This is an ENCYCLOPEDIA, not a DICTIONARY. Who destroyed the old content?! ---- David November 13, 2005
It's become trendy in recent years, due to the influence of Robert I. Soare, to use the term "computability theory" to refer to what's traditionally called "recursion theory", and in fact recursion theory is a redirect here. However this article is not much about recursion theory (Turing degrees, effective descriptive set theory, that sort of thing), but more about models of in-principle-physically-realizable computation. Recursion theory is a branch of math logic, not of theory of computation.
So what do we do about this? I see there's a merge template to merge to computation. For most of the content in the current article that's probably fine. But there should also be an article about what math-logicians call computability theory or recursion theory. Perhaps we need a dab page under the name computability theory. -- Trovatore 04:26, 26 November 2005 (UTC)
To tell the truth I'm not sure the articles should be split at all. What's clear is that, whatever article is called recursion theory (or where recursion theory redirects) has to have a very different focus from the current article here; this current article seems to be mainly about things of Turing degree 0. In the end perhaps we should have just one article, but it shouldn't spend so much time talking about Turing degree 0 in fifteen different ways, but should move quickly into the real meat of the topic, which starts with oracles. -- Trovatore 06:05, 26 November 2005 (UTC)
The current article isn't really suited as the main article for all of Theory of Computation, as when I wrote it I was considered only computability theory and not the other major branch, complexity theory. So do we want separate articles on computability and complexity, or do we want to expand this article greatly to include some discussion of complexity theory as well? Clearly any article called "Theory of Computation" should at least mention P =? NP. —The preceding unsigned comment was added by Readams ( talk • contribs) 17:34, 26 November 2005.
For the moment, I've removed the following:
First, it's not clear just what connection between the incompleteness theorems and the halting problem is being claimed, but whatever it is, I'd want to see a ref that Chaitin was the first one to find it. Chaitin, like Wolfram, is a bit of a self-promoter, and is good enough at self-promotion that a lot of people give him credit for things that might not have been quite as original as they think. In particular Chaitin's Omega was by no means the first non-computable number to be defined. -- Trovatore 04:06, 2 December 2005 (UTC)
(Also see [1])
The author probably didn't read the enlightening discussion in the new edition of the Cinderella book.
It would take too much effort from me to explain it completely, but I am going to remove those uninformed assumptions that the tape of a turing machine is an infinitely long resource. In fact, at any moment, the turing machine has a finite ID, and thus the infinity talked of is potential infinity (much like the infinity of Z) Furthermore, the TM is a mathematical model. Every TM denotes a particular computation. The author probably doesn't recognize that. The potential infinity there is an idealization. It does not mean that physical machines cannot be TMs!
If you followed that naive logic, just because you can define finite state machines that would not fit into this universe, desktop pcs would not even be finite state machines.
Think about that for a while. What do I mean by "Every TM denotes a particular computation"?
I am hoping it is clear now, and now this bit about desktop pcs not being TMs is goodbye kansas. Why on earth do you think we are studying turing machines then?
The standard reason for the potentially infinite tape of the TM is to model the fact that an ongoing computation may extend the tape (allocate more memory). This is usually thought to be a property of desktop PCs as well. With some effort, it is possible to extend the PC with as much memory as you want. There are no hard limits (except physical limits which apply to EVERYTHING including FSMs). Did you ever connect two PCs with a network? Fine. Then you know what this means.
For the record, all of this can be shown rigorously.
-- Exa 15:41, 22 January 2006 (UTC)
Some time ago I proposed splitting out recursion theory as a separate article and renaming this content to computability theory (computer science). I now think that was a mistake. I felt that non-computer-science topics were being given short shrift in the article. It now seems to me that all research topics are given short shrift.
The exposition of the elementary material is too slow, and there's not enough outline of more advanced material. The tasks of defining what it means to be computable, and establishing that there are things that aren't computable, should be described briefly, with links to computable function and Turing machine for more details. Then there should be a series of sections, one for each identifiable theme within computability theory, with a blurb on each and a link to a more-detailed main article. Themes that come to mind are complexity theory (P vs NP, PSPACE, hierarchies, etc), structure of the Turing degrees, randomness (the sort of stuff Rod Downey and his former students do), and effective descriptive set theory; computer scientists may have others they want to add. -- Trovatore 16:09, 8 February 2006 (UTC)
I'd move the first part of this article to automata theory if this wasn't for the fact that this would leave little of the article behind. — Ruud 22:55, 8 February 2006 (UTC)
Please do not neglect the layman (obviously "interested", a modicum of intelligence/maturity also presumed) - not only should they not go away empty-handed, they should also not come to a dead end. In other words, if one comes to the article knowing nothing of computability, one should learn something about it, and also be told where to find more information if one gets stuck. At the same time I am in favour of articles covering advanced topics and current research, though I understand that Wikipedia is not intended for publishing original research. So how does one accomodate these requirements? Are there useful templates: e.g. from now on secondary school / undergraduate maths / postgraduate mathematical maturity required. Or: this presumes understanding of article X - which might get awkward if that article keeps changing level! -- PJTraill 00:23, 20 February 2006 (UTC)
All physical systems are finite state machines (more precisely, finite state Markov processes), the number of states being given by the exponential of the entropy (in units of Boltzmann's constant) of the system (which is, in fact, how entropy is defined); and the state transition function being that generated by the system's Hamiltonian.
The Bekenstein Bound, in turn, places a hard limit on the size of the state space and on the entropy of anything occupying a given region of space (and even the space, itself), no matter what its constituency.
More on that, below.
The finitude of the state space is, in fact, a centrally important feature of statistical and thermal physics, associated with the notions of recurrence times and ergodicity.
No machine, at the present time, uses more than a tiny fraction of the (finite) state space available to it, nor will any in the foreseeable future possibly until the full development of nanotechnology enables the employment of the atomic and subatomic structure of the physical system in question.
All systems occupying a finite volume -- including space, itself, and the fields pervading it -- no matter what their composition, will be subject to the Bekenstein Bound.
The Bekenstein Bound, more precisely, provides a hard-limit on the size of the state space associated with anything in any region of space bounded by a surface of area A. It is the exponential of A, itself, with A given in units of 1/4 of the Planck area. The hard-limit on the system's entropy is kA, where k is Boltzmann's constant and A, again, in units of 1/4 the Planck area.
It turns out that for a spherical region, the limit is exactly that which is reached the moment enough information and matter is crammed into the region to produce a black hole of the same size as the region, itself. The theoretical maximum entropy of the region is then realized as none other than what has already been asserted since the 1970's to be the entropy of the black hole, itself.
(See article on Black hole thermodynamics).
So, to make a long story short, the foregoing stands as a decisive counterpoint to the idea of endlessly cramming more and more into a machine. If you cram too much storage, fields, light, electricity or matter (or even sound, pressure, tension, stress or energy) into a given machine, you'll eventually reach the point where the whole thing will collapse under its own gravity and become a black hole. At that point, the theoretical maximum will be reached for the region enclosed by the black hole.
Anything more crammed in will just make the black hole larger.
(But you'll never see it get larger, unless you fall in, nor will you see anything that's too close to the event horizon at all, since time is nearly frozen to a standstill at the event horizon and everything close to it is redshifted into invisibility).
Eventually, as per Hawking, the black hole will radiate all of its energy (and all of its information content) away as thermal radiation and evaporate. (See Hawking radiation and Black hole information paradox, the latter having led in just the past few years to a much deeper understanding of what an event horizon is, what it is not, and its close relation to the notion of Rindler horizons).
None of the foregoing, it's worth pointing out, has any bearing on whether the universe, itself, is more than an FSM. That does not follow, even given that all the finite volumes within the universe have finite state spaces.
That is, it does not follow, unless the Universe, itself, is finite. It doesn't even follow that the Universe has any well-defined state space at all in the first place, finite or infinite. As Smolin has pointed out, since the underlying state space (as per Quantum Gravity) is a certain space of 3-manifolds, and since the classification problem for 3-manifolds may very well be unsolvable, it's quite possible that there may not be any effectively definable state space for the Universe at all. (The theory of 3-manifolds is closely related to knot theory, since the complement of a knot is itself a 3-manifold).
But the one thing that can be said is that the state space associated with the sky -- that is, the past light cone of a given spacetime point -- is most likely finite. That's because the past light cone has the structure of a hypersphere. That's true regardless of whether the Universe is finite with positive curvature, infinite and spatially flat, or infinite with negative curvature. In all cases, the sky is a hypersphere that has the Big Bang at its antipode. The Big Bang is the one and only event that is directly in the line of sight in every direction from every place in the Universe at every time (though it is somewhat obscured by the cosmic microwave background or CMB).
Now, the classical non-trivial example of an infinite state machine (ISM) is that recognizing the Dyck language, given by the grammar
S -> 1; S -> u S v S
(1 = empty word); which has as its minimal state diagram
 [0] --u--> [1] --u--> [2] --u--> [3] ...
     <--v--     <--v--     <--v--
with an infinite number of states -- easily recognized as the state diagram of a particular 1-counter machine.
In theory, a quantum system like the electromagnetic field is supposed to provide a similar infinity of states, each frequency mode admitting a set of states containing 0, 1, 2, 3 or more quanta of that frequency (see article on Quantum Electrodynamics). However, with the Bekenstein Bound in place, this will invariably break down at a high enough energy (as does the rest of quantum field theory). The manner of the breakdown is still unknown; but break down it does.
Other than the theoretical construct of the quantum field (which, as per the above, breaks down), there is nothing that can embody even the deceptively simple automaton depicted by the state diagram above, since it has an infinite number of states -- much less any of the other varieties of infinite state automata, including the push-down automata and Turing Machines, never mind any instance of a Universal Turing Machine.
There are, of course, instances of each of these classes where the number of states that can be reached from the start state is finite. Then, since the minimal state diagram is finite, they are equally representable as finite state machines. But in the general case -- like that for the Dyck language above -- the number of states accessible from the start state is infinite, the minimal state diagram is infinite and it is simply not representable as a finite state machine.
The utility of Turing Machines or any of the other varieties of infinite state machine formalisms (push down automata, counter automata, etc.) is that they provide a more efficient means of representing finite state machines and the physical systems they may embody. Obviously, the more powerful a formalism, the greater utility it will have -- even for those things that could theoretically be represented by lesser means.
Greater utility also means greater relevance, not the other way around!
Oracles provide a still more powerful mechanism, and despite being strictly more powerful than TMs, they are no less of an idealization than any other infinite state automaton formalism, and no less physically relevant. But theorists haven't yet made a great deal of use of them. There is, invariably, a notion of resource-bounded oracles for each variety of oracle, and this is bound to be just as relevant for finite state machines and physical systems as is that of the resource-bounded TM.
The primary mode of application of any of the varieties of ISM is through the notion of "resource-boundedness". So it is seriously misleading to assert that the exclusion of any ISM from the realm of physical reality also means their irrelevance!
The whole foundation of complexity theory, in fact, is based on this notion of resource boundedness. And that lies at the center of computer science. So, of course, an unphysical model is relevant to physical devices! But neither the 1-counter machine, push down automaton, Turing Machine, nor any variety of oracle is fully realizable within the physical world as such.
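To make the resource-boundedness point concrete, here is a hypothetical sketch (the construction and names are mine, not from the discussion): cap the counter of the Dyck-language machine at some finite depth k, and the infinite state diagram collapses into a genuine finite state machine with k+2 states (states 0..k plus one dead state), accepting exactly the Dyck words of nesting depth at most k.

```python
def make_depth_bounded_dyck(k: int):
    """Build a finite state machine (states 0..k plus a dead state)
    accepting exactly the Dyck words of nesting depth <= k."""
    DEAD = k + 1
    def accepts(word: str) -> bool:
        state = 0
        for symbol in word:
            if state == DEAD:         # dead state absorbs everything
                return False
            if symbol == 'u':
                state = state + 1 if state < k else DEAD
            elif symbol == 'v':
                state = state - 1 if state > 0 else DEAD
            else:
                state = DEAD          # symbol outside the alphabet
        return state == 0             # accept only back in state 0
    return accepts

fsm = make_depth_bounded_dyck(2)
# 'uuvv' has depth 2 (accepted); 'uuuvvv' has depth 3 (rejected).
```

This is the sense in which an unphysical infinite-state model remains relevant: every physically realizable recognizer is one of these finite truncations, and the infinite machine is the uniform description of the whole family.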
What about Universal Turing Machines? Shouldn't they be included as well? 70.111.251.203 14:42, 7 March 2006 (UTC)
Found use of first person. While text is clear and easy to understand, it is still bad style. 210.50.201.22 05:35, 26 May 2006 (UTC)
If you really want to try to rewrite it without the first person, and make it even remotely close to as easy to understand, be my guest. This is a very common style for proofs. -- Readams 15:59, 6 June 2006 (UTC)
How is the following statement confusing? "Actually, these are computer programs written in general-purpose programming languages which are Turing Machines (the GP languages are, therefore, called Turing-complete), while the actual computers executing the programs are FSMs." I included it in the article even before looking into the discussion, to disambiguate the popular confusion: "Poll: Are PCs Turing Machines?" This is the actual and vast confusion. What I see at this page proves my doubts. Please explain what is so confusing in my sentence? Perhaps it is rather wrong or inappropriate? -- Javalenok 09:02, 13 June 2006 (UTC)
This discussion is copied from the user page.
You changed Church-Turing thesis to claim that it had been disproven. This is certainly not the case. Please feel free to discuss your changes on the talk page of the article. CMummert · talk 10:49, 12 April 2007 (UTC)
I don't understand this idea. What does it mean to decide a language? Unfree ( talk) 16:32, 30 January 2008 (UTC)
For anyone keeping this article on their watchlist: there is currently an attempt at merging a number of articles on (non-)recursive/(un)decidable/(un)computable sets/languages/decision problems into a single, thorough article. The experiment currently sits at Recursive languages and sets. Comments are very welcome. Pichpich ( talk) 00:43, 21 February 2008 (UTC)
This is a duplicate post from Talk:Algorithm#A_paradoxical_situation. To keep the discussion in one place, please comment there if you are interested. — Carl ( CBM · talk) 20:43, 19 March 2008 (UTC)
There is a discussion currently at Talk:Recursion theory#Reorganize the Computability articles about reorganising the computability articles to get some sort of hierarchy. In particular the thinking is that Computability should be an introduction rather than a disambiguation page. Dmcq ( talk) 11:26, 14 August 2009 (UTC)