If we look at the DTIME hierarchy theorem, there is a log factor, due to the overhead of simulating a deterministic Turing machine with a universal machine: for time-constructible $f$, $\mathrm{DTIME}(g(n)) \subsetneq \mathrm{DTIME}(f(n))$ whenever $g(n)\log g(n) = o(f(n))$.
We do not have this kind of overhead for NTIME or DSPACE. The usual justification comes from the details of the proofs, given the difference between the simulators involved.
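For reference, here are the standard statements side by side (with $f$ time- or space-constructible as appropriate; the NTIME form is the sharp one due to Seiferas, Fischer, and Meyer):

$$
\begin{aligned}
&\text{DTIME:} && g(n)\log g(n) = o(f(n)) \;\Rightarrow\; \mathrm{DTIME}(g(n)) \subsetneq \mathrm{DTIME}(f(n)),\\
&\text{DSPACE:} && g(n) = o(f(n)) \;\Rightarrow\; \mathrm{DSPACE}(g(n)) \subsetneq \mathrm{DSPACE}(f(n)),\\
&\text{NTIME:} && g(n+1) = o(f(n)) \;\Rightarrow\; \mathrm{NTIME}(g(n)) \subsetneq \mathrm{NTIME}(f(n)).
\end{aligned}
$$

Only the deterministic time version pays the extra $\log$ on the smaller class.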
My question is the following: without going into the details of the proof of the DTIME hierarchy theorem, is there a justification for this log factor, or could it simply be a consequence of the proof, so that it would be reasonable to imagine a tighter hierarchy?
In my opinion, even granting that the simulation overhead is a good explanation, that explanation should itself be justified, by proving that if we had a better hierarchy result we could construct a better simulation.
source
Answers:
The time hierarchy theorem is the subject of my diploma project; you may want to look at the comments on my question Lower bounds and class separation.
Looking back at that question and how it relates to what you are asking, I got an idea that might show that the multitape-to-single-tape TM simulation overhead needed by the theorem's proof cannot be improved. Thus, another approach is needed if we wish to improve this result.
EDIT: This proof is incorrect, see the comments below for the exact reason. I am currently editing the answer to reflect that.
Let $A$ be the language $\{0^k 1^k \mid k \ge 0\}$.
On a single-tape machine, there is an $O(n \log n)$ algorithm (you can find the details of this algorithm in chapter 7.1.2 of Sipser's book "Introduction to the Theory of Computation"). In the same reference, you can see that a language is decidable in $o(n \log n)$ time on a single-tape TM if and only if it is regular. Kaveh also provides the original papers for this claim in the question linked above.
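For concreteness, here is a rough Python rendering of that crossing-off procedure, with a list standing in for the single tape (only meant to make the pass structure explicit, not to be a formal TM):

```python
def in_A(w: str) -> bool:
    """O(n log n)-style check that w is in {0^k 1^k | k >= 0}, following
    the crossing-off idea described in Sipser; 'x' marks a crossed-off cell."""
    tape = list(w)

    # Pass 1: reject anything not of the form 0...01...1.
    seen_one = False
    for c in tape:
        if c == '1':
            seen_one = True
        elif c == '0':
            if seen_one:
                return False
        else:
            return False

    # Repeat while both 0s and 1s remain uncrossed.
    while '0' in tape and '1' in tape:
        # Reject if the number of remaining symbols is odd.
        if sum(c in '01' for c in tape) % 2 == 1:
            return False
        # Cross off every other 0 and every other 1, starting with the first of each.
        for symbol in '01':
            cross = True
            for i, c in enumerate(tape):
                if c == symbol:
                    if cross:
                        tape[i] = 'x'
                    cross = not cross

    # Accept iff every symbol has been crossed off.
    return '0' not in tape and '1' not in tape
```

Each iteration of the loop scans the whole tape and halves the number of uncrossed symbols, so a single-tape TM organised this way makes $O(\log n)$ passes of $O(n)$ steps each, i.e. $O(n \log n)$ steps in total.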
In the comments of my question, Ryan Williams illustrates an $O(n)$ algorithm for the same problem, using a 2-tape TM.
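For intuition, the natural linear-time two-tape approach (sketched here from general knowledge, not reproduced from Ryan's comment) copies the block of 0s onto the second tape and then consumes one mark per 1:

```python
def in_A_two_tape(w: str) -> bool:
    """Linear-time check of {0^k 1^k} in the spirit of a 2-tape TM:
    the second tape holds one mark per leading 0 and is then consumed
    while reading the 1s. Illustrative sketch only."""
    tape2 = []                      # second tape
    i, n = 0, len(w)
    while i < n and w[i] == '0':    # copy phase: one mark per 0
        tape2.append('#')
        i += 1
    while i < n and w[i] == '1':    # match phase: one mark per 1
        if not tape2:
            return False
        tape2.pop()
        i += 1
    return i == n and not tape2     # input exhausted, all marks matched
```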
Assume now that there is a technique for simulating a multitape TM by a single-tape TM with a running time of $o(T(n)\log T(n))$, where $T(n)$ is the running time of the simulated TM. By applying it to the machine Ryan illustrates, we would get a single-tape TM running in $o(n\log n)$. Therefore $A$ would be regular, which is a contradiction. So we conclude that an overhead of $\log T(n)$ is the best we can do when simulating multitape machines with single-tape machines.
I realize this is a strong statement, so I may be wrong in my interpretation.
There is a well-known result stating that $\mathrm{DTIME}(n) \neq \mathrm{NTIME}(n)$. Under the assumption that $\mathrm{P} \neq \mathrm{NP}$, I believe this result improves to $\mathrm{DTIME}(n^k) \neq \mathrm{NTIME}(n)$ for every $k$. So a very small nondeterministic class is much more powerful than any deterministic one. Given how powerful a resource nondeterministic time is, I would expect that a greater amount of deterministic time is needed to make a TM powerful enough to compensate for nondeterminism.
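One way to see the claimed improvement is a standard padding argument (my reconstruction, not part of the original answer): if $\mathrm{NTIME}(n) \subseteq \mathrm{DTIME}(n^k)$ for some $k$, then padding inputs of length $n$ out to length $n^c$ gives

$$
\mathrm{NTIME}(n^c) \subseteq \mathrm{DTIME}(n^{ck}) \quad \text{for every constant } c \ge 1,
$$

hence $\mathrm{NP} = \bigcup_{c} \mathrm{NTIME}(n^c) \subseteq \mathrm{P}$. Contrapositively, $\mathrm{P} \neq \mathrm{NP}$ implies $\mathrm{NTIME}(n) \not\subseteq \mathrm{DTIME}(n^k)$ for every $k$.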
source
For $n$-tape TMs, a tight time hierarchy result similar to the space hierarchy theorem was proven by Fürer in 1982. The $\lg$ factor is not needed.
The $\lg$ factor for the time hierarchy theorem stated in your post is only for single-tape TMs. Unless you are very committed to the single-tape model for some reason, there is no difference between space and time regarding the hierarchy theorems.
There are reasons and arguments for using single-tape TMs to define time complexity classes, but this usage is not universal; e.g. see Lance Fortnow and Rahul Santhanam's [2007] paper, where they use multiple-tape TMs.
The original reference for the time hierarchy theorem is Hennie and Stearns [1966]. They prove the theorem for two-tape machines. Odifreddi's Classical Recursion Theory cites them and Hartmanis [1968] and describes a proof that looks similar to the one in Sipser's book.
However, the proof for single-tape TMs in Hartmanis's paper does not simply use simulation. It distinguishes between two cases: 1. the running time being $\Omega(n^2)$, and 2. the running time being $o(n^2)$.
For the first case it uses a simulation, and it seems that one can get rid of the $\lg$ factor if the simulations can be done more efficiently.
For the second case, the paper directly gives a language for the separation and doesn't use simulation at all. This uses particular properties of single-tape TMs with sub-quadratic running-time.
One should note that single-tape TMs with time $o(n^2)$ are not as robust, and there are quadratic lower bounds (e.g. for palindromes) on single-tape TMs, whereas a two-tape TM can solve such problems in linear time.
As I said above, unless you are committed to the single-tape TM model for some reason, there is no gap to fill even when time is sub-quadratic: the time hierarchy theorem is as tight as possible.
PS: if we are using multiple-tape TMs, i.e. a Turing machine in the class can have a fixed but arbitrary number of tapes, then Fürer's result does not apply.
source
For a fixed number of tapes greater than one, $\mathrm{Time}(o(f)) \subsetneq \mathrm{Time}(O(f))$ for time-constructible $f$. The logarithmic overhead comes from the tape reduction theorem, where any number of tapes can be converted into two tapes (or even just a single tape and a stack, and with only oblivious movement).
If the number of tapes is not fixed, we do not really have a technique to constructively prove $\mathrm{DTime}(g) \subsetneq \mathrm{DTime}(f)$ without going through the tape reduction theorem. It is plausible that for every $k$, $(k+1)$-tape machines cannot be simulated by $k$-tape machines without a logarithmic overhead.
However, that does not mean the time hierarchy theorem cannot be improved, or that $\mathrm{DTime}(o(f)) \subsetneq \mathrm{DTime}(O(f))$ fails.
In fact, we already have the following.
Theorem: For every $\varepsilon > 0$ and every $f$ of the form $n^a(\log n)^b$ ($a$ and $b$ rational; $a > 1$, or $a = 1 \wedge b \ge 0$), $\mathrm{DTime}(O(f/(\log f)^{\varepsilon})) \subsetneq \mathrm{DTime}(O(f))$.
Proof: If every language with an $O(f)$ decision algorithm can be decided in $O(f/(\log f)^{\varepsilon})$ time, then by padding the input, every language with an $O(f(n)(\log f(n))^{k\varepsilon})$ decision algorithm can be decided in $O(f(n)(\log f(n))^{(k-1)\varepsilon})$ time (where $k \ge 0$ is fixed), and so for every constant $c \ge 0$, $\mathrm{DTime}(O(f(n)(\log f(n))^{c})) = \mathrm{DTime}(O(f(n)))$, contradicting the time hierarchy theorem.
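Spelled out (my paraphrase; the padding step is exactly where the well-behavedness of $f$ is needed), applying the assumed speedup repeatedly telescopes to

$$
\mathrm{DTime}\big(O(f(\log f)^{c})\big)
\subseteq \mathrm{DTime}\big(O(f(\log f)^{c-\varepsilon})\big)
\subseteq \cdots
\subseteq \mathrm{DTime}\big(O(f)\big),
$$

while the multitape time hierarchy theorem separates $\mathrm{DTime}(O(f))$ from $\mathrm{DTime}(O(f(\log f)^{c}))$ once $c \ge 2$, since then $f \log f = o(f(\log f)^{c})$.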
However, this nonconstructive proof has three limitations:
* The proof requires $f$ to be well-behaved (not only time-constructible, but also in a certain sense continuous).
* We do not know a particular language that is in $\mathrm{DTime}(O(f))$ but not in $\mathrm{DTime}(O(f/(\log f)^{\varepsilon}))$. For a sufficiently large $k$, simulation of $k$-tape Turing machines is not in $\mathrm{DTime}(O(f/(\log f)^{\varepsilon}))$, but we have not ruled out that even for $\varepsilon = 1$ and $f(n) = n^2$, the least such $k$ is $> BB(BB(1000))$, where $BB$ is the busy beaver function.
* We do not know that the inclusion is robust. A $\mathrm{DTime}(O(f/(\log f)^{\varepsilon}))$ algorithm will fail for some input, but we have not proved that it fails for some input for all but finitely many input sizes (though it would be very surprising if it did not).
source