The notion of polynomial-time reductions (Cook reductions) abstracts a very intuitive concept: efficiently solving one problem by using an algorithm for another problem.
However, in the theory of NP-completeness, the notion of NP-hardness is captured via mapping reductions (Karp reductions). This concept of "restricted" reductions is far less intuitive (at least to me). It even seems somewhat contrived, since it yields a slightly less intuitive notion of hardness; by that I refer to the fact that NP-hardness does not trivially entail coNP-hardness. Although in complexity theory we are very used to the idea that being able to solve a problem such as SAT does not mean we can solve its complement, in natural settings (which are captured by Cook reductions), if we assume we have an algorithm that solves SAT, we can solve coSAT simply by running the algorithm for SAT and returning the opposite answer.
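To make the contrast concrete, here is a toy sketch. The brute-force `sat_oracle` below is only a hypothetical stand-in for a SAT decision procedure; the point is that a Cook reduction is allowed to post-process the oracle's answer, so one oracle call plus a negation decides the complement problem, something a many-one reduction cannot do.

```python
from itertools import product

def sat_oracle(formula):
    """Toy stand-in for a SAT oracle: brute-force over all assignments.
    A formula is a list of clauses; each clause is a list of signed
    variable indices (DIMACS style), e.g. [[1, -2], [2]]."""
    variables = sorted({abs(lit) for clause in formula for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in formula):
            return True
    return False

def co_sat(formula):
    """Cook reduction from coSAT to SAT: one oracle call, answer negated.
    No Karp (many-one) reduction is known to do this; one would imply
    NP = coNP."""
    return not sat_oracle(formula)

# (x1) AND (NOT x1) is unsatisfiable, so coSAT accepts it.
print(co_sat([[1], [-1]]))  # True
```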
My question is: why should we use Karp reductions for the theory of NP-completeness? What intuitive notion do they capture? How do they relate to the way we understand "hardness of computation" in the real world?
Answers:
Like Turing reductions, many-one reductions came into complexity theory from the computability/recursion theory literature. Cook and Karp reductions are natural complexity-theoretic versions of similar existing reductions in computability.
There is an intuitive way of explaining many-one reductions: a many-one reduction is a restriction of a Turing reduction in which we may ask the oracle only a single question, and the oracle's answer is our answer.
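A toy sketch of this restriction (the problems here are invented for illustration): deciding "does this list of integers have an even sum?" by a single query to an oracle for "is this integer even?", where the oracle's verdict is forwarded verbatim.

```python
def even_oracle(n):
    """Oracle for the (trivial) target problem: is n even?"""
    return n % 2 == 0

def even_sum(xs):
    """Many-one reduction: map the instance to ONE oracle instance
    (the sum), ask once, and return the oracle's answer unchanged —
    no post-processing, no further queries."""
    return even_oracle(sum(xs))

print(even_sum([1, 3, 4]))  # True: 1 + 3 + 4 = 8 is even
print(even_sum([1, 2]))     # False: 3 is odd
```

A Turing reduction, by contrast, could query `even_oracle` many times, adaptively, and combine the answers arbitrarily.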
Now the question is: why would we need to study this (and other kinds of reductions, such as truth-table, weak truth-table, etc.)?
These reductions give a finer picture than Turing reductions. Turing reductions are too powerful to distinguish between many concepts. A very large portion of computability theory is devoted to the study of c.e./r.e. degrees, and the notion of a c.e. set is central there. A Turing machine may be able to enumerate an infinite set even when we cannot enumerate its complement. If you want to study c.e. sets, then Turing reducibility is too strong, since c.e. sets are not closed under it. So many-one reductions are a (and maybe the) natural way of defining reductions for this purpose.
Other types of reductions are defined for similar reasons. If you are interested, I would suggest checking Piergiorgio Odifreddi's "Classical Recursion Theory"; it has a quite comprehensive chapter on different reducibilities and their relations.
Now, for complexity theory the argument is similar. If you accept that NP is an extremely natural class of problems and you want to study NP, then Cook reductions are too strong: NP is not known to be closed under them. The natural choice is a weaker reduction such that NP is closed under it and we can prove the existence of complete problems for NP with respect to it. Karp reductions are the natural choice for this purpose.
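The closure property is easy to see on a textbook example: INDEPENDENT-SET many-one reduces to CLIQUE by complementing the graph, so a (toy, brute-force) NP algorithm for CLIQUE immediately decides INDEPENDENT-SET. The instance mapping runs in polynomial time, and the answer is forwarded unchanged.

```python
from itertools import combinations

def complement_graph(vertices, edges):
    """Polynomial-time instance mapping: same vertices, complemented edges."""
    edges = {frozenset(e) for e in edges}
    return {frozenset({u, v}) for u, v in combinations(vertices, 2)} - edges

def has_clique(vertices, edges, k):
    """Toy brute-force stand-in for an NP algorithm deciding CLIQUE."""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset({u, v}) in edges
                   for u, v in combinations(subset, 2))
               for subset in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    """Karp reduction: one mapping, one call, answer returned as-is."""
    return has_clique(vertices, complement_graph(vertices, edges), k)

# In the path a-b-c, {a, c} is an independent set of size 2.
print(has_independent_set(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], 2))  # True
```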
source
There are several questions on this site related to Cook vs Karp reductions. I have not seen a very clear description of this for the neophyte, because it is somewhat inherently subtle in many ways and it is an active/open area of research. Here are some refs that may help address it. As Wikipedia summarizes, "Many-one reductions are valuable because most well-studied complexity classes are closed under some type of many-one reducibility, including P, NP, L, NL, co-NP, PSPACE, EXP, and many others. These classes are not closed under arbitrary many-one reductions, however."
It seems fair to say that even advanced theorists are actively pondering the exact distinctions and differences, as in the refs below, and the full story won't be available unless important open complexity-class separations are resolved; i.e., these questions seem to cut to the center of the known vs. the unknown.
[1] Lutz, Mayordomo, "Cook versus Karp-Levin: Separating Completeness Notions If NP Is Not Small" (1992)

[2] Beigel, Fortnow, "Are Cook and Karp Ever the Same?"

[3] "More NP-Complete Problems" (PPT); see slides 9-14 on the history and the Cook vs Karp reduction distinctions
source