• The error of the halting problem that Ben Bacarisse did not understand

    From olcott@3:633/10 to All on Mon Oct 13 20:24:00 2025
    On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
    Python <python@invalid.org> writes:

    Olcott (annotated):

    If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running

    [comment: as D halts, the simulation is faulty; Prof. Sipser has
    been fooled by Olcott's shell game, conflating "pretending to
    simulate" with "correctly simulating"]

    unless aborted then H can abort its simulation of D and correctly
    report that D specifies a non-halting sequence of configurations.
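
    [Illustration: a minimal C sketch of the construction under
    discussion. The names H and D come from the thread; the bodies
    are stand-ins rather than anyone's actual code, with H stubbed
    to return the disputed verdict instead of doing any simulation.]

    #include <stdio.h>

    typedef void (*prog)(void);   /* generic code pointer */

    static int H(prog p, prog i); /* forward declaration */

    /* D is the usual diagonal construction: it asks H about itself
       and then does the opposite of whatever H reports. */
    static void D(void)
    {
        if (H(D, D))      /* H says "halts"       -> loop forever */
            for (;;) ;
                          /* H says "never halts" -> fall through and halt */
    }

    /* Stand-in for the claimed simulating halt decider. A real H
       would simulate its input; this stub simply returns the verdict
       defended in the thread: 0, meaning "the input does not halt". */
    static int H(prog p, prog i)
    {
        (void)p; (void)i;
        return 0;
    }

    int main(void)
    {
        printf("H(D,D) = %d   (0 = \"does not halt\")\n", H(D, D));
        D();              /* and yet the directly executed D() ... */
        printf("D() returned, i.e. it halted.\n");
        return 0;
    }

    [Compiled as is, this prints H(D,D) = 0 and then reports that D()
    returned, which is exactly the mismatch the thread is about.]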

    I don't think that is the shell game. PO really /has/ an H (it's
    trivial to do for this one case) that correctly determines that P(P)
    *would* never stop running *unless* aborted. He knows and accepts that
    P(P) actually does stop. The wrong answer is justified by what would
    happen if H (and hence a different P) were not what they actually are.

    (I've gone back to his previous names, where P is Linz's H^.)

    In other words: "if the simulation were right the answer would be
    right".

    I don't think that's the right paraphrase. He is saying if P were
    different (built from a non-aborting H) H's answer would be the right
    one.

    But the simulation is not right. D actually halts.

    But H determines (correctly) that D would not halt if it were not
    halted. That much is a truism. What's wrong is to pronounce that
    answer as being correct for the D that does, in fact, stop.
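
    [Illustration: the contrast being drawn can be sketched as a toy
    model in which recursion depth stands in for simulation steps.
    The h_aborts flag and the MAX_DEPTH cutoff are artifacts of the
    demo; the cutoff exists only so the non-aborting case can be
    shown without actually running forever.]

    #include <stdio.h>

    #define MAX_DEPTH 4

    typedef void (*prog)(void);

    static int h_aborts;   /* 1: H aborts its simulation and reports 0
                              0: H keeps simulating (the hypothetical H) */
    static int depth;

    static int H(prog p, prog i);

    static void D(void)
    {
        if (H(D, D))
            for (;;) ;     /* do the opposite of H's verdict */
    }

    static int H(prog p, prog i)
    {
        if (h_aborts)
            return 0;                  /* abort, report "does not halt" */

        if (++depth > MAX_DEPTH) {
            printf("  ... and so on without bound\n");
            return 0;                  /* cutoff for the demo only */
        }
        printf("  H simulates D, whose first act is to call H(D,D)"
               "  (depth %d)\n", depth);
        return H(p, i);                /* nested simulation never decides */
    }

    int main(void)
    {
        puts("Hypothetical non-aborting H:");
        depth = 0;
        h_aborts = 0;
        H(D, D);

        puts("Actual aborting H:");
        h_aborts = 1;
        printf("  H(D,D) = %d, and yet the directly executed D() ",
               H(D, D));
        D();
        puts("halts.");
        return 0;
    }

    [With h_aborts = 0 it traces the re-entry that would never end;
    with h_aborts = 1 it shows H(D,D) == 0 while the directly
    executed D() halts.]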


    As I have been saying for years now, H(D) does
    correctly report on the actual behavior specified
    by its input. More recently I have added that the
    direct execution of D() is not in the domain of H.

    *The halting problem breaks with reality*
    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes, it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475



    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number of
    tokens, roughly words, that they can keep in their head
    at one time) has increased from 3,000 to 200,000
    (200,000 / 3,000 ≈ 67).

    Because of this they have become enormously more
    powerful at semantic logical entailment. They can
    simultaneously handle the constraints of many
    complex premises to correctly derive the conclusions
    that deductively follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises, AI hallucination cannot occur.

    --


    And Peter Olcott is a [*beep*]

    It's certainly dishonest to claim support from an expert who clearly
    does not agree with the conclusions. Pestering, and then tricking,
    someone into agreeing to some vague hypothetical is not how academic
    research is done. Had PO come clean and ended his magic paragraph with
    "and therefore 'does not 'halt' is the correct answer even though D
    halts" he would have got a more useful reply.

    Let's keep in mind this is exactly what he's saying:

    "Yes [H(P,P) == false] is the correct answer even though P(P) halts."

    Why? Because:

    "we can prove that Halts() did make the correct halting decision when
    we comment out the part of Halts() that makes this decision and
    H_Hat() remains in infinite recursion"
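
    [Illustration: mapping the quoted names onto the same toy model,
    with Halts in the role of H and H_Hat in the role of D/P. The
    aborts flag models keeping or "commenting out" the part of
    Halts() that makes the decision; DEMO_CUTOFF is a demo artifact
    standing in for "remains in infinite recursion".]

    #include <stdio.h>

    #define DEMO_CUTOFF 3

    typedef void (*prog)(void);

    static int aborts = 1;   /* set to 0 to model commenting out the abort */
    static int depth;

    static int Halts(prog p, prog i)
    {
        if (aborts)
            return 0;                  /* the decision: "does not halt" */
        if (++depth > DEMO_CUTOFF) {   /* stands in for endless recursion */
            printf("  H_Hat() remains in (simulated) infinite recursion\n");
            return 0;
        }
        printf("  Halts simulates H_Hat, which calls Halts(H_Hat, H_Hat)\n");
        return Halts(p, i);
    }

    static void H_Hat(void)
    {
        if (Halts(H_Hat, H_Hat))
            for (;;) ;                 /* contradict a "halts" verdict */
    }

    int main(void)
    {
        printf("Halts(H_Hat, H_Hat) = %d\n", Halts(H_Hat, H_Hat));
        H_Hat();
        printf("H_Hat() halted anyway.\n");
        return 0;
    }

    [As written (aborts = 1) it prints the verdict 0 and then shows
    H_Hat() halting anyway, which is the situation being argued over.]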



    --
    Copyright 2024 Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

    --- PyGate Linux v1.0
    * Origin: Dragon's Lair, PyGate