• Updated input to LLM systems proving HHH(DD)==0 within assumptions

    From olcott@3:633/10 to All on Sun Oct 12 08:50:05 2025
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>
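A compilable toy approximation of the above, for illustration only (my
own simplification, not an actual simulating analyzer): direct recursive
calls stand in for simulation, and a hypothetical depth cutoff MAX_DEPTH
stands in for the non-terminating-pattern detection of (a)/(c).

    /* Toy stand-in: direct recursive calls replace actual simulation,
       and the cutoff MAX_DEPTH is a hypothetical stand-in for
       detecting the non-terminating pattern of (a)/(c). */
    #include <stdio.h>

    typedef int (*ptr)();

    static int depth = 0;      /* nesting level of HHH's "simulations" */
    #define MAX_DEPTH 2        /* hypothetical abort threshold         */

    int HHH(ptr P)
    {
      if (++depth > MAX_DEPTH) /* (a)/(c): abort to prevent HHH's own  */
      {                        /* non-termination                      */
        --depth;
        return 0;
      }
      int result = P();        /* "simulate" P by calling it directly  */
      --depth;
      return result ? 1 : 0;   /* (b): simulated "return" was reached  */
    }

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      printf("HHH(DD) == %d\n", HHH(DD));
    }

Under these assumptions the program prints HHH(DD) == 0; whether 0 is
the correct answer is exactly what the rest of this thread disputes.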

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer


    --- PyGate Linux v1.0
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From olcott@3:633/10 to All on Mon Oct 13 11:26:49 2025
    On 10/12/2025 11:22 PM, dbush wrote:
    On 10/13/2025 12:12 AM, olcott wrote:
    On 10/12/2025 10:49 PM, dbush wrote:
    On 10/12/2025 11:43 PM, olcott wrote:
    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

These conditions make HHH not a halt decider because they are incompatible with the requirements:

It is perfectly compatible with those requirements except in the case where the input calls its own simulating halt decider.

In other words, not compatible. No "except".



Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
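Written as a single mapping (H here is only a name for the hypothetical
decider that this requirement describes; the formula restates the two
lines above):

    H(\langle X \rangle, Y) =
    \begin{cases}
      1 & \text{if } X(Y) \text{ halts when executed directly} \\
      0 & \text{if } X(Y) \text{ does not halt when executed directly}
    \end{cases}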




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
    }

    int main()
    {
  HHH(DD);
    }

What value should HHH(DD) correctly return?
</Input to LLM systems>


Error: assumes it's possible to design HHH to get a correct answer.


HHH(DD) gets the correct answer within its set of assumptions / premises


Which is incompatible with the requirements for a halt decider:



Yes, but the requirements for a halt decider are inconsistent with reality.


In other words, you agree with Turing and Linz that the following requirements cannot be satisfied:



Sure, and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.


So we're in agreement: no algorithm exists that can tell us
if any arbitrary algorithm X with input Y will halt when
executed directly, as proven by Turing and Linz.

In exactly the same way that "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in a fundamentally incorrect notion of truth:
the false assumption that such an algorithm *does* exist.

Can we correctly say that the color of your car is fifteen feet long?
For the body of analytical truth, coherence is the key and
incoherence rules out truth.


There is nothing incoherent about wanting to know if any
arbitrary algorithm X with input Y will halt when executed directly.

Tarski stupidly thought this exact same sort of thing:
if a truth predicate exists then it could tell whether the
Liar Paradox is true or false. Since it cannot, there must
be no truth predicate.

Correct. If you understood proof by contradiction you wouldn't be
questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

False. It is proven true by the meaning of the words that a finite
    string description of a Turing machine specifies all semantic
    properties of the machine it describes, including whether that
    machine halts when executed directly.


ChatGPT 5.0 was the first LLM able to prove
that this is counter-factual.

    Ah, so you don't believe in semantic tautologies?


    *They are the foundation of this whole system*
Any system of reasoning that begins with a consistent
set of stipulated truths and only applies the
truth-preserving operation of semantic logical entailment
to this finite set of basic facts inherently derives a
truth predicate that works consistently and correctly for
this entire body of knowledge that can be expressed in
language.

The above system is explained in depth to Claude AI here:
https://claude.ai/share/d371aaa1-63fe-4ebb-87bf-db8cf152927f
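A minimal illustrative sketch of this idea, under strong simplifying
assumptions (facts are plain strings, entailment is reduced to
two-premise rules, and the truth predicate is membership in the
deductive closure); the names and facts below are only hypothetical
placeholders:

    #include <stdio.h>
    #include <string.h>

    #define MAX_FACTS 32

    /* Stipulated basic facts (the consistent starting set). */
    static const char *facts[MAX_FACTS] = {
        "Socrates is a man",
        "all men are mortal"
    };
    static int fact_count = 2;

    /* One truth-preserving entailment step: if both premises are
       known then the conclusion becomes known. */
    struct rule { const char *premise1, *premise2, *conclusion; };
    static const struct rule rules[] = {
        { "Socrates is a man", "all men are mortal", "Socrates is mortal" }
    };

    static int known(const char *s)
    {
        for (int i = 0; i < fact_count; i++)
            if (strcmp(facts[i], s) == 0)
                return 1;
        return 0;
    }

    /* Apply the entailment rules until no new facts appear
       (closure of the basic facts under entailment). */
    static void derive_closure(void)
    {
        int changed = 1;
        while (changed) {
            changed = 0;
            for (int i = 0; i < (int)(sizeof rules / sizeof rules[0]); i++) {
                if (known(rules[i].premise1) && known(rules[i].premise2)
                    && !known(rules[i].conclusion) && fact_count < MAX_FACTS) {
                    facts[fact_count++] = rules[i].conclusion;
                    changed = 1;
                }
            }
        }
    }

    /* Truth predicate for this toy body of knowledge:
       true exactly when the sentence is in the deductive closure. */
    static int True(const char *sentence)
    {
        return known(sentence);
    }

    int main(void)
    {
        derive_closure();
        printf("True(\"Socrates is mortal\") == %d\n", True("Socrates is mortal"));
        printf("True(\"Socrates can fly\")   == %d\n", True("Socrates can fly"));
        return 0;
    }

In this toy, anything outside the closure simply maps to 0, which is
one simplified way to read "works consistently and correctly for this
entire body of knowledge".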



    LLM systems are 67-fold more powerful than they were
a year ago because their context window increased from
    3,000 words to 200,000 words. This is how much stuff
    they can simultaneously keep "in their head".
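The 67-fold figure is just the ratio of the two context sizes:

    \frac{200{,}000}{3{,}000} \approx 66.7 \approx 67\text{-fold}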

    It is also very valuable to know that these systems are
    extremely reliable when their reasoning is limited to
semantic entailment over a well-defined set of premises.
    In this case AI hallucination cannot possibly occur.

    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

This conversation has verified the details of the reasoning that proves
    the behavior of the directly executed DD() is outside of
    the domain of the function computed by HHH(DD). It also
    verified that HHH(DD) is correct to reject its input and
    provided all of the reasoning proving that this is correct.


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- PyGate Linux v1.0
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)