• Re: New and improved version of cdecl

    From Keith Thompson@3:633/10 to All on Mon Oct 27 19:59:03 2025
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about
    (based on about 5 minutes browsing the website).

    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based package (like, say, cdecl) on Windows.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Tue Oct 28 04:10:29 2025
    On 27.10.2025 18:44, bart wrote:
    On 27/10/2025 16:35, David Brown wrote:
    On 27/10/2025 12:22, bart wrote:


    /My syntax/ (as in my proposal) is bizarre,

    What was your proposal? - Anyway, it shouldn't be "bizarre"; it's
    under your design-control!

    but actual C type syntax isn't?!

    There were reasons for that choice. And the authors have explained
    them. - This doesn't make their choice any better, though, IMO.


    The latter is possibly the worst-designed feature of any programming
    language ever, certainly of any mainstream language. This is the syntax
    for a pointer to an unbounded array of function pointers that return a pointer to int:

    int *(*(*)[])()

    This, is not bizarre?!

    You need to know the concept behind it. IOW, learn the language and
    you will get used to it. (As with other features or "monstrosities".)

    Even somebody reading it has to figure out which *
    corresponds to which 'pointer to', and where the name might go if using
    it to declare a variable.

    In the LTR syntax I suggested, it would be:

    ref[]ref func()ref int

    The variable name goes on the right. For declaring three such variables,
    it would be:

    ref[]ref func()ref int a, b, c

    Meanwhile, in C as it is, it would need to be something like this:

    int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()

    Or you have to use a workaround and create a named alias for the type
    (what would you call it?):

    typedef int *(*(*T)[])();

    T a, b, c;

    It's a fucking joke.

    Actually, this is a way to (somewhat) control the declaration "mess"
    so that it doesn't propagate into the rest of the source code and
    muddy each occurrence. It's also a good design principle (also when
    programming in other languages) to use names for [complex] types.

    I take the 'typedef' option as a sensible solution to this specific
    problem with C's underlying declaration decisions.

    And yes, I needed to use a tool to get that first
    'int *(*(*)[])()', otherwise I could spend forever in a trial-and-error
    process of figuring out where all those brackets and asterisks go.

    THIS IS WHY such tools are necessary, because the language syntax as it
    stands is not fit for purpose.

    I never used 'cdecl' (as far as I recall). (I recall thinking
    sometimes that such a tool could be useful.) For me it was sufficient
    to use a 'typedef' for complex cases. Constructing such expressions
    is often easier than reading them.

    [...]

    Yes, my ideal would be different from the output of cdecl. No, the
    author is not doing something "wrong". I live in a world where
    programming languages are used by more than one person, and those
    people can have different opinions.

    Find me one person who doesn't think that syntax like int *(*(*)[])()
    is a complete joke.

    Maybe the authors (and all the enthusiastic adherents) of "C"?

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Tue Oct 28 04:23:32 2025
    On 27.10.2025 23:33, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    [...]

    Sorry, "proof by analogy" is usually wrong. If you insist on
    analogies, the right one would be function prototypes: old-style
    function declarations were inherently unsafe, and it was fixed
    by adding new syntax for function declarations and definitions,
    in parallel to the old syntax. Now old-style declarations are
    officially retired. Bart proposed new syntax for all
    declarations to be used in parallel with the old ones, that is
    exactly the same fix as was used to solve the unsafety of old
    function declarations.

    As far as I recall, Dennis Ritchie has written about the practical
    problem with "C" compilers having to support two different versions
    [of the function declaration topic] for compatibility reasons.

    Early and central "misdesigns" are not easy to address; it hurts.

    (That's one difference between the often discredited "design by
    committee" and a more casual growing design from a single person
    or interest group.)

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Tue Oct 28 04:41:15 2025
    On 27.10.2025 21:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    [...]

    In my personal opinion, C's declaration syntax, cleverly based
    on a somewhat loose "declaration follows use" principle,

    IMO that was the idea, and I would object to the word "cleverly".

    When I spoke with students, newbie "C" users, about that, they were
    quite confused, not only by the "same" placement as in expressions
    but also by using the same symbol for conceptually different things.

    Personally I always found it more comprehensible when languages
    use something like, say,

    REF sometype x;

    and

    y = DEREF x

    in the first place.

    If you explain things that way, people understand it much more easily,
    as far as my experience goes.

    is a not
    entirely successful experiment that has caught on extraordinarily
    well, probably due to C's other advantages as a systems programming
    language. I would have preferred a different syntax **if** it had
    been used in the original C **instead of** the current syntax. [...]

    All else being equal, I would prefer a C-like language with clear left-to-right declaration syntax to C as it's currently defined.
    But all else is not at all equal.

    Indeed.


    And I think that a future C that supports *both* the existing
    syntax and your new syntax would be far worse than C as it is now. Programmers would have to learn both. Existing code would not
    be updated. Most new code, written by experienced C programmers,
    would continue to use the old syntax. Your plan to deprecate the
    existing syntax would fail.

    Yes.

    And that's why it will never happen. The ISO C committee would never consider this kind of radical change, even if it were shoehorned
    into the syntax in a way that somehow doesn't break existing code.

    But interestingly, as far as I recall, the C committee did exactly
    that with the function declaration syntax option (back then, when
    going from K&R to a standard). Sure, they might now handle that
    differently, since it's their own standard to change (and not the K&R
    origin [or quasi-standard]).

    Janis

    [...]


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Chris M. Thomasson@3:633/10 to All on Mon Oct 27 23:45:17 2025
    On 10/27/2025 7:59 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about (based on about 5 minutes browsing the website).

    So far, it can be used to cure some "headaches" over in Windows land... ;^)


    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based package (like, say, cdecl) on Windows.


    Well, if what you want is not in that list, you are shit out of luck.
    ;^) It sure seems to build packages from source. For instance, I got
    Cairo compiled and up and fully integrated into MSVC. Pretty nice.

    At least it's there. Although if it took a while to build everything,
    Bart would be pulling his hair out. But it beats manually building
    something that is not meant to be built on Windows, uggg, sometimes,
    double uggg. MinGW, Cygwin, etc... vcpkg has all of them, and uses them
    to build certain things...

    I have built Cairo on Windows, and vcpkg is just oh so easy. Well, keep
    in mind, windows... ;^o


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Chris M. Thomasson@3:633/10 to All on Mon Oct 27 23:47:49 2025
    On 10/27/2025 8:10 PM, Janis Papanagnou wrote:
    On 27.10.2025 18:44, bart wrote:
    On 27/10/2025 16:35, David Brown wrote:
    On 27/10/2025 12:22, bart wrote:


    /My syntax/ (as in my proposal) is bizarre,

    What was your proposal? - Anyway, it shouldn't be "bizarre"; it's
    under your design-control!

    but actual C type syntax isn't?!

    There were reasons for that choice. And the authors have explained
    them. - This doesn't make their choice any better, though, IMO.


    The latter is possibly the worst-designed feature of any programming
    language ever, certainly of any mainstream language. This is the syntax
    for a pointer to an unbounded array of function pointers that return a
    pointer to int:

    int *(*(*)[])()

    This, is not bizarre?!

    You need to know the concept behind it. IOW, learn the language and
    you will get used to it. (As with other features or "monstrosities".)

    Even somebody reading it has to figure out which *
    corresponds to which 'pointer to', and where the name might go if using
    it to declare a variable.

    In the LTR syntax I suggested, it would be:

    ref[]ref func()ref int

    The variable name goes on the right. For declaring three such variables,
    it would be:

    ref[]ref func()ref int a, b, c

    Meanwhile, in C as it is, it would need to be something like this:

    int *(*(*a)[])(), *(*(*b)[])(), *(*(*c)[])()

    Or you have to use a workaround and create a named alias for the type
    (what would you call it?):

    typedef int *(*(*T)[])();

    T a, b, c;

    It's a fucking joke.

    Actually, this is a way to (somewhat) control the declaration "mess"
    so that it doesn't propagate into the rest of the source code and
    muddy each occurrence. It's also a good design principle (also when programming in other languages) to use names for [complex] types.

    I take the 'typedef' option as a sensible solution to this specific
    problem with C's underlying declaration decisions.

    And yes, I needed to use a tool to get that first
    'int *(*(*)[])()', otherwise I could spend forever in a trial-and-error
    process of figuring out where all those brackets and asterisks go.

    THIS IS WHY such tools are necessary, because the language syntax as it
    stands is not fit for purpose.

    I never used 'cdecl' (as far as I recall). (I recall thinking
    sometimes that such a tool could be useful.) For me it was sufficient
    to use a 'typedef' for complex cases. Constructing such expressions
    is often easier than reading them.

    [...]

    Yes, my ideal would be different from the output of cdecl. No, the
    author is not doing something "wrong". I live in a world where
    programming languages are used by more than one person, and those
    people can have different opinions.

    Find me one person who doesn't think that syntax like int *(*(*)[])()
    is a complete joke.

    Maybe the authors (and all the enthusiastic adherents) of "C"?

    Does extern "C" tend to use cdecl?


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 10:27:06 2025
    On 28/10/2025 06:45, Chris M. Thomasson wrote:
    On 10/27/2025 7:59 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 10/27/2025 5:30 PM, Keith Thompson wrote:
    [...]

    I can imagine either an enhanced version of the GNU autotools,
    or a new set of tools similar to it, that could support building
    software from source on Windows.

    https://vcpkg.io/en/packages?query=

    Not bad, well for me, for now. Builds like a charm, so far.

    [...]

    Looks interesting, but I don't think it's quite what I was talking about
    (based on about 5 minutes browsing the website).

    So far, it can be used to cure some "headaches" over in Windows land... ;^)


    It seems to emphasize C and C++ *libraries* rather than applications.
    And I don't see that it can be used to build an existing autotools-based
    package (like, say, cdecl) on Windows.


    Well, if what you want is not in that list, you are shit out of
    luck. ;^) It sure seems to build packages from source. For instance, I
    got Cairo compiled and up and fully integrated into MSVC. Pretty nice.

    At least it's there. Although if it took a while to build everything,
    Bart would be pulling his hair out. But it beats manually building
    something that is not meant to be built on Windows, uggg, sometimes,
    double uggg. MinGW, Cygwin, etc... vcpkg has all of them, and uses them
    to build certain things...

    I have built Cairo on Windows, and vcpkg is just oh so easy. Well, keep
    in mind, windows... ;^o


    PART I

    In the early days of testing my C compiler, I tried to build a
    hello-type test program using GTK2.

    GTK2 (I expect GTK4 is a lot worse!) was a complex library:

    * There were some 700 include files, spread over a dozen or two nested directories

    * Compiling my test involved over 1000 nested #include statements, 550
    unique header files, a dozen include search paths, and 330,000 lines of declarations to process

    * To link the result, GTK2 comes with 50 DLL files, totalling 50MB,
    although not all will be needed. All have version names, so it's not
    just a case of supplying a particular file name; it needs to have the
    correct version suffix.

    I managed this by trial and error. The input to the compiler needs to be:

    * A set of search paths to the needed include files

    * The exact names of the needed DLL files (their location is not needed,
    provided it is part of the Windows 'Path' variable)

    Note we are not building anything from source; it is the simpler task of
    using a ready-built library! The test program might be two dozen lines of C.

    So, how does it all work normally? Apparently it's done with a program
    called 'PKG-CONFIG' which performs some magic based on some 'metadata' somewhere.

    However, this was of no interest to me: I wanted a bare-compiler solution
    with minimal meta-dependencies.

    PART II

    At a different point, I wanted to try GTK2 from my own language. Here
    the sticking point is creating bindings, in my syntax, for the 10,000 functions, types, structs, enums, and macros exported by the library.

    My C compiler has an extension which could do some of that
    automatically: it processes the library headers (via the method in Part
    I), and generates a single, flattened interface file containing all the
    necessary information.
    For GTK2, this was a single 25Kloc file, which I called gtk2.m. In my language, I would compile the library by having 'import gtk2' in one
    place in the program.

    However, 4000 of those 25000 lines were C macros; simple #defines could
    be converted, but the rest needed manual translation: a big task. (The
    method has worked however for smaller libraries like OpenGL and SDL2.)

    But here's the interesting thing: what if, instead of generating bindings
    in my syntax, I generated them as C?

    Then, instead of 700 headers, 1/3 million lines and dozens of folders,
    the GTK2 API could be expressed in a single 25Kloc header file.

    Why isn't such a process done anyway by the suppliers of the library?

    (SDL2 would also reduce from 80 headers of 50Kloc, to one header of 3Kloc.)

    PART III

    This was an idea I had for my language, but it never got implemented.

    At this point, a simple external library involves one or more DLL files,
    and an interface file needed by the compiler, which gives the API info.

    My idea was, why not put that interface file inside the DLL? Then you
    submit that DLL name to the compiler, and it can extract the necessary
    info, either via some special function, or an exported set of variables.

    Where the DLL structure is complex, like GTK2, there could be an
    accompanying small DLL that replaces those 700 files of headers. One
    with an obvious name, like 'gtk2.dll'.

    (In my language, such input gets specified once inside the lead module. Building any app is always 'mm prog'.)

    However, one remaining problem is finding where the DLL is located.

    Again, the idea could work in C too.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 11:16:11 2025
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    The build-times have rarely been an issue; never in private context,
    and in professional contexts with MLOCS of code these things have
    been effectively addressed.

    Not really. There are the workarounds and compromises that I listed: compilation is avoided as much as possible. For that you need to use independent compilation, and require dependency graphs and external
    tools to manage the process.

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C? That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower, the same process would only manage 1.4 lines per second, and it would take 13
    HOURS, to create an interactive program that explained what 'int (*(*(*)))[]()' (whatever it was) might mean.

    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    Bear in mind that CDECL (like every finished product you build from
    source) is a working, debugged program. You shouldn't need to do that
    much analysis of it. And here, its performance is not critical either:
    you don't even need fast code from it.



    (I recall you were unfamiliar with make
    files, or am I misremembering?)

    I know makefiles. Never used them, never will. You might recall that I
    create my own solutions.


    Now imagine further if the CPython interpreter was itself written in and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages

    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and capable
    as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    Speed is a topic, but as I wrote you have to put it in context

    Actually, the real topic is slowness. I'm constantly coming across
    things which I know (from half a century working with computers) are far slower than they ought to be.

    But I'm also coming across people who seem to accept that slowness as
    just how things are. They should question things more!

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you can't tell either.

    I'm pretty sure there are quite a few million users of scripting languages.


    I'm using "scripting languages" for very specific types of tasks -
    and keep in mind that there's no clean definition of that term!

    They have typical characteristics as I'm quite sure you're aware. For
    example:

    * Dynamic typing
    * Run from source
    * Instant edit-run cycle
    * Possible REPL
    * Uncluttered syntax
    * Higher level features
    * Extensive libraries so that you can quickly 'script' most tasks

    So, interactivity and spontaneity. But they also have cons:

    * Slower execution
    * Little compile-time error checking
    * Less control (of data structures for example)



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Tue Oct 28 14:56:39 2025
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    A few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at the suggestion to proceed with installation. After another
    second or three I got it installed. Then I tried it and even managed to
    get a couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understood why the build process is non-trivial.
    It's mostly because of interactivity.
    It's very hard to build a decent interactive program in a portable subset
    of C. Or maybe even impossible rather than hard.
    Personally, I consider the interactivity of cdecl a UI mistake.
    For me, as a user, it's a minor one, because I can easily ignore the
    interactivity and use it as a normal command-line utility:
    $ cdecl -e "FILE* uu"
    declare uu as pointer to FILE

    But if I were tasked with porting cdecl to a non-unixy environment,
    then the interactivity would be the biggest obstacle.





    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 13:18:56 2025
    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understood why the build process is non-trivial.
    It's mostly because of interactivity.
    It's very hard to build a decent interactive program in a portable subset
    of C. Or maybe even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The program below, which is in standard C and runs on both Windows and
    Linux, should give you all the interactivity needed for a program like CDECL.

    It reads a line of input, and prints something based on that. In between
    would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            if (fgets(buffer, sizeof(buffer), stdin) == NULL)
                break;                             /* EOF or read error */
            buffer[strcspn(buffer, "\n")] = '\0';  /* strip the newline */
            if (buffer[0] == 'q') break;

            printf("Input was: %s\n", buffer);
        }
    }






    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Tue Oct 28 15:59:29 2025
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    Its usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated into other software,
    rather than as a stand-alone language. It is particularly popular in
    the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as a valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language. It certainly
    /was/ a useful programming language, long ago, but it has not been
    seriously used outside of historical hobby interest for half a century.
    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language. Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As
    C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".

    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language), but it is very
    much a niche language.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Tue Oct 28 15:03:41 2025
    bart <bc@freeuk.com> writes:
    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most
    of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understood why the build process is non-trivial.
    It's mostly because of interactivity.
    It's very hard to build a decent interactive program in a portable subset
    of C. Or maybe even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The program below, which is in standard C and runs on both Windows and Linux, should give you all the interactivity needed for a program like CDECL.

    It reads a line of input, and prints something based on that. In between would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>
    #include <string.h>

    int main() {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            fgets(buffer, sizeof(buffer), stdin);
            if (buffer[0] == 'q') break;

            printf("Input was: %s\n", buffer);
        }
    }

    Where is the command line editing and history support
    in this trivial application?

    Use libreadline or libedit and you'll get command-line
    history and editing, compatible with the standard
    unix/linux shells (useful shells, unlike the DOS
    command line or soi-disant PowerShell).

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Tue Oct 28 16:20:48 2025
    On 27/10/2025 23:33, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 26/10/2025 16:12, bart wrote:
    On 25/10/2025 16:18, David Brown wrote:
    On 25/10/2025 14:51, bart wrote:

    This is another matter. The CDECL docs talk about C and C++ type
    declarations being 'gibberish'.

    What do you feel about that, and the *need* for such a substantial
    tool to help understand or write such declarations?

    I would rather have put some effort into fixing the syntax so that
    such tools are not necessary!

    And I'd love to hear your plan for "fixing" the syntax of C - noting
    that changing the syntax of C means getting the C standards committee
    to accept your suggestions, getting at least all major C compilers to
    support them, and getting the millions of C programmers to use them.

    I have posted such proposals in the past (probably before 2010).


    No, you have not.

    What you have proposed is a different way to write types in
    declarations, in a different language. That's fine if you are making a
    different language. (For the record, I like some of your suggestions,
    and dislike others - my own choice for an "ideal" syntax would be
    different from both your syntax and C's.)

    I asked you if you had a plan for /fixing/ the syntax of /C/. You don't.

    As an analogy, suppose I invited you - as an architect and builder - to
    see my house, and you said you didn't like the layout of the rooms, the
    kitchen was too small, and you thought the cellar was pointless
    complexity. I ask you if you can give me a plan to fix it, and you
    respond by telling me your own house is nicer.

    Sorry, "proof by analogy" is usually wrong.

    I agree - I wasn't trying to "prove" anything. Analogies can be
    illustrative. Bart had claimed to have a "plan to fix C", without understanding what that could mean, and I was trying to find a way to
    show him how absurd that was. (That is, his claim to have a plan to fix
    C was absurd, not necessarily his alternative syntaxes for declarations.)

    If you insist on
    analogies, the right one would be function prototypes: old-style
    function declarations were inherently unsafe, and it was fixed
    by adding new syntax for function declarations and definitions,
    in parallel to the old syntax. Now old-style declarations are
    officially retired. Bart proposed new syntax for all
    declarations to be used in parallel with the old ones; that is
    exactly the same fix as was used to solve the unsafety of old
    function declarations.
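    The prototype analogy can be made concrete. With an old-style declaration
    such as `double half();`, a call `half(2)` passes an int where the
    function expects a double, which is undefined behaviour; with a prototype
    in scope, the compiler checks the call and converts the argument. A
    minimal sketch (the function name is illustrative, not from the thread):

    ```c
    #include <stdio.h>
    #include <assert.h>

    /* Prototype: the compiler now checks argument count and types, and
       converts the int literal 2 to 2.0 at the call site.  The old-style
       declaration `double half();` would promise nothing about arguments. */
    double half(double x);

    int main(void) {
        double r = half(2);      /* 2 converted to 2.0 thanks to the prototype */
        assert(r == 1.0);
        printf("%g\n", r);
        return 0;
    }

    double half(double x) { return x / 2.0; }
    ```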


    The function prototype syntax was an enhancement to the existing syntax,
    and could be used happily in parallel with it. And it was developed
    within the community of the C language developers and implementers (it
    was before ANSI/ISO standardisation). Bart's suggestion turns existing
    C syntax upside down, is incompatible with everything - in particular, incompatible with the philosophy and intention behind C's syntax - and
    is the product of one person whose motivation seems to be hating C and
    whining about it. So it is a very different situation.

    IMO the worst C problem is standard process. Basically, once
    a large vendor manages to subvert the language it gets
    legitimized and part of the standard. OTOH old warts are
    preserved for long time. Worse, new warts are introduced.


    Backwards compatibility is simultaneously the best part of C, and the
    worst part of C.

    As an example, VM types (variably modified types) were a big
    opportunity to make array access
    safer. But the version which is in the standard skilfully
    sabotages potential compiler attempts to increase safety.

    If you look carefully, there are several places in the standard
    that effectively forbid static or dynamic error checks. Once
    you add extra safety checks your implementation is
    noncompliant.
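    Assuming "VMT-s" here means C99's variably modified types (VLA parameter
    types), a small sketch of the feature in question: the bound is part of
    the parameter's type, so a checking implementation could in principle
    verify accesses against n, even though the standard requires nothing of
    the sort and the array still decays to a plain pointer in practice:

    ```c
    #include <stdio.h>

    /* The bound n is part of a's variably modified type.  A bounds-
       checking implementation *could* use it to validate a[i]; a
       conforming one merely treats a as int*. */
    static int sum(int n, int a[n]) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        int v[4] = {1, 2, 3, 4};
        printf("%d\n", sum(4, v));
        return 0;
    }
    ```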


    I certainly have no problem finding countless things in C that I would
    have preferred to be done differently. I don't know any serious C
    programmer who could not do the same - but they would all come up with different points (with plenty of overlap).

    It is likely that any standardized language is eventually
    doomed to failure. This is pretty visible with Cobol,
    but C seems to be on a similar trajectory (though at a much
    earlier stage).


    It takes a /very/ broad definition of "failure" to encompass C!

    But I think Stroustrup was spot-on with his comment "There are two kinds
    of programming languages - the ones everyone complains about, and the
    ones nobody uses".



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Tue Oct 28 16:05:47 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    Its usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other software,
    rather than as a stand-alone language. It is particularly popular in
    the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language. It certainly
    /was/ a useful programming language, long ago, but it has not been
    seriously used outside of historical hobby interest for half a century.
    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language. Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As
    C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated
    programs, the language was a failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 16:16:24 2025
    On 28/10/2025 15:03, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 12:56, Michael S wrote:
    On Sun, 26 Oct 2025 15:45:34 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 26 Oct 2025 14:56:56 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 24 Oct 2025 13:20:45 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    [...]
    Free software still has to be usable. cdecl is usable for most >>>>>>>> of us.

    [...]


    I'd say that it is not sufficiently usable for most of us to
    actually use it.

    Why do you say that?

    I would guess that less than 1 per cent of C programmers ever used
    it and less than 5% of those who used it once continued to use it
    regularly.
    All numbers pulled out of thin air...

    So it's about usefulness, not usability. You're not saying that
    it works incorrectly or that it's difficult to use (which would be
    usability issues), but that the job it performs is not useful for
    most C programmers.

    (One data point: I use it occasionally.)


    Few minutes ago I typed 'pacman -S cdecl' at my msys2 command prompt.
    Then I hit Y at suggestion to proceed with installation. After another
    second or three I got it installed. Then tried it and even managed to
    get a couple of declarations properly explained.
    So, now I also belong to less than 1 per cent :-)

    In the process I finally understand why the build process is non-trivial.
    It's mostly because of interactivity.
    It's very hard to build a decent interactive program in the portable subset
    of C. Or maybe even impossible rather than hard.


    I don't understand. What's hard about interactive programs?

    The program below, which is in standard C and runs on both Windows and
    Linux, should give you all the interactivity needed for a program like CDECL.

    It reads a line of input, and prints something based on that. In between
    would go all the non-interactive processing that it needs to do (parse
    the line and so on).

    So what's missing that could render this task impossible?

    (Obviously, it will need a keyboard and display!)

    ----------------------------------------
    #include <stdio.h>

    int main(void) {
        char buffer[1000];

        puts("Type q to quit:");

        while (1) {
            printf("Cdecl> ");
            fflush(stdout);                      /* make sure the prompt appears */
            if (!fgets(buffer, sizeof buffer, stdin))
                break;                           /* stop on EOF or read error */
            if (buffer[0] == 'q')
                break;

            printf("Input was: %s", buffer);     /* buffer still holds its '\n' */
        }
    }

    Where is the command line editing and history support
    in this trivial application?

    On Windows, that seems to work anyway: you can edit and navigate within
    a line, or use Up/Down to retrieve previous lines.

    On WSL, only backspace works; other keys show their escape sequences.
    It's the same with the RPi.

    I'd never noticed before that Linux line input doesn't provide these fundamentals.

    Still, it will suffice for the simple task that Cdecl has to do.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Tue Oct 28 17:03:33 2025
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doesn't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    To get around that AV, you either need to have some clout, be

    The way you do that is by developing a compelling program that helps
    users get their work done and becomes popular, so users (and their
    managers) can then convince their IT that they need it.

    In my case, rather than supply a monolithic executable (EXE file, which is either the app itself or some sort of installer), I've played around

    You are perhaps too hastily skipping over the idea of "some sort of
    installer".

    Yes, use an installer for Windows if you're doing something
    serious that is offered to the public, rather than just to a handful of
    friends or customers.

    I use NSIS myself.

    Creating an installer is a PITA, but once you close the iteration loop
    on that, you hardly have to touch it, if the structure of your
    deliverables stays the same from release to release.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Tue Oct 28 20:00:57 2025
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    Its usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other
    software, rather than as a stand-alone language. It is particularly
    popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious
    tools and, more importantly, real-world code in the language. It
    certainly /was/ a useful programming language, long ago, but it has
    not been seriously used outside of historical hobby interest for
    half a century. And unlike other ancient languages (like Cobol or
    Fortran) there is no code of relevance today written in the
    language. Original Algol was mostly used in research, while Algol
    68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
    the reliable creation of sophisticated programs, the language was a
    failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).


    Is B6500 ALGOL related to A68?
    My impression from the Wikipedia article is that B5000 ALGOL was a
    proprietary offspring of A60. Wikipedia says nothing about the sources of
    B6500 ALGOL, but considering that Burroughs was an American enterprise,
    and that back at the time ALGOL 68 was widely considered in the US a failed
    European experiment, I would guess that B6500 ALGOL is derived from
    B5000 ALGOL rather than from A68.





    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Tue Oct 28 18:01:00 2025
    On 2025-10-26, Michael S <already5chosen@yahoo.com> wrote:
    I can't imagine why anyone would write cdecl (if it is written in C)
    such that it's anything but a maximally conforming ISO C program,
    which can be built like this:

    make cdecl

    without any Makefile present, in a directory in which there is just
    one file: cdecl.c.
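    That `make cdecl` with no Makefile works because of make's built-in
    implicit rules: for a target with no rule of its own, GNU make looks for
    cdecl.c and compiles and links it with roughly `$(CC) $(CFLAGS)
    $(LDFLAGS) cdecl.c -o cdecl`. A throwaway demonstration (the cdecl.c
    here is a stand-in, not the real cdecl source):

    ```shell
    mkdir -p /tmp/implicit-rule-demo
    cd /tmp/implicit-rule-demo

    # A stand-in cdecl.c -- any single-file ISO C program would do.
    printf 'int main(void){return 0;}\n' > cdecl.c

    # No Makefile anywhere: make's built-in rule compiles and links it.
    make cdecl
    ./cdecl && echo "built and ran"
    ```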


    You are exaggerating.
    There is nothing wrong with multiple files and small nice manually

    Yes, I'm exaggerating; of course I can imagine using more than
    one file for cdecl.

    I would say that if you need two files to write cdecl, and
    one of them is not an accurate grammar file for a parser generator
    (needing to be a separate file due to being in that notation),
    which handles things like int (*p)(int (*q)(void * const x)),
    you've massively fucked it up.

    In that regard autotools resemble Postel's principle - the most harmful

    Postel's principle is awful, requiring paragraphs of apologetic
    defense to explain what Postel really meant and how it made sense in his context, so that it wasn't actually idiotic.

    Programs should be conservative in what they generate, and loudly reject
    any input that is out of spec.

    Programs that accept crap are good for business, because naive
    customers just see that those programs "work" with some input
    that other programs "don't handle".

    They are harmful to the ecosystem, creating a race for the bottom
    competition in which specs fall by the wayside while programs struggle
    to handle buggy inputs, and nobody knows what is correct any more.
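    The "loudly reject" stance can be illustrated with a tiny strict parser:
    instead of letting strtoul quietly salvage what it can from its input,
    the function rejects empty strings, leading whitespace or signs, trailing
    garbage, and overflow outright. A sketch (the function name is mine, not
    from any library):

    ```c
    #include <ctype.h>
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Accept only a complete, in-range, unsigned decimal number.
       Anything else -- empty string, leading space or sign, trailing
       junk, out-of-range value -- is rejected instead of half-parsed. */
    static bool parse_u_strict(const char *s, unsigned long *out) {
        if (!isdigit((unsigned char)s[0]))   /* bars "", " 1", "+1", "-1" */
            return false;
        errno = 0;
        char *end;
        unsigned long v = strtoul(s, &end, 10);
        if (*end != '\0')                    /* trailing garbage: reject */
            return false;
        if (errno == ERANGE)                 /* overflow: reject */
            return false;
        *out = v;
        return true;
    }

    int main(void) {
        unsigned long v;
        printf("%d\n", parse_u_strict("123", &v));     /* 1: in spec */
        printf("%d\n", parse_u_strict("123abc", &v));  /* 0: trailing garbage */
        printf("%d\n", parse_u_strict("", &v));        /* 0: empty */
        return 0;
    }
    ```

    A Postel-style parser would instead return 123 for "123abc", and two such
    programs would soon disagree about what the input "means".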

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Tue Oct 28 18:28:21 2025
    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    Its usefulness is demonstrated by its widespread use. It is mostly
    used as a scripting or automation language integrated in other
    software, rather than as a stand-alone language. It is particularly
    popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious
    tools and, more importantly, real-world code in the language. It
    certainly /was/ a useful programming language, long ago, but it has
    not been seriously used outside of historical hobby interest for
    half a century. And unlike other ancient languages (like Cobol or
    Fortran) there is no code of relevance today written in the
    language. Original Algol was mostly used in research, while Algol
    68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for
    the reliable creation of sophisticated programs, the language was a
    failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Tue Oct 28 20:49:30 2025
    On Tue, 28 Oct 2025 18:28:21 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    Lua is not Algol 68.

    Correct.
    Lua is a useful programming language.

    (I have no stakes here. Never used it.)


    It's usefulness is demonstrated by its widespread use. It is
    mostly used as a scripting or automation language integrated in
    other software, rather than as a stand-alone language. It is
    particularly popular in the gaming industry.

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the
    modern world is demonstrated by the almost total non-existence of
    serious tools and, more importantly, real-world code in the
    language. It certainly /was/ a useful programming language, long
    ago, but it has not been seriously used outside of historical
    hobby interest for half a century. And unlike other ancient
    languages (like Cobol or Fortran) there is no code of relevance
    today written in the language. Original Algol was mostly used in
    research, while Algol 68 was mostly not used at all. As C.A.R.
    Hoare said, "As a tool for the reliable creation of sophisticated
    programs, the language was a failure".

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.


    I read your answer as "I don't know; if you are interested, RTFM yourself". Is that a correct interpretation?

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Tue Oct 28 20:14:51 2025
    On 28.10.2025 15:59, David Brown wrote:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    [ snip Lua statements ]

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language.

    Obviously you are mixing the terms usefulness and dissemination
    (its actual use). Please accept that I'm differentiating here.

    There's quite some [historic] languages that were very useful but
    couldn't disseminate. (For another prominent example cf. Simula,
    that invented not only the object oriented principles with classes
    and inheritance, was a paragon for quite some OO-languages later,
    and it made a lot more technical and design inventions, some even
    now still unprecedented.) It's a pathological historic phenomenon
    that programming languages from the non-US American locations had
    inherent problems to disseminate especially back these days!

    Reasons for dissemination of a language are multifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historic
    documents and also in later ruminations about computing history)

    It certainly /was/ a useful programming language, long ago,

    ...as you seem to basically agree to here. (At least as far as you
    couple usefulness with dissemination.)

    but it has not been
    seriously used outside of historical hobby interest for half a century.

    (Make that four decades. It's been used in the mid 1980's. - Later
    I didn't follow it anymore, so I cannot tell about the 1990's.)

    (I also disagree in your valuation "hobby interest"; for "hobbies"
    there were easier accessible languages used, not systems that were
    back these days mainly available on mainframes only.)

    As far as you mean in programming software systems, that may be true;
    I cannot tell that I'd have an oversight who did use it. I've read
    about various applications, though; amongst them that it's even been
    used as a systems programming language (where I was astonished about).

    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language.

    Probably right. (That would certainly be also my guess.)

    Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated programs, the language was a failure".

    I don't know the context of his statement. If you know the language
    you might admit that reliable software is exactly one strong property
    of that language. (Per se already, but especially so if compared to
    languages like "C", the language discussed in this newsgroup, with an
    extremely large dissemination and also impact.)


    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times

    The point was that the language per se was and is useful. But its
    actual usage for developing software systems seems to have been of
    little importance, and even more so it's currently of no importance,
    without doubt.

    (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language),

    There's more than the gcc folks. (I've heard that gcc has taken some substantial code from Genie, an Algol 68 "compiler-interpreter" that
    is still maintained. BTW; I'm for example using that one, not gcc's.)

    but it is very much a niche language.

    It's _functionally_ a general purpose language, not a niche language
    (in the sense of "special purpose language"). Its dissemination makes
    it to a "niche language", that's true. It's in practice just a dead
    language. It's rarely used by anyone. But it's a very useful language.

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Tue Oct 28 20:32:14 2025
    On 28.10.2025 19:00, Michael S wrote:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendants of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    I would have to look that up myself, but in older literature I've
    seen the all-caps "ALGOL" mostly (only?) in context of Algol 60.

    I also wouldn't expect that Burroughs is of any relevance nowadays.

    IMO it anyway doesn't invalidate the fact that Algol 68 is a dead
    language nowadays, certainly in its practical use, and otherwise
    also mostly forgotten.

    Janis

    My impression from the Wikipedia article is that B5000 ALGOL was a
    proprietary offspring of A60. Wikipedia says nothing about the sources of
    B6500 ALGOL, but considering that Burroughs was an American enterprise,
    and that back at the time ALGOL 68 was widely considered in the US a failed
    European experiment, I would guess that B6500 ALGOL is derived from
    B5000 ALGOL rather than from A68.






    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From James Kuyper@3:633/10 to All on Tue Oct 28 17:34:27 2025
    On 2025-10-27 22:35, Janis Papanagnou wrote:
    been effectively addressed. (I recall you were unfamiliar with make
    files, or am I misremembering?)

    He's heard of make files, and many people have tried to explain them to
    him, but his comments about them indicate that he completely
    misunderstands them, to a degree that I find hard to fathom. It's
    similar to the unbelievable degree of his misunderstandings of C. You
    don't have to like C - many don't - but if you're going to use it you
    should try to understand it, and his preferences make it impossible for
    him to do so.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Tue Oct 28 14:59:05 2025
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    In this particular instance, you wrote that "we'd **all** be using easy dynamic languages" (emphasis added).

    Janis replied "Certainly not." -- meaning that we would not **all** be
    using easy dynamic languages. Janis is correct if there are only a few
    people, or even one person, who would not use easy dynamic languages.

    In reply to that, you wrote that **you** would use such languages --
    which is fine and dandy, but it doesn't refute what Janis wrote.

    Nobody at any time claimed that *nobody* would use easy dynamic
    languages. Obviously some people do and some people don't. If speed
    were not an issue, that would still be the case, though it would likely
    change the numbers. (There are valid reasons other than speed to use non-dynamic languages.)

    Are you with me so far?

    You then wrote:

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    That's wrong. I'll assume it was an honest mistake. If you suggested
    that even one other person might also have the same desire, I don't
    think anyone would dispute it. *Of course* there are plenty of people
    who want to use dynamic languages, and there would be more if speed were
    not an issue. As you have done before, you make incorrect assumptions
    about other people's thoughts and motives.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    The "certainly not" was in response to your claim that we would ALL
    be using dynamic languages, a claim that was at best hyperbole. Nobody
    has claimed to know everyone else's mindset.

    You misunderstood what Janis wrote. It happens to all of us. You just
    need to be aware that what Janis wrote was not what you thought Janis
    wrote, and you have reacted to something nobody said -- and not for the
    first time.

    This post is likely to be a waste of time, but I'm prepared to be
    pleasantly surprised.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 22:26:29 2025
    On 28/10/2025 17:03, Kaz Kylheku wrote:
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to a Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doesn't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    There seems little problem with installing well-known compilers.

    Windows' AV seems to use AI methods to detect viruses which can give
    false positives (there is an 'ai' tag on the report code shown). So I
    guess 'gcc' etc must pass.

    Anyway these days I don't deal with non-technical end users. People
    should know how to build programs. Or I had assumed they did.

    Although I'd gone to a lot of trouble to ensure my single-file C
    distributions are as easy to build as hello.c (on Windows, that is the
    case), I found out something interesting:

    Some people don't actually know how to compile hello.c! They know only
    how to type 'make', and some argue that is actually simpler in that you
    only type one thing instead of two or three.

    I was rather surprised: I'd reduced the job of installing a kitchen to hammering in just one nail so that you can trivially DIY it, but some
    people don't know how to use a hammer.
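    To make the "one thing instead of two or three" comparison concrete, here is a sketch using a hypothetical hello.c (file names are illustrative): compiling directly is one command, and a one-rule Makefile merely wraps that same command so users can just type 'make'.

    ```shell
    # Create a hypothetical hello.c to build.
    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello"); return 0; }
    EOF

    # The direct route: one compiler invocation, then run it.
    cc -o hello hello.c
    ./hello

    # The 'make' route: a one-rule Makefile wrapping that same command,
    # so the user's entire interface becomes the single word "make".
    printf 'hello: hello.c\n\tcc -o hello hello.c\n' > Makefile
    make
    ./hello
    ```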


    To get around that AV, you either need to have some clout, be

    The way you do that is by developing a compelling program that helps
    users get their work done and becomes popular, so users (and their
    managers) can then convince their IT that they need it.

    In my case, rather than supply a monolithic executable (an EXE file, which
    is either the app itself or some sort of installer), I've played around

    You are perhaps too hastily skipping over the idea of "some sort of installer".

    Yes, use an installer for Windows if you're doing something
    serious that is offered to the public, rather than just to a handful of friends or customers.

    An installer is just an executable like any other, at least if it has a
    .EXE extension.

    If you supply a one-file, self-contained ready-to-run application, then
    it doesn't really need installing. Wherever it happens to reside after downloading, it can happily be run from there!

    The only thing that's needed is to make it so that it can be run from
    anywhere without needing to type its path. But I can't remember any apps
    I've installed recently that seem to get that right, even with a
    long-winded installer:

    It might go through a long process of perhaps several minutes. It says
    it's installed, you type (what you assume to be) its name on the command
    line, and you get: File not found. It doesn't even tell where it
    installed it, or its actual EXE name.

    So my stuff is no worse. I just don't think anybody cares anymore; most
    people use GUI apps launched via Windows menus.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Tue Oct 28 23:14:48 2025
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of hand:

    Build speed is never a problem - ever. The speed of any language
    implementation is never a concern either.

    Despite describing all the work that has gone on with making
    optimising compilers, faster linkers, tracing-JIT interpreters etc,
    all of which suggest that some people think these are very much a
    problem, that cuts no ice at all.

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    So I instead gave an example of Tiny C building Lua, and running the
    test under Lua, but that was no good either:

    "Lua is not Algol68".

    It is just impossible to get through. He is never going to admit that A68G
    is rather sluggish in its performance (I guess suggesting optimised C
    might be faster than A68G won't work either, since C isn't Algol68!)

    It's rather frustrating. It's even more frustrating when you take his
    side and think I'm the one who needs convincing about anything.

    I made this remark:

    This is why many like to use scripting languages
    as those don't have a discernible build step.

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you can't tell either.

    And now you have joined in, to back him up!



    In this particular instance, you wrote that "we'd **all** be using easy dynamic languages" (emphasis added).

    Janis replied "Certainly not." -- meaning that we would not **all** be
    using easy dynamic languages. Janis is correct if there are only a few people, or even one person, who would not use easy dynamic languages.

    You're still on about the logic and trying to prove that JP was right
    and I was wrong.

    JP is trying to trash everything I say and everything I do.



    In reply to that, you wrote that **you** would use such languages --
    which is fine and dandy, but it doesn't refute what Janis wrote.

    Nobody at any time claimed that *nobody* would use easy dynamic
    languages. Obviously some people do and some people don't. If speed
    were not an issue, that would still be the case, though it would likely change the numbers. (There are valid reasons other than speed to use non-dynamic languages.)

    Are you with me so far?

    You then wrote:

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    That's wrong. I'll assume it was an honest mistake. If you suggested
    that even one other person might also have the same desire, I don't
    think anyone would dispute it. *Of course* there are plenty of people
    who want to use dynamic languages, and there would be more if speed were
    not an issue. As you have done before, you make incorrect assumptions
    about other people's thoughts and motives.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    The "certainly not" was in response to your claim that we would ALL
    be using dynamic languages, a claim that was at best hyperbole. Nobody
    has claimed to know everyone else's mindset.

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the
    projects I work on are worthless. And any results I get can be
    dismissed. Meanwhile he's a 'professional', as stated many times.

    Maybe you can make up your own mind: here's a survey of mostly
    interpreted languages, all running the same Fibonacci benchmark:

    https://www.reddit.com/r/Compilers/comments/1jyl98f/fibonacci_survey/

    My products are marked with "*". You can see that the fastest purely interpreted language is one of mine.
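    For context, the benchmark used in that survey is the classic doubly recursive Fibonacci; a minimal C rendering (an assumption about the exact form used there) looks like this:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Doubly recursive Fibonacci: dominated by function-call overhead,
       which is why it is a popular stress test for interpreters. */
    static long fib(int n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        assert(fib(10) == 55);               /* sanity check */
        printf("fib(30) = %ld\n", fib(30));  /* 832040 */
        return 0;
    }
    ```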

    JP won't accept any of this, even if you took my stuff out, because he contends that you can't compare different languages.

    This post is likely to be a waste of time, but I'm prepared to be
    pleasantly surprised.

    *I'm* waiting to be pleasantly surprised by you agreeing with me for a change.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Wed Oct 29 00:04:13 2025
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 28/10/2025 17:03, Kaz Kylheku wrote:
    On 2025-10-28, bart <bc@freeuk.com> wrote:
    On 27/10/2025 20:52, Kaz Kylheku wrote:
    On 2025-10-27, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Yes, but: the development and build procedures HAVE BEEN BUILT AROUND UNIX.

    So they are utterly dependent on them. So much so that it is pretty
    much impossible to build this stuff on any non-UNIX environment,
    unless that environment is emulated. That is what happens with WSL,
    MSYS2, CYGWIN.
    [...]

    **Yes, you're right**.

    The GNU autotools typically work smoothly when used on Unix-like
    systems. They can be made to work nearly as smoothly under Windows
    by using an emulation layer such as WSL, MSYS2, or Cygwin. It's very
    difficult to use them on pure Windows.

    The way I see the status quo in this matter is this: cross-platform
    programs originating or mainly focusing on Unix-likes require effort
    /from their actual authors/ to have a native Windows port.

    Whereas when such programs are ported to a Unix-like which their
    authors do not use, it is often possible for the users to get it
    working without needing help from the authors. There may be some
    patch to upstream, and that's about it.

    Also, a proper Windows port isn't just a way to build on Windows.
    Nobody does that. Windows doesn't have tools out of the box.

    When you seriously commit to a Windows port, you provide a binary build
    with a proper installer.

    The problem with a binary distribution is AV software on the user's
    machine which can block it.

    Well, then you're fucked. (Which, anyway, is a good general adjective
    for someone still depending on Microsoft Windows.)

    The problem with source distribution is that users on Windows don't
    have any tooling. To get tooling, they would need to install binaries.

    There seems little problem with installing well-known compilers.

    If you think that is the case, then you can make an installer which
    bundles some known compiler, and your source code ... and so it goes.

    At install time, it builds the program.

    The user doesn't care how the program came to be there.

    (But even programs you build on the Windows machine itself can trigger antivirus ...)

    An installer is just an executable like any other, at least if it has a
    .EXE extension.

    Yes and, similarly, "there seems little problem with installing
    well-known" installers.

    If you supply a one-file, self-contained ready-to-run application, then
    it doesn't really need installing. Wherever it happens to reside after downloading, it can happily be run from there!

    Yes; that would be nice. Many people get PuTTY.exe that way, for
    instance.

    The only thing that's needed is to make it so that it can be run from anywhere without needing to type its path. But I can't remember any apps I've installed recently that seem to get that right, even with a
    long-winded installer:

    I did that for the Windows port of the TXR language. The installer
    updates PATH and sends the Windows message to running apps about the environment change. IIRC, existing cmd.exe instances pick that up.

    The generated uninstall.exe will take it right out.

    I've not looked at this in ages. I seem to recall there is a check
    against inserting the same PATH entry multiple times.

    Anyway, once you have that working, it works.

    In my inst.nsi, in Section "TXR" it looks like this:

    ${If} $AccountType == "Admin"
        ${EnvVarUpdate} $0 "PATH" "A" "HKLM" "$INSTDIR\txr\bin"
    ${Else}
        ${EnvVarUpdate} $0 "PATH" "A" "HKCU" "$INSTDIR\txr\bin"
    ${Endif}

    And in Section "Uninstall" the removal looks like this:

    ${If} $AccountType == "Admin"
        ${un.EnvVarUpdate} $0 "PATH" "R" "HKLM" "$INSTDIR\bin"
    ${Else}
        ${un.EnvVarUpdate} $0 "PATH" "R" "HKCU" "$INSTDIR\bin"
    ${Endif}

    Thus everything is done by this EnvVarUpdate, and its un.EnvVarUpdate counterpart.

    These two environment update functions come from an "env.nsh" file that
    is not part of NSIS; it is a utility developed by multiple authors: Cal
    Turney, Amir Szekely, Diego Pedroso, Kevin English, Hendri Adriaens and
    others.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Tue Oct 28 18:48:33 2025
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!
    I'll give this one more try.
    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    No, I'm talking to you. It turns out that was a mistake.

    My post was **only** about your apparent confusion about a single
    statement, quoted above. I wasn't talking about JP personally, or about
    any of his other interactions with you. I explained in great detail
    what I was referring to. You ignored it.

    You seem unwilling or unable to focus on one thing.

    He (I assume) always dismisses every single one of my arguments out of hand:

    Build speed is never a problem - ever. The speed of any language
    implemention is never a concern either.

    And here you are putting words in other people's mouths.

    I think your goal is to argue, not to do anything that might result in
    agreement or learning.

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Wed Oct 29 06:57:10 2025
    On 28.10.2025 12:16, bart wrote:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    Are you hunting for a language war discussion? - I won't start it here.
    If you want, please start an appropriate topic in comp.lang.misc or so.

    [...]

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C? That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower,

    We are not in these days any more. Nowadays there's much more complex
    software; some inherently badly designed software, and in other cases
    they might not care about tweaking the last second out of a process
    (for various reasons). So this comparison isn't really contributing
    anything here.

    the same
    process would only manage 1.4 lines per second, and it would take 13
    HOURS to create an interactive program that explained what 'int (*(*(*)))[]()' (whatever it was) might mean.

    If that's the sole task of the program the speed is not very appealing.
    But I had not looked into the code, the algorithms implemented, or the
    features it supports. Criticism may be justified, maybe not.

    But you're creating a tool just once, and then use it arbitrary times.
    This is as a user of the tool. So why you care so much is beyond me.
    As a developer of the tool the used algorithms and the build process
    is under your control.
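    As an aside on the declaration bart quotes above: earlier in the thread the intended form was given as int *(*(*)[])(), a pointer to an unbounded array of function pointers returning pointer to int. Such types can be unwound with typedefs; a sketch, with made-up names:

    ```c
    #include <assert.h>

    /* Unwinding int *(*(*)[])() one layer at a time: */
    typedef int *func(void);  /* function returning pointer to int */
    typedef func *funcptr;    /* pointer to such a function        */
    typedef funcptr table[];  /* unbounded array of those pointers */
    typedef table *tableptr;  /* pointer to that array             */

    static int value = 42;
    static int *get_value(void) { return &value; }

    /* Build the structure and read an int back through it. */
    static int demo(void)
    {
        static funcptr entries[] = { get_value };
        tableptr p = &entries;   /* p has the type in question   */
        return *(*p)[0]();       /* deref array ptr, index, call */
    }

    int main(void)
    {
        assert(demo() == 42);
        return 0;
    }
    ```

    (The typedefs use prototyped (void) parameter lists, where the raw declaration leaves the parameters unspecified.)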


    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    What problem? - That you don't want to wait a few seconds? - Or that
    you cannot use that tool when time-traveling "back 45 years"?


    Bear in mind that CDECL (like every finished product you build from
    source) is a working, debugged program. You shouldn't need to do that
    much analysis of it. And here, its performance is not critical either:
    you don't even need fast code from it.

    (Erm.. - so after the rant you're now agreeing?)


    (I recall you were unfamiliar with make
    files, or am I misremembering?)

    I know makefiles. Never used them, never will.

    (Do what you prefer. - After all you're not cooperating with others in
    your personal projects, as I understood, so there's no need to "learn"
    [or just use!] things you don't like. If you think it's a good idea to
    spend time in writing own code for already solved tasks, I'm fine with
    that.)

    You might recall that I create my own solutions.

    I don't recall, to be honest. But let's rather say; I'm not astonished
    that you have "created your own solutions". (Where other folks would
    just use an already existing, flexibly and simply usable, working and
    supported solution.) - So that's your problem not anyone else's.



    Now imagine further if the CPython interpreter was itself written and
    executed with CPython.

    So, the 'speed' of a language (ie. of its typical implementation, which
    also depends on the language design) does matter.

    If speed wasn't an issue then we'd all be using easy dynamic languages

    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and capable
    as possible, so they could be used for more tasks.

    Sure, you would. Obviously. - You've never been the widely accepted
    standard source for sensible general purpose solutions, though.


    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know that.

    You had presented your statement as if there'd be a pressing logical
    decision route. It is not.


    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    No. I neither said nor implied that. (I suggest to re-read what you
    said and what I wrote.)


    Speed is a topic, but as I wrote you have to put it in context

    Actually, the real topic is slowness. I'm constantly coming across
    things which I know (from half a century working with computers) are far slower than they ought to be.

    Fair enough.


    But I'm also coming across people who seem to accept that slowness as
    just how things are. They should question things more!

    I also think that there are not a few people who accept inferior quality;
    how else could the success of, say, DOS, Windows, and off-the-shelf
    MS office software, be explained. Or some persistent deficiencies in
    some GNU/Linux tools and runtime system. Or services presented via Web interfaces.

    Speed is one factor. (I said that before.)


    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you can't tell either.

    I'm pretty sure there are quite a few million users of scripting languages.

    This is of no doubt, I'd say.

    What was arguable was the made-up _decision step_ concerning speed
    and "scripting languages".



    I'm using for very specific types of tasks "scripting languages" -
    and keep in mind that there's no clean definition of that!

    They have typical characteristics as I'm quite sure you're aware. For example:

    Yes, you're right, since I mentioned them I'm aware of them. But they
    are not serving a clear definition of "scripting languages"; they are
    basically just hints.


    * Dynamic typing

    Marcel van der Veer is advertising Genie (his Algol 68 interpreter) as
    a system usable for scripting. (With no dynamic but static typing.)

    * Run from source

    How about JIT, how about intermediate languages?

    * Instant edit-run cycle
    * Possible REPL

    * Uncluttered syntax

    Have a look at the syntax of (e.g.) the Unix shell "scripting language".

    * Higher level features

    Not a distinguishing characteristic of scripting languages.

    * Extensive libraries so that you can quickly 'script' most tasks

    Awk (for example) is a stand-alone scripting language.


    So, interactivity and spontaneity. But they also have cons:

    * Slower execution

    Yes, but they can be rather fast (with intermediate code (GNU Awk),
    precompiled language elements (Genie), or other means). It very much
    depends on the languages, on "both types" of languages.

    * Little compile-time error checking

    (We already commented in your above point "Dynamic typing".)

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).
    But have a look into Unix shells for control constructs, and into
    Kornshell specifically for data structures.

    It's a very inhomogeneous area. Impossible to clearly classify.
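    To illustrate the "dynamic typing / little compile-time checking" trade-off from the list above with a neutral example (generic Python, not tied to any of the languages discussed):

    ```python
    def add(a, b):
        # Dynamically typed: one definition serves ints, strings, lists...
        return a + b

    print(add(2, 3))        # 5
    print(add("ab", "cd"))  # abcd

    # ...but a type mismatch is caught only when the call actually runs,
    # not at any earlier "compile" step:
    try:
        add(2, "three")
    except TypeError as exc:
        print("caught at run time:", exc)
    ```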

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Wed Oct 29 08:06:38 2025
    On 29.10.2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    [...]

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    No, I'm trying to speak about various things; basically my focus
    is the facts. Not the persons involved. But there's persons with
    specific mindsets (like you) that provoke reactions; on flaws in
    your logic, misrepresentations, limited perspectives, etc.


    Build speed is never a problem - ever.

    Like here. You're making things up. - For example I clearly said;
    "Speed is a topic". But since you're so pathologically focused on
    that factor that you miss the important projects' contexts. So I
    then even quoted that (in case you missed it):
    Speed is not an end in itself. It must be valued in comparison
    with all the other often more relevant factors (that you seem to
    completely miss, even when explained to you).

    The speed of any language implemention is never a concern either.

    Nonsense.

    [...]

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    Exactly. Or comparing apples and oranges. - Sadly you do all that
    regularly.

    [...]

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you either can't tell.

    And now you have joined in, to back him up!

    Bart, you should take Keith's words as benevolently meant; all he's trying
    is to get you not to assume that we want to hurt you if we criticize
    any misconceptions in your thinking or your considering a topic only from
    one isolated perspective. If you continue to assume that the "worst"
    was meant, and only against you, you won't get anywhere.

    Keith has explained in his posts exactly what was said and meant, and
    made your discussion maneuvers explicit. (I would have been happier
    if you, Bart, would have noticed yourself what was obvious to Keith.)

    [...]
    [...]

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the

    Obviously you don't understand, and certainly also don't know what
    I think; if you would understand it you wouldn't have written this
    nonsense.

    projects I work on are worthless.

    Actually, as far as I saw your projects, methods, and targets, yes;
    they are completely worthless _for me_. (Mind the emphasis.)

    I also doubt that they are of worth in typical professional contexts;
    since they seem to lack some basic properties needed in professional
    contexts. - But that is your problem, not mine. (I just don't care.)

    [...] Meanwhile he's a 'professional', as stated many times.

    Oh, my perception is that the regulars here are *all* professionals!
    And (typically) even to a high degree. - That's, I think, one reason
    why you sometimes (often?) get headwind from the audience.

    What I'm regularly trying to tell you is that your project setups
    and results might only rarely serve the requirements in professional
    _projects_ as you find them in _professional software companies_.

    You cannot seem to accept that.

    Personally I'm not working anymore professionally. (I mentioned that occasionally.) But I've still the expertise from my professional work
    and education, and I share my experiences with those who are interested.

    You, personally, are of no interest to me; your presumptions are thus
    wrong. (I'm interested in CS and IT topics.)

    Janis

    [...]


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 11:20:47 2025
    On 29/10/2025 07:06, Janis Papanagnou wrote:
    On 29.10.2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    [...]

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    No, I'm trying to speak about various things; basically my focus
    is the facts. Not the persons involved. But there are persons with
    specific mindsets (like you) that provoke reactions; on flaws in
    your logic, misrepresentations, limited perspectives, etc.


    Build speed is never a problem - ever.

    Like here. You're making things up. - For example I clearly said;
    "Speed is a topic". But since you're so pathologically focused on
    that factor that you miss the important projects' contexts. So I
    then even quoted that (in case you missed it):
    Speed is not an end in itself. It must be valued in comparison
    with all the other often more relevant factors (that you seem to
    completely miss, even when explained to you).

    The speed of any language implemention is never a concern either.

    Nonsense.

    [...]

    When I gave the example of my language that was 1000 times faster to
    build than A68G, and which ran that test 10 times faster than A68G, that
    apparently doesn't count; he doesn't care; or I'm changing the goalposts.

    Exactly. Or comparing apples and oranges. - Sadly you do all that
    regularly.

    [...]

    On the face of it, it is uncontroversial: they do allow rapid
    development and instant feedback, as one of their several pros. Yet, JP
    feels the need to be contrary:

    I can't tell about the "many" that you have in mind, and about their
    mindset; I'm sure you either can't tell.

    And now you have joined in, to back him up!

    Bart, you should take Keith's words as benevolently meant; all he's trying
    is to get you not to assume that we want to hurt you if we criticize
    any misconceptions in your thinking or your considering a topic only from
    one isolated perspective. If you continue to assume that the "worst"
    was meant, and only against you, you won't get anywhere.

    Keith has explained in his posts exactly what was said and meant, and
    made your discussion maneuvers explicit. (I would have been happier
    if you, Bart, would have noticed yourself what was obvious to Keith.)

    [...]
    [...]

    You misunderstood what Janis wrote.

    I understand what he's trying to do. He despises me; he thinks the

    Obviously you don't understand, and certainly also don't know what
    I think; if you would understand it you wouldn't have written this
    nonsense.

    projects I work on are worthless.

    Actually, as far as I saw your projects, methods, and targets, yes;
    they are completely worthless _for me_. (Mind the emphasis.)

    I also doubt that they are of worth in typical professional contexts;
    since they seem to lack some basic properties needed in professional contexts. - But that is your problem, not mine. (I just don't care.)

    [...] Meanwhile he's a 'professional', as stated many times.

    Oh, my perception is that the regulars here are *all* professionals!
    And (typically) even to a high degree. - That's, I think, one reason
    why you sometimes (often?) get headwind from the audience.

    What I'm regularly trying to tell you is that your project setups
    and results might only rarely serve the requirements in professional _projects_ as you find them in _professional software companies_.

    Everyone these days can do their own development on their own projects.
    The standards do not need to be that high, the scale need not be that huge.

    Yet the off-the-shelf tools available are still slow and cumbersome.


    You cannot seem to accept that.

    Personally I'm not working anymore professionally. (I mentioned that occasionally.) But I've still the expertise from my professional work
    and education, and I share my experiences with those who are interested.

    You, personally, are of no interest to me; your presumptions are thus
    wrong. (I'm interested in CS and IT topics.)

    I'm interested in developing small, human-scale and *personal* projects
    around compilers, assemblers, linkers, interpreters and emulators. I
    also devise my own languages.

    That they were small, simple, fast, and self-contained with no
    dependencies (a necessity when I started out) was incidental.

    But those aspects are now deliberately cultivated as a stand against
    big, slow, complex tools and complex ecosystems.

    I also (I seem to be unique in this regard) understand the vast
    difference between building a WIP project from source during
    development, which may be done 100s of times a day, and an end user
    building a finished product from source code, just once.


    And yet, most projects that you build from source are just a dump
    of the developer's source tree. No effort is put into making the build
    streamlined, with few points of failure.

    So I am looking at that. And also at the problems of working with large libraries. I posted elsewhere about this: WHY isn't the provided API
    for a library supplied as one compact monolithic header instead of
    dozens or hundreds of separate headers? What possible benefit is that to
    the /user/ of the library?

    In short, I'm doing a lot of experimental work in finding tidy,
    efficient solutions to building personal software, ones that are mainly OS-agnostic too.

    Meanwhile everybody else is striving to do the exact opposite! And in
    this newsgroup, people continuously shout down my work and my views.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Wed Oct 29 14:17:14 2025
    On Wed, 29 Oct 2025 06:57:10 +0100
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 28.10.2025 12:16, bart wrote:

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).
    But have a look into Unix shells for control constructs, and into
    Kornshell specifically for data structures.

    It's a very inhomogeneous area. Impossible to clearly classify.

    Janis


    Less control of data structures means less control of data structures.
    In some (not all) non-scripting languages we have either full control of
    the layout of records (Ada) or at least non-full-but-good-enough-in-practice-if-one-knows-what-he-is-doing control (C).
    In scripting languages the same effect often has to be achieved by
    coding a binary parser in an imperative manner. The imperative style in
    this case is less convenient and more error-prone than the declarative
    style available in Ada and C.
    However, there are many non-scripting languages, including a few of the
    most popular (Java, C#), that in this regard are no better than your
    typical scripting language.
    So maybe a better division here would be not "dynamic, scripting vs statically-typed, non-scripting", but "system-oriented languages vs application-oriented languages".



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 14:40:46 2025
    On 29/10/2025 05:57, Janis Papanagnou wrote:
    On 28.10.2025 12:16, bart wrote:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:

    That's meaningless, but if you're interested to know...
    Mostly (including my professional work) I've probably used C++.
    But also other languages, depending on either projects' requirements
    or, where there was a choice, what appeared to be fitting best (and
    "best" sadly includes also bad languages if there's no alternative).

    Which bad languages are these?

    Are you hunting for a language war discussion? - I won't start it here.
    If you want, please start an appropriate topic in comp.lang.misc or so.

    I'm just looking for /anything/ you don't like! Since you seem to be remarkably uncritical of everything - except all the stuff I do.



    [...]

    That CDECL took, what, 49 seconds on my machine, to process 68Kloc of C?
    That's a whopping 1400 lines per second!

    If we go back 45 years to machines that were 1000 times slower,

    We are not in those days any more.

    The point of the comparison to 1000-times slower hardware is to
    highlight how remarkably slow some modern toolsets are.

    Nowadays there's much more complex
    software; some inherently badly designed software, and in other cases
    they might not care about tweaking the last second out of a process
    (for various reasons). So this comparison isn't really contributing
    anything here.

    Software might be bigger, but that is why you use LPS figures rather
    than overall build-time.

    However, I also picked on this task since it wouldn't have changed
    significantly over those decades.



    So, yeah, build-time is a problem, even on the ultra-fast hardware we
    have now.

    What problem? - That you don't want to wait a few seconds?

    You KNOW compile- and build-times can be a serious bottleneck, and
    people are looking into ways to improve that, other than throwing extra hardware resources at it.

    Either you haven't experienced that, or you are remarkably tolerant and patient.

    The actual problem I picked up on is that the build-time was out of
    proportion to the scale of the task. In this case of a one-off build, it
    is not that consequential. But it suggests something is wrong.

    On current hardware we must surely be able to do better than 1-2K lines
    per second, even if optimising. And I know we can because some products,
    not just mine, can manage 500-1000 times faster.


    I know makefiles. Never used them, never will.

    (Do what you prefer. - After all you're not cooperating with others in
    your personal projects, as I understood, so there's no need to "learn"
    [or just use!] things you don't like. If you think it's a good idea to
    spend time in writing own code for already solved tasks, I'm fine with
    that.)

    You might recall that I create my own solutions.

    I don't recall, to be honest. But let's rather say: I'm not astonished
    that you have "created your own solutions". (Where other folks would
    just use an already existing, flexibly and simply usable, working and supported solution.) - So that's your problem, not anyone else's.

    Because existing solutions DIDN'T EXIST in a practical form (remember I
    worked with 8-bit computers), or they were hopelessly slow and
    complicated on restricted hardware.

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.

    The generated makefile for the 49-module CDECL project is 2000 lines of gobbledygook; that's not really selling it to me!

    If *I* had a 49-module C project, the build info I'd supply would basically be that list of files, plus the source files.

    With my language, you'd need exactly two files: a self-contained
    compiler, and a self-contained source file amalgamation. For a 600KB
    binary, it might take as much as 0.2 seconds to build.

    I consider that a more satisfactory solution than writing 2000 lines of garbage. YMMV.



    I also think that there are not a few people who accept inferior
    quality; how else could the success of, say, DOS, Windows, and
    off-the-shelf MS office software be explained?

    MS products are fairly solid. They are superb at backwards compatibility
    and at compatibility across machines in general. That's why Windows apps
    can be supplied as binaries that will work on any Windows machine.

    However they tend to be absolutely huge, complicated and slow, even more
    so than any Linux tools. (It once took 90 minutes to install VS.
    Starting it - usually inadvertently due to file associations - took 90 seconds.)

    * Dynamic typing

    Marcel van der Veer is advertising Genie (his Algol 68 interpreter) as
    a system usable for scripting. (With static rather than dynamic typing.)

    This product is unusual, but then it's not clear where Algol 68 lies.
    It's not really a static language like C, Rust, Zig, Go, Java ... but
    it's also not as high-level as ones like Haskell or OCaml, which are
    static or type-inferred.

    The first group are usually compiled but may offer interpreted options.
    Such languages can be naturally converted to performant native code.

    However A68G prioritises interpretation. While there is a
    compile-to-native option, it's not very performant.

    So overall it's a curiosity. (An interesting one, because after several decades I was finally able to try out Algol 68 for real. I wasn't
    impressed, and that had nothing to do with its speed either.)



    * Run from source

    How about JIT, how about intermediate languages?

    Intermediate languages (designed for compiler backends) are irrelevant. Whether they even have a textual source format is a detail.

    JIT-ing used in place of AOT-compilation for static languages is
    something new. I haven't come across examples so I don't know how it
    comes across, or what latencies there might be.

    Personally, I can run both C (single file programs ATM) and my languages directly from source, with no discernible delay, via a VERY FAST AOT
    step. But I wouldn't class them as scripting languages for other reasons.


    * Little compile-time error checking

    (We already commented in your above point "Dynamic typing".)

    There's more that could be done. Take:

    F(x, y, z)

    F is a function in some imported module. In most dynamic languages, the
    import is done at runtime, so the number of arguments, or whether F is
    even a function, can't be checked at compile time.

    In my dynamic language, the import is done at compile-time so there's
    more that can be checked in advance. It's less dynamic, but x, y, z can
    still be dynamically typed.

    * Less control (of data structures for example)

    Not sure what you mean (control constructs, more data structures).

    I mean things like layouts of structs, or even the exact form of an
    array. Again, mine has FFI abilities built-in, and directly supports
    C-like data types.

    So either of these user types can be defined:

    record date1 =
        var d, m, y        # can hold any types
    end

    type date2 = struct
        u8 d, m
        u16 y
    end

    An instance of the latter occupies 4 bytes; of the former, 48+32 bytes
    plus whatever big data the members may contain.

    Most dynamic languages don't natively support the latter kind of data
    type. Actually, many don't even directly have records with named fields
    like the first; they have to be emulated.


    It's a very inhomogeneous area. Impossible to clearly classify.

    Ask some people for examples of what they think of as scripting
    languages. I'd be interested in what they say.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tTh@3:633/10 to All on Wed Oct 29 16:09:20 2025
    On 10/29/25 15:40, bart wrote:

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.


    s/don't need/refuse to use/

    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Wed Oct 29 16:36:26 2025
    On 28/10/2025 20:14, Janis Papanagnou wrote:
    On 28.10.2025 15:59, David Brown wrote:
    On 28/10/2025 03:00, Janis Papanagnou wrote:
    On 27.10.2025 21:39, Michael S wrote:

    [ snip Lua statements ]

    Algol 68 is a great source of inspiration for designers of
    programming languages.

    Obviously.

    Useful programming language it is not.

    I have to read that as valuation of its usefulness for you.
    (Otherwise, if you're speaking generally, you'd be just wrong.)


    The uselessness of Algol 68 as a programming language in the modern
    world is demonstrated by the almost total non-existence of serious tools
    and, more importantly, real-world code in the language.

    Obviously you are mixing the terms usefulness and dissemination
    (its actual use). Please accept that I'm differentiating here.

    There are quite a few [historic] languages that were very useful but
    never disseminated. (For another prominent example cf. Simula, which
    not only invented the object-oriented principles of classes and
    inheritance and was a paragon for quite a few later OO languages, but
    also made many more technical and design innovations, some even now
    still unmatched.) It's a pathological historical phenomenon that
    programming languages from non-US locations had inherent problems
    disseminating, especially back in those days!

    Reasons for the dissemination of a language are manifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historical documents and also in later ruminations about computing history)

    I can certainly agree that some languages, including Algol, Algol 68 and Simula, have had very significant influence on the programming world and
    other programming languages, despite limited usage. I was interpreting "useful programming language" as meaning "a language useful for writing programs" - and neither Algol 68 nor Simula are sensible choices for
    writing code today. Neither of them were ever appropriate choices for
    many programming tasks (Algol and its derivatives were used a lot more
    than Algol 68). The lack of significant usage of these languages beyond
    a few niche cases is evidence (but not proof) that they were never particularly useful as programming languages.


    It certainly /was/ a useful programming language, long ago,

    ...as you seem to basically agree to here. (At least as far as you
    couple usefulness with dissemination.)

    I do couple these, yes. I agree with you that there are many reasons
    for the popularity of languages other than technical suitability, but
    many of these add up to the general "usefulness" of the language. When choosing the language to use for a particular task, the availability of programmers familiar with the language, the availability of tools,
    libraries, and existing code, can be just as important as the language's efficiency, expressibility, or any technical benefits. Consider Bart's language - if we take him at face value, it is the fastest, clearest,
    most logical, most powerful, and generally best programming language
    ever conceived. But for almost every programmer on the planet, it is completely useless.

    Similarly, Algol 68 may have been the technically best language of its
    age, and highly influential on other languages, and yet still not a
    useful programming language. It could also have been a useful
    programming language in its day, and no longer be a useful programming language.


    but it has not been
    seriously used outside of historical hobby interest for half a century.

    (Make that four decades. It was still being used in the mid-1980s. - Later
    I didn't follow it anymore, so I cannot tell about the 1990s.)

    (I also disagree with your valuation "hobby interest"; for "hobbies"
    more easily accessible languages were used, not systems that back in
    those days were mainly available only on mainframes.)

    I did not suggest that it is now, or ever has been, an appropriate
    language for hobby programmers - I don't know the language enough to
    judge. I suggested that anyone programming in Algol 68 today is likely
    to be doing so as a hobby or for historical interest. (There may be the occasional professional maintaining ancient Algol code for ancient
    mainframes that are still in use.)


    As far as you mean in programming software systems, that may be true;
    I cannot tell that I'd have an oversight who did use it. I've read
    about various applications, though; amongst them that it's even been
    used as a systems programming language (where I was astonished about).


    My understanding - which may well be flawed - is that Algol 60 and many non-standard variants were used quite widely at the time. Algol 68, on
    the other hand, never really took off.

    And unlike other ancient languages (like Cobol or Fortran) there is no
    code of relevance today written in the language.

    Probably right. (That would certainly be also my guess.)

    Original Algol was
    mostly used in research, while Algol 68 was mostly not used at all. As
    C.A.R. Hoare said, "As a tool for the reliable creation of sophisticated
    programs, the language was a failure".

    I don't know the context of his statement. If you know the language
    you might admit that reliable software is exactly one strong property
    of that language. (Per se already, but especially so if compared to
    languages like "C", the language discussed in this newsgroup, with an extremely large dissemination and also impact.)


    I don't know the context either.


    I'm sure there are /some/ people who have or will write real code in
    Algol 68 in modern times

    The point was that the language per se was and is useful. But its
    actual usage for developing software systems seems to have been small,
    and today, without doubt, it is of no importance.

    (the folks behind the new gcc Algol 68
    front-end want to be able to write code in the language),

    There's more than the gcc folks. (I've heard that gcc has taken some substantial code from Genie, an Algol 68 "compiler-interpreter" that
    is still maintained. BTW, I'm for example using that one, not gcc's.)

    but it is very much a niche language.

    It's _functionally_ a general-purpose language, not a niche language
    (in the sense of "special-purpose language"). Its dissemination makes
    it a "niche language", that's true. It's in practice just a dead
    language. It's rarely used by anyone. But it's a very useful language.


    Can you give any examples of situations where it might be reasonable to
    choose Algol 68 as a language /today/ for a piece of code, rather than a
    more mainstream language (C, Python, Java, Pascal, Visual Basic,
    whatever) ? If such situations are very rare or non-existent, then I do
    not see it as a useful language.

    But I think we are mostly disagreeing about what we consider the term
    "useful programming language" to mean.






    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Wed Oct 29 17:12:44 2025
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get wrong regularly:

    The opposite of "X is always true" is /not/ "X is always false" or that
    "(not X) is always true". It is that "X is /sometimes/ false", or that
    "(not X) is /sometimes/ true".
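    [Editorial note: in predicate-logic terms, the point is the standard negation rule for a universal claim.]

```latex
\neg\bigl(\forall x.\, P(x)\bigr) \;\equiv\; \exists x.\, \neg P(x)
\qquad\text{and not}\qquad
\neg\bigl(\forall x.\, P(x)\bigr) \;\equiv\; \forall x.\, \neg P(x)
```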

    You get this wrong repeatedly when you and I are in disagreement, and I
    see it again and again with other people - such as with both Janis and
    Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
    in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that
    build speed can be an issue - they disagree with your claims that it is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    When Janis disagrees with you, he is not trashing /everything/ you say,
    he is disagreeing with /some/ of what you say.

    No one disagrees that /some/ people would change to using dynamic
    scripting languages if they had no runtime speed penalty compared to
    compiled languages - but probably everyone would disagree with a claim
    that /all/ programmers would change. And no one here thinks that either
    you or anyone has a reasonable basis for judging how many that "some"
    would be.

    So please, stop making this kind of mistake. I am confident that you understand the logic here. But you regularly write as though you do
    not, setting up nonsensical straw man arguments as a result. And then
    you make claims about what other people think or said based on this.
    Yes, it is very much /you/ who is difficult to communicate with.

    And you should not be surprised if Keith agrees with you sometimes -
    like I do, like Janis does, and like most people here do, he judges your points as best he can and agrees with some and disagrees with others.
    These discussions are not black-or-white, all-or-nothing affairs. If
    you like to hear positive feedback and agreement on your comments (and
    who doesn't like that?), you need to pay attention to what people write
    and notice when people agree with you rather than focusing only on when
    they disagree. Cut the paranoia, drop the straw men and exaggerations,
    argue your case logically, listen to the replies and feedback you get,
    and the whole discussion will be a lot more enjoyable and productive.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 16:47:39 2025
    On 29/10/2025 15:09, tTh wrote:
    On 10/29/25 15:40, bart wrote:

    I don't need a linker, I don't need a makefile, I don't need lists of
    dependencies between modules, I don't need independent compilation, I
    don't use object files.


    s/don't need/refuse to use/

    It looks like Python refuses to use all those things too!

    Think about that, then think about how it might be possible for a
    language and implementation to use an alternate path to get from source
    code to executable. One that is simpler.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 17:24:49 2025
    On 29/10/2025 15:36, David Brown wrote:
    On 28/10/2025 20:14, Janis Papanagnou wrote:

    Reasons for dissemination of a language are multifold; back then
    (but to a degree also today) they were often determined by political
    and marketing factors... (you can read about that in various historic
    documents and also in later ruminations about computing history)

    I can certainly agree that some languages, including Algol, Algol 68 and Simula, have had very significant influence on the programming world and other programming languages, despite limited usage. I was interpreting "useful programming language" as meaning "a language useful for writing programs" - and neither Algol 68 nor Simula are sensible choices for
    writing code today. Neither of them were ever appropriate choices for
    many programming tasks (Algol and its derivatives were used a lot more
    than Algol 68). The lack of significant usage of these languages beyond
    a few niche cases is evidence (but not proof) that they were never particularly useful as programming languages.

    Algol68, while refreshingly different when I came across it in the late
    70s, was a complex language.

    Its reference document, the Revised Report, with its two-level van
    Wijngaarden grammar, suggested a language too much up its own arse.

    Its complexities tended to leak even into straightforward features that
    people are familiar with from other languages.

    Understanding it, and confidently using it, looked hard. Implementing it
    must have been a lot harder.

    Also, at the time I'd only ever seen examples of it in print, where it
    was beautifully typeset and looked gorgeous.

    The reality when I finally got to try it was very different. You spent
    half the time fighting with upper/lower case and trying to get
    semicolons right. And most of the rest grappling with esoteric error
    messages couched in terms from the Revised Report (which has its own vocabulary).

    I borrowed some syntactic features I considered cool, but I had to
    produce a real, practical systems language for microprocessors, whose
    compiler had to run on the same machine.

    From this perspective, I consider it rather dreadful now, with lots of dubious-sounding aspects.

    Take this one: comments start with '#' (an alternative to COMMENT) and
    also end with '#'. Leave out '#' (or have a stray one) and everything
    now gets out of step.

    Or this one:

    print((2 + 3 * 4));

    BEGIN
        PRIO * = 5;
        print((2 + 3 * 4))
    END

    The first print shows 14. The second shows 20, as the precedence of '*'
    has been set to match that of '+'.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 19:24:12 2025
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!
    I'll give this one more try.
    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    No, I'm talking to you. It turns out that was a mistake.

    My post was **only** about your apparent confusion about a single
    statement, quoted above. I wasn't talking about JP personally, or about
    any of his other interactions with you. I explained in great detail
    what I was referring to. You ignored it.

    You seem unwilling or unable to focus on one thing.

    He (I assume) always dismisses every single one of my arguments out of hand:
    Build speed is never a problem - ever. The speed of any language
    implemention is never a concern either.

    And here you are putting words in other people's mouths.

    I think your goal is to argue, not to do anything that might result in agreement or learning.

    Again, I think you're mixing up me and JP, whose only goal is to
    contradict and refute everything I say.

    I say: X has some problem; Y doesn't have that problem. This is about approaches to building software.

    He refuses to acknowledge that X has any problem whatsoever, or shrugs
    off its importance.

    He refuses to accept that Y is a solution, because I devised it and he
    looks down upon me because he considers himself superior.


    He refuses to accept Z (which I haven't devised) for other reasons (to
    avoid admitting that I might have a point).

    The problems are X are real and I think you have acknowledged them. But
    I have decades of experience of viable alternatives so I think I can
    offer an educated, alternative opinion

    I don't think JP has offered any better alternatives, and has not
    devised any that I am aware of. So he is just a user of such software
    and not a creator.

    This is rather frustrating to me. You seem to be on his side, and don't
    care about X versus Y either.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 21:21:34 2025
    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world
    might also have the same desire, you'd say that I can't possibly know
    that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language
    implemention is never a concern either.


    Bart, I think this all comes down to some basic logic that you get wrong regularly:

    The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true". It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and I
    see it again and again with other people - such as with both Janis and Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
    has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
    in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that
    build speed can be an issue - they disagree with your claims that it
    is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).

    A68G I know takes 90 seconds to build (since I've just tried it again;
    it took long enough that I had an ice-cream while waiting, so that's something).

    That's under 1Kloc per second; not great.

    But at least all the optimising would have produced a super-fast
    executable? Well, that's disappointing too; no-one can say that A68G is
    fast.

    I said that my equivalent product was 1000 times faster to build (don't
    forget the configure nonsense) and it ran 10 times faster on the same test.

    That is a quite remarkable difference. VERY remarkable. Only some of it
    is due to my product being smaller (but it's not 1000 times smaller!).

    This was stated to demonstrate how different my world was.

    My view is that there is something very wrong with the build systems
    everyone here uses. But I can understand that no one wants to admit that they're that bad.

You find ways around it, you get inured to it, and you use much more
powerful machines than mine, but I would go round the bend if
I had to work with something so unresponsive.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Ben Bacarisse@3:633/10 to All on Wed Oct 29 21:30:44 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Michael S <already5chosen@yahoo.com> writes:
    On Tue, 28 Oct 2025 16:05:47 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:
    ...
    There is still one computer system that uses Algol as both
    the system programming language, and for applications.

    Unisys Clearpath (descendents of the Burroughs B6500).


    Is B6500 ALGOL related to A68?

    A-series ALGOL has many extensions.

    DCAlgol, for example, is used to create applications
    for data communications (e.g. poll-select multidrop
    applications such as teller terminals, etc).

    NEWP is an algol dialect used for systems programming
    and the operating system itself.


    ALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000098-517/86000098-517.pdf
    DCALGOL: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-19.0/86000841-208.pdf
    NEWP: https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86002003-409.pdf

    None of these are related to Algol 68, any more than any other
    Algol-like language might be. None exhibit any of the key features that distinguish Algol 68 from Algol 60 or any of the many Algol-like
    languages such as Algol W or S-algol (sic).

    --
    Ben.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Wed Oct 29 15:10:41 2025
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages [...]

    Bart, is the above statement literally accurate? Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    That's what this whole sub-argument is about.

Maybe your statement was meant to be hyperbole, and what you
really meant is that dynamic languages would be more popular than
they are now if speed were not an issue. Possibly someone just took
your figurative statement a little too literally. If that's the
case, please just say so.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 00:04:43 2025
    On 29/10/2025 22:21, bart wrote:
    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.

However, if I dare to suggest that even one other person in the world
might also have the same desire, you'd say that I can't possibly know
that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out
    of hand:

    Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.


    Bart, I think this all comes down to some basic logic that you get
    wrong regularly :

    The opposite of "X is always true" is /not/ "X is always false" or
that "(not X) is always true". It is that "X is /sometimes/ false",
    or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and
    I see it again and again with other people - such as with both Janis
    and Keith.


    Bart, did you understand what I wrote here? Do you agree with it - or
    at least accept how your posts can be interpreted this way? If you
    can't change the way you express yourself, these threads will always end
    with you repeating wild exaggerations and generalisations on your
    favourite rants, no matter what the original topic, and you'll again get frustrated because you feel "everyone is against you". We get more than enough of that with Olcott - I know you can do better.

    No one, in any of the posts I have read in c.l.c. in countless years,
has ever claimed that "build speed is /never/ a problem". People have
    regularly said that it /often/ is not a problem, or it is not a
    problem in their own work, or that slow compile times can often be
dealt with in various ways so that it is not a problem. People don't
    disagree that build speed can be an issue - they disagree with your
    claims that it is /always/ an issue (except when using /your/ tools,
    or perhaps tcc).

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G. I have no stake in cdecl or knowledge (or particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand. I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author of
    the program. I am also confident that you know far too little about
    what the program can do, or why it was written the way it was, to judge whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts. The "src" directory from the
    github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files. The total is therefore about 68 kloc of
    source. This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included. Let's guess
    100 kloc.

    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler. (Don't ask me what
    it is doing - I did not write this software, design its build process,
    or determine how the program is structured and how it is generated by
    yacc or related tools. This is not my area of expertise.) If for some strange reason I choose to run "make" rather than "make -j", thus
    wasting much of my computer's power, it takes 16 seconds. Some of these non-compilation steps do not appear to be able to run in parallel, and a couple of the compilations (like "parser.c", which appears to be from a
    parser generator rather than specifically written) are large and take a
    couple of seconds to compile. My guess is that the actual compilations
    are perhaps 4 seconds. Overall, I make it 25 kloc per second. While I
    don't think that is a particularly relevant measure of anything useful,
    it does show that either you are measuring the wrong thing, using a
    wildly inappropriate or limited build environment, or are unaware of how
    to use your computer to build code. (And my computer cpu was about 30%
    busy doing other productive tasks, such as playing a game, while I was
    doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    And you claim your own tools would be 1000 times faster. Maybe they
    would be. Certainly there have been tools in the past that are much
    smaller and faster than modern tools, and were useful at the time.
    Modern tools do so much more, however. A tool that doesn't do the job
    needed is of no use for a given task, even if it could handle other
    tasks quickly.

    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough. No one
    cares how long cdecl takes to build. Almost everyone who wants it will download a binary file - "apt-get install cdecl", or similar. The only
    people who bother to compile it are those who want the cutting edge
    version. And even if it takes a minute or two to build, so what? It
    does not matter. If it took an hour, that would be annoying if you
    wanted to run it /now/, but even then if it were a useful tool (to the
    user in question), all you need to do is start it running and then let
    it churn away in the background. Computers are really good at doing
    that kind of stuff, and don't get bored easily. Building is a one-time
    task. (If the edit-build-test cycle for the developers took an hour,
    that would be a totally different matter.)

    Of course everyone agrees that smaller and faster is better, all things
    being equal - but all things are usually /not/ equal, and once something
    is fast enough to be acceptable, making it faster is not a priority.

    You can view all this as "bad" if you want. But since the size of the
    source code for cdecl, the time it takes to build, the use of autotools,
    the out-the-box Windows experience, and the length of configure script
    have absolutely /zero/ influence on whether or not I would use cdecl, or
    how useful I would find it, why should I care about those things? I
    don't think they are relevant to the vast majority of other potential
    cdecl users either, and thus do not have to care for their experiences
    either.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From vallor@3:633/10 to All on Wed Oct 29 23:11:36 2025
    At Wed, 29 Oct 2025 21:21:34 +0000, bart <bc@freeuk.com> wrote:

    On 29/10/2025 16:12, David Brown wrote:
    On 29/10/2025 00:14, bart wrote:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic
    languages
    Huh? - Certainly not.

    *I* would! That's why I made my scripting languages as fast and
capable as possible, so they could be used for more tasks.

    However, if I dare to suggest that even one other person in the world >>>> might also have the same desire, you'd say that I can't possibly know >>>> that.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    I'll give this one more try.

    This kind of thing makes it difficult to communicate with you.

    You're talking to the wrong guy. It's JP who's difficult to talk to.

    He (I assume) always dismisses every single one of my arguments out of
    hand:

    Build speed is never a problem - ever. The speed of any language
implementation is never a concern either.


Bart, I think this all comes down to some basic logic that you get wrong regularly:

The opposite of "X is always true" is /not/ "X is always false" or that "(not X) is always true". It is that "X is /sometimes/ false", or that "(not X) is /sometimes/ true".

    You get this wrong repeatedly when you and I are in disagreement, and I see it again and again with other people - such as with both Janis and Keith.

    No one, in any of the posts I have read in c.l.c. in countless years,
has ever claimed that "build speed is /never/ a problem". People have regularly said that it /often/ is not a problem, or it is not a problem
in their own work, or that slow compile times can often be dealt with in various ways so that it is not a problem. People don't disagree that build speed can be an issue - they disagree with your claims that it
    is /always/ an issue (except when using /your/ tools, or perhaps tcc).

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).

    Not sure if it's worth it, but my 2 cents:

    You can throw more processors at your "make" with the "-j" switch,
    something like:

    $ make -j $(nproc)

    Where $(nproc) substitutes the number of processors on your system
    for a parallel make.


    A68G I know takes 90 seconds to build (since I've just tried it again;
    it took long enough that I had an ice-cream while waiting, so that's something).

    That's under 1Kloc per second; not great.

    But at least all the optimising would have produced a super-fast
    executable? Well, that's disappointing too; no-one can say that A68G is fast.

    I said that my equivalent product was 1000 times faster to build (don't forget the configure nonsense) and it ran 10 times faster on the same test.

    That is a quite remarkable difference. VERY remarkable. Only some of it
    is due to my product being smaller (but it's not 1000 times smaller!).

    This was stated to demonstrate how different my world was.

    My view is that there is something very wrong with the build systems everyone here uses. But I can understand that no one wants to admit that they're that bad.

You find ways around it, you get inured to it, and you use much more
powerful machines than mine, but I would go round the bend if
I had to work with something so unresponsive.





    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.5 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "Let's split up, we can do more damage that way."

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 23:19:10 2025
    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    [...]

    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're both
    being overly pedantic. You could just agree with me you know!

    'If X then we'd all be doing Y' is a common English idiom, suggesting X
    was a no-brainer.


    Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are implemented,
    could always deliver native code speeds, then a huge number of people,
    and companies, would switch because of that and other benefits.

Bear in mind that if that was the case, then new dynamic languages could emerge that help broaden their range of applications.




    That's what this whole sub-argument is about.

Well I didn't start it. Somebody suggested the speed of a language implementation had little relevance (not willing to admit the
shortcomings of A68G), and I suggested in light-hearted idiom that if
dynamic languages were much faster, their take-up would be much greater.

    What should I have said, that it would increase by 54.91% over the next
    4 quarters?

    (Remind me to run my posts through a lawyer next time.)


    really meant is that dynamic languages would be more popular than
    they are now if speed were not an issue. Possibly someone just took
    your figuratative statement a little too literally. If that's the
    case, please just say so.

Oh, you finally got it! See, it wasn't hard.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Wed Oct 29 23:29:42 2025
    On 29/10/2025 20:33, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:
    On 26/10/2025 16:04, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 26/10/2025 06:25, Janis Papanagnou wrote:


    However the A68G configure script is 11000 lines; the CDECL one 31600 lines.

    (I wonder why the latter needs 20000 more lines? I guess nobody is
    curious - or they simply don't care.)

    You should be able to figure that out yourself. You may actually
    learn something useful along the way.


    So you don't know.

    What special requirements does CDECL have (which has a task that is at
    least a magnitude simpler than A68G's), that requires those 20,000 extra
    lines?

    I did not look deeply, but cdecl is using automake and related
    tools. IIUC you can have small real source, and depend on
autotools to provide tests. This is likely to bring tons of
    irrelevant tests into configure. Or you can specify precisely
    which tests are needed. In the second case you need to
    write more code, but generated configure is smaller.

My working hypothesis is that cdecl is a relatively simple program,
so autotools defaults lead to a working build. And nobody was
motivated enough to select what is needed, so configure
contains a lot of code which is useful sometimes, but probably
not for cdecl.

BTW: In one "my" project there is a hand-written configure.ac
which selects the tests that are actually needed for the
project. Automake is _not_ used. The generated configure
has 8564 lines. But the project has rather complex
requirements and autotools defaults are unlikely to
work, so one really has to explicitly handle various
details.



    I have a project coming up next month: a subset of my C compiler, which
    is not written in C, being ported to actual C.

    What I'm thinking of doing is taking part of that project, and creating
    a standalone program that does the 'explain' part of cdecl, and only for
C, not C++. This would not be worth doing by itself.

    Then I can make that available to see how it looks and how it builds.

    But I do not expect it to need anything other than a C compiler, and it
    should work on any OS (it needs only a keyboard and a display).


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Wed Oct 29 16:47:29 2025
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough. No
    one cares how long cdecl takes to build.
    [...]

    Since the most recent argument here has been about the interpretation
    of an absolute statement, I think I should point out that your last
    statement above is not literally true. *Some* people do care how
    long cdecl takes to build. Most of us, I think, don't particularly
    care as long as it's no more than a few minutes.

I understand what you meant, but in a discussion about hyperbolic
    statements being taken literally, I suggest it's good to be
    painfully precise.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 00:36:05 2025
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


I have no interest in A68G. I have no stake in cdecl or knowledge (or particular interest) in how it was written, and how appropriate the
number of lines of code are for the task in hand. I am confident it
could have been written in a different way with less code - but not at
all confident that doing so would be in any way better for the author of
the program. I am also confident that you know far too little about
what the program can do, or why it was written the way it was, to judge whether it has a "reasonable" number of lines of code, or not.

However, it's easy to look at the facts. The "src" directory from the github clone has about 50,000 lines of code in .c files, and 18,000
lines of code in .h files. The total is therefore about 68 kloc of
source. This does not at all mean that compilation processes exactly 68 thousand lines of code - it will be significantly more than that as
headers are included by multiple files, and lots of other headers from
the C standard library and other libraries are included. Let's guess
100 kloc.

    Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
    the slowdown is due to some large headers that lie outside the problem
    (not the standard headers), but so what? (That would be a shortcoming of
    the C language.)

    The A68G sources also contain lots of upper-case content, so perhaps
    macro expansion is going on too.

The bottom line is this is an 80Kloc app that takes that long to build.


The build process takes 8 seconds on my decade-old machine, much of
which is something other than running the compiler. (Don't ask me what
it is doing - I did not write this software, design its build process,
or determine how the program is structured and how it is generated by
yacc or related tools. This is not my area of expertise.) If for some strange reason I choose to run "make" rather than "make -j", thus
wasting much of my computer's power, it takes 16 seconds. Some of these non-compilation steps do not appear to be able to run in parallel, and a couple of the compilations (like "parser.c", which appears to be from a parser generator rather than specifically written) are large and take a couple of seconds to compile. My guess is that the actual compilations
are perhaps 4 seconds. Overall, I make it 25 kloc per second. While I don't think that is a particularly relevant measure of anything useful,
it does show that either you are measuring the wrong thing, using a
wildly inappropriate or limited build environment, or are unaware of how
to use your computer to build code.

    Tell me then how I should do it to get single-figure build times for a
    fresh build. But whatever it is, why doesn't it just do that anyway?!

    (And my computer cpu was about 30%
    busy doing other productive tasks, such as playing a game, while I was
    doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected.˙ This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

    root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
    real 1m32.205s
    user 0m40.813s
    sys 0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make
    output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
    137 seconds (using SD storage; the PC uses SSD), so perhaps 40 seconds
    on the PC, suggesting that the underlying Windows file system may be
    slowing things down, but I don't know.

    However the same PC, under actual Windows, manages this:

    c:\qx>tim mm qq
    Compiling qq.m to qq.exe (500KB but half is data; A68G is 1MB?)
    Time: 0.084

    And this:

    c:\cx>tim tcc lua.c (250-400KB)
    Time: 0.124

    And you claim your own tools would be 1000 times faster.

In this case, yes. The figure is more typically around 100 if the other compiler is optimising; however, that would be for representations of the
same program. A68G is somewhat bigger than my product.

Maybe they
would be. Certainly there have been tools in the past that are much
smaller and faster than modern tools, and were useful at the time.
Modern tools do so much more, however. A tool that doesn't do the job needed is of no use for a given task, even if it could handle other
tasks quickly.

    It ran my test program; that's what counts!





    But the crux of the matter, and I can't stress this enough as it never
seems to get through to you, is that fast enough is fast enough. No one cares how long cdecl takes to build.

    I don't care either; I just wanted to try it.

    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits when
    it does matter.

    (An article posted in Reddit detailed how a small change in how Clang
    worked made a 5-7% difference in build times for large projects.

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    Of course everyone agrees that smaller and faster is better, all things being equal - but all things are usually /not/ equal, and once something
    is fast enough to be acceptable, making it faster is not a priority.

    My compilers have already reached that threshold (most stuff builds in
    the time it takes to take my finger off the Enter button). But most
    mainstream compilers are a LONG way off.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Wed Oct 29 18:03:02 2025
    bart <bc@freeuk.com> writes:
    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic languages
    [...]
    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're
    both being overly pedantic. You could just agree with me you know!

    I have agreed with a significant number of your statements in the recent
    past. I would not consider agreeing with this particular statement
    without understanding just what you meant by it. (That would be a
necessary but not sufficient prerequisite for my agreement.)

    'If X then we'd all be doing Y' is a common English idiom, suggesting
    X was a no-brainer.

    So you were being figurative, not literal. That's what I thought.
    Thank you for confirming it.

    Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are
    implemented, could always deliver native code speeds, then a huge
    number of people, and companies, would switch because of that and
    other benefits.

    You are conflating "a huge number of people" with "ALL". I suppose this
    is meant to be hyperbole.

    You wrote :

    If speed wasn't an issue then we'd all be using easy dynamic
    languages

    Janis replied :

    Huh? - Certainly not.

    Your reply to that was :

    *I* would! That's why I made my scripting languages as fast and
    capable as possible, so they could be used for more tasks.

    That is not responsive to what Janis wrote. I'm 99% sure that
    Janis's stated opinion is that *some but not all* programmers would
    switch to "easy dynamic langauges" if speed were not an issue.
    Telling us that you would does not contradict what Janis wrote
    or meant.

    However, if I dare to suggest that even one other person in the
    world might also have the same desire, you'd say that I can't
    possibly know that.

    No. If you suggested that one or more other people would switch to
    dynamic languages if speed were not an issue, I probably wouldn't even
    reply, because that statement would be so obviously true that it
    wouldn't be worth discussing. Your ideas about what other people think
    are so distorted that you assume we would disagree.

    And yet here you are: you say 'certainly not'. Obviously *you* know
    everyone else's mindset!

    And that's just nonsense, and *completely* nonresponsive to what Janis
    wrote.

    Your position is that, if speed were not an issue, "a huge
    number of people, and companies, would switch" to "easy dynamic
    languages". My position, and I believe Janis's position, is that *many*
people and companies would likely switch to such languages in those circumstances, but probably not "a huge number". (I'm not interested in debating what "a huge number" means.) (I acknowledge the possibility that
you're right and Janis and I are wrong, but we'll never know, because
speed will never not be an issue. In any case, the point of this reply
is to establish what was actually said, not who is right or wrong.)

    When Janis expressed skepticism about your claim that either "all"
    or "a huge number" of people would switch, you reacted exactly as
if Janis had said that *nobody* would switch. You were offended by
    something that neither Janis nor anyone else wrote or suggested.
    I don't care who started the argument, but your misinterpretation
    of what Janis wrote is what has caused it to continue.

    This kind of thing keeps happening.

    Do you understand what I'm saying?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tTh@3:633/10 to All on Thu Oct 30 05:00:15 2025
    On 10/30/25 01:36, bart wrote:

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    This page is about C++, not C. It was irrelevant in
    this newsgroup. Try again, Bart.

    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Wed Oct 29 21:24:34 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -j 5" might be worthwhile if some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.
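    The wrapper-script update mentioned above might look something like
    this minimal sketch (assuming GNU make and coreutils' nproc; the
    fallback of one job where nproc is unavailable is my own choice):

```shell
#!/bin/sh
# Choose a parallel job count for make: nproc reports the number of
# available processing units; fall back to 1 where it is unavailable.
jobs=$(nproc 2>/dev/null || echo 1)
# Shown as an echo for illustration; a real wrapper would instead run:
#   exec make -j "$jobs" "$@"
echo "make -j $jobs"
```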

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From vallor@3:633/10 to All on Thu Oct 30 04:52:50 2025
    At Wed, 29 Oct 2025 21:24:34 -0700, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -j 5" might be worthwhile if some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.

    [...]

    I cloned the cdecl archive to ramdisk and timed the installation commands:

    $ time -p ./bootstrap
    [...]
    real 6.13
    user 4.59
    sys 0.54

    $ time -p ./configure
    [...]
    real 11.94
    user 5.24
    sys 6.13

    $ time -p make -j$(nproc)
    [...]
    real 3.57
    user 11.01
    sys 2.74

    $ time -p sudo make install
    [...]
    real 0.35
    user 0.00
    sys 0.01

    On this system:

    $ nproc
    64

    $ grep 'model name' /proc/cpuinfo | uniq
    model name : AMD Ryzen Threadripper 3970X 32-Core Processor

    This workstation is a few years old, but I don't see any need to replace
    it at this point.

    The numbers above will hopefully give naysayers of autoconf and
    make pause for thought...

    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.6 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "It's deja vu all over again."

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From vallor@3:633/10 to All on Thu Oct 30 05:38:17 2025
    At Thu, 30 Oct 2025 04:52:50 +0000, vallor <vallor@vallor.earth> wrote:

    $ grep 'model name' /proc/cpuinfo | uniq
    model name : AMD Ryzen Threadripper 3970X 32-Core Processor

    That was on Linux. Now in a virt running Cygwin on Windows 11 Pro for Workstations...C drive image is on my NAS, connected with 10G-base-T.
    nproc is 4.

    CYGWIN_NT-10.0-26100 w11 3.6.5-1.x86_64 2025-10-09 17:21 UTC x86_64 Cygwin

    $ time -p ./bootstrap
    [...]
    real 14.29
    user 6.55
    sys 3.39

    $ time -p ./configure
    [...]
    real 106.75
    user 38.89
    sys 46.26

    $ time -p make -j$(nproc)
    [...]
    real 31.40
    user 50.76
    sys 15.83

    $ time -p make install
    [...]
    real 3.28
    user 1.24
    sys 1.52

    So configure took 1:47. Also, that's a bit misleading, because
    I had to run ./configure multiple times, and use the cygwin package
    manager to install dependencies: flex, bison, and libreadline-dev.

    I could have run it on a RAMdisk, but wasn't worth my time to figure
    out how to set one up in Windows...which probably would have taken
    more than 107 seconds to do anyway.

    Seems like ./configure could be made faster, though, but one
    only runs it occasionally...

    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.17.6 D: Mint 22.2 DE: Xfce 4.18
    NVIDIA: 580.95.05 Mem: 258G
    "Honey, PLEASE don't pick up the PH$@#*&$^(#@&$^%(*NO CARRIER"

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Richard Heathfield@3:633/10 to All on Thu Oct 30 07:45:15 2025
    On 30/10/2025 04:24, Keith Thompson wrote:
    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    Well, let's see, on approximately 10,000 lines of code:

    $ make clean
    $ time make

    real 0m2.391s
    user 0m2.076s
    sys 0m0.286s

    $ make clean
    $ time make -j $(nproc)

    real 0m0.041s
    user 0m0.021s
    sys 0m0.029s

    That's a reduction in wall clock time from 4 minutes per MLOC to 4
    *seconds* per MLOC. I can't deny I'm impressed.
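    The arithmetic behind that scaling (a factor of 100 from roughly
    10 kloc up to a MLOC) can be checked with a quick awk one-liner:

```shell
# Scale the ~10 kloc wall-clock times above to 1 MLOC (factor of 100).
awk 'BEGIN {
    printf "serial: %.0f min per MLOC\n", 2.391 * 100 / 60
    printf "parallel: %.1f s per MLOC\n", 0.041 * 100
}'
# prints:
#   serial: 4 min per MLOC
#   parallel: 4.1 s per MLOC
```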

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 09:02:19 2025
    On 30/10/2025 00:19, bart wrote:
    On 29/10/2025 22:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 29/10/2025 01:48, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 21:59, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/10/2025 02:35, Janis Papanagnou wrote:
    On 27.10.2025 16:11, bart wrote:
    [...]
    If speed wasn't an issue then we'd all be using easy dynamic >>>>>>>>> languages
    [...]

    Bart, is the above statement literally accurate?

    Literally as in all 8.x billion individuals on the planet, including
    infants and people in comas, would be using such languages?

    This is what you seem to be suggesting that I mean, and here you're both being overly pedantic. You could just agree with me, you know!

    'If X then we'd all be doing Y' is a common English idiom, suggesting X
    was a no-brainer.


    Do you believe that
    we would ALL be using "easy dynamic languages" if speed were not an
    issue, meaning that non-dynamic languages would die out completely?

    Yes, I believe that if dynamic languages, however they are implemented, could always deliver native code speeds, then a huge number of people,
    and companies, would switch because of that and other benefits.


    This would all be /so/ much easier if you just wrote what you meant in
    the first place. You don't need to use exaggerations and hyperbole, and
    you don't need to extrapolate your own opinions as though they apply to everyone. And it doesn't help when you write with the assumption that
    your gut feelings (with no objective information to back them up) are "no-brainers" or somehow obvious, and then you get in a fluster when
    others disagree.

    On the particular point here, would more people use "dynamic languages"
    (a somewhat vague term, but we are speaking vaguely here anyway) if
    speed were not an issue? I think if languages like Python or Javascript
    were faster, we'd see a /little/ more use of them - but not much more.
    After all, dynamic languages are already massively popular in particular fields with today's speeds. And while I doubt if anyone would complain
    if they were faster (unless the speed increase cost in other ways), they
    are apparently fast enough for a very wide range of uses.

    Of course there are situations where people have thought "Python is too
    slow for this, so I will have to use C even though I hate that
    language". But I personally do not think that will be the case for a
    "huge number of people and companies".

    Bear in mind that if that was the case, then new dynamic languages could emerge that help broaden their range of applications.


    New dynamic languages pop up regularly, and there are many ways in which
    their speed is being improved (such as JIT, or better byte compiling and better VM's, as well as language design targeting speed). But sure, new
    ones could emerge that cover different use-cases better. The same
    applies to static languages.

    Whether the speed of any /particular/ language - such as Algol 68 -
    affected its uptake, is another matter.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 11:15:22 2025
    On 30/10/2025 01:36, bart wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:

    It was certainly an issue here: the 'make' part of building CDECL and
    A68G, I considered slow for the scale of the task given that the apps
    are 68 and 78Kloc (static total of .c and .h files).


    I have no interest in A68G. I have no stake in cdecl or knowledge (or
    particular interest) in how it was written, and how appropriate the
    number of lines of code are for the task in hand. I am confident it
    could have been written in a different way with less code - but not at
    all confident that doing so would be in any way better for the author
    of the program. I am also confident that you know far too little
    about what the program can do, or why it was written the way it was,
    to judge whether it has a "reasonable" number of lines of code, or not.

    However, it's easy to look at the facts. The "src" directory from the
    github clone has about 50,000 lines of code in .c files, and 18,000
    lines of code in .h files. The total is therefore about 68 kloc of
    source. This does not at all mean that compilation processes exactly
    68 thousand lines of code - it will be significantly more than that as
    headers are included by multiple files, and lots of other headers from
    the C standard library and other libraries are included. Let's guess
    100 kloc.

    Yes, that's why I said the 'static' line counts are 68 and 78K. Maybe
    the slowdown is due to some large headers that lie outside the problem
    (not the standard headers), but so what? (That would be a shortcoming of
    the C language.)

    The A68G sources also contain lots of upper-case content, so perhaps
    macro expansion is going on too.

    The bottom line is this is an 80Kloc app that takes that long to build.


    No, the bottom line is that this program took longer to build than you expected or wanted.

    Did the build time affect whether or not you use A68G ? If not, then it
    does /not/ take too long to build, even on your system.

    Of course you might feel it takes longer than you expect, or
    frustratingly long - that's up to you, your opinions, and your expectations.



    The build process takes 8 seconds on my decade-old machine, much of
    which is something other than running the compiler. (Don't ask me
    what it is doing - I did not write this software, design its build
    process, or determine how the program is structured and how it is
    generated by yacc or related tools. This is not my area of
    expertise.) If for some strange reason I choose to run "make" rather
    than "make -j", thus wasting much of my computer's power, it takes 16
    seconds. Some of these non-compilation steps do not appear to be able
    to run in parallel, and a couple of the compilations (like "parser.c",
    which appears to be from a parser generator rather than specifically
    written) are large and take a couple of seconds to compile. My guess
    is that the actual compilations are perhaps 4 seconds. Overall, I
    make it 25 kloc per second. While I don't think that is a
    particularly relevant measure of anything useful, it does show that
    either you are measuring the wrong thing, using a wildly inappropriate
    or limited build environment, or are unaware of how to use your
    computer to build code.

    Tell me then how I should do it to get single-figure build times for a
    fresh build. But whatever it is, why doesn't it just do that anyway?!


    Try "make -j" rather than "make" to build in parallel. That is not the default mode for make, because you don't lightly change the default
    behaviour of a program that millions use regularly and have used over
    many decades. Some build setups (especially very old ones) are not
    designed to work well with parallel building, so having the "safe"
    single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things. Or get a
    cheap second-hand machine and install Linux on that - you don't need
    anything fancy. As you enjoy comparative benchmarks, the ideal would be duplicate hardware with one system running Windows, the other Linux.
    (Dual boot is a PITA, and I am not suggesting you mess up your normal
    daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too far,
    and their "disks" are very slow. If you have a Pi 5 with lots of ram,
    and use a tmpfs filesystem for the build, it can be a good deal faster.

    (And my computer cpu was about 30% busy doing other productive tasks,
    such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

      root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
      real    1m32.205s
      user    0m40.813s
      sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

      root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
      <warnings>
      real    0m49.512s
      user    0m19.033s
      sys     0m3.911s

    On the RPi4 (usually 1/3 the speed of my PC), the make-time for A68G was
    137 seconds (using SD storage; the PC uses SSD), so perhaps 40 seconds
    on the PC, suggesting that the underlying Windows file system may be
    slowing things down, but I don't know.

    However the same PC, under actual Windows, manages this:

      c:\qx>tim mm qq
      Compiling qq.m to qq.exe      (500KB but half is data; A68G is 1MB?)
      Time: 0.084

    And this:

      c:\cx>tim tcc lua.c           (250-400KB)
      Time: 0.124


    Windows is a fine system in some ways, but it has different strengths
    and weaknesses compared to Linux. There are plenty of things Windows
    handles better than Linux in a very general sense. Here, however, there
    are two things that Linux (and all *nix style OS's) does significantly
    better than Windows - it has much more efficient filesystems, especially
    when dealing with lots of files at once, and it is much more efficient
    at starting and stopping processes and running lots of processes at once.

    gcc, make, and other tools used in the build of ccdecl (again, I have
    not looked at A68G) come from a world where big tasks are broken down
    into many little tasks. When you run a "gcc" command, even just for a
    compile (without linking), it will run a number of different programs - starting and stopping multiple processes. That is cheap on Linux, but a significant overhead on Windows. They communicate with temporary files
    - cheap on Linux (they are never written to a disk), but expensive on
    Windows. Similarly, the typical C libraries on Linux are happy to use multiple files because doing so is cheap on Linux - but much more
    expensive on Windows. (A single "#include <stdio.h>" C file on my Linux system uses 20 headers, totalling 3536 lines.) There are good reasons
    for breaking things into small parts like this, for better
    maintainability, scalability, portability and flexibility. However, it
    means that these things are all slower on Windows systems.

    Software that originates in the Windows world tends to be more
    monolithic - you make one big program that does everything, you make C
    library headers that are combined to avoid extra includes, and so on. Portability and scalability don't matter so much in a monoculture, and flexibility and reuse don't matter when toolchain developers are closed companies. (By that I mean that in the *nix world, some of the headers
    will be shared across multiple different C standard libraries, different
    C compilers, different OS's, and different target architectures in any combination.)

    I am not saying that one way is "right" and the other way is "wrong" - I
    am saying they are significantly different, and this can be a reason why certain kinds of big software systems can have very different
    performance characteristics on *nix systems and Windows.


    And you claim your own tools would be 1000 times faster.

    In this case, yes. The figure is more typically around 100 if the other compiler is optimising, however that would be representations of the
    same program. A68G is somewhat bigger than my product.

    Maybe they would be. Certainly there have been tools in the past
    that are much smaller and faster than modern tools, and were useful at
    the time. Modern tools do so much more, however. A tool that doesn't
    do the job needed is of no use for a given task, even if it could
    handle other tasks quickly.

    It ran my test program; that's what counts!

    If a tool does the job you need, and does so efficiently, that's great.






    But the crux of the matter, and I can't stress this enough as it never
    seems to get through to you, is that fast enough is fast enough. No
    one cares how long cdecl takes to build.

    I don't care either; I just wanted to try it.

    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the process that needs to be fixed, or a bug, that would give benefits when
    it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?
    Could it be that these things are not actually a problem to other
    people? You have never given any indication that you are interested in identifying bottlenecks or slowdowns, and have certainly shown no
    interest in fixing them or even just reporting them to anyone of
    relevance (like the guy who wrote cdecl, or the authors of autotools, or
    the gcc developers, or whoever might be at least vaguely connected with
    the process). I am sure there are lots of people here who - if they
    bothered to build cdecl at all - might think the build took longer than
    they would have guessed. But no one else has whined about it.

    Usually when a person thinks that they are seeing something no one else
    sees, they are wrong. (Look at Olcott for an extreme example.) And if
    there had ever been a regular in comp.lang.c who was once unaware
    that there are C compilers that can compile faster than gcc, or that
    autotools is outdated and probably unnecessary in most cases, you can be
    sure they have heard your message enough times already.


    (An article posted in Reddit detailed how a small change in how Clang
    worked made a 5-7% difference in build times for large projects.

    You'd probably dismiss it as irrelevant, but lots of such improvements
    build up. At least it is good that some people are looking at such aspects.

    https://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html)


    I am very happy that people make compilers faster. For me, personally,
    the biggest benefit clang has brought to my work is that the competition
    and cooperation with gcc has encouraged improvements to gcc - functional improvements such as better static warnings, and faster compilation.

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the critical factor or the defining feature for a compiler (or toolchain in general). In my experience - /my/ experience - compile times for C code
    has never been an issue. I have never felt the urge to use a different compiler because the one I am using is too slow. I have never felt it
    made sense to use -O0 rather than -O2 (or whatever I choose as
    appropriate for the task in hand) because of compiler speed. I have
    never felt that I won't use a particular piece of software because the
    build step took too long.

    I have certainly found that it can be /nicer/ to have faster compiles or builds. I have certainly found it worth the effort to do builds
    efficiently - if I had to recompile all code for all files in my
    projects every time I made a small change, then build speed would become
    a problem. And I have occasionally done builds (such as full builds of embedded Linux systems) that take a long time - these would be
    frustrating if I had to do them regularly.

    And again, I am always glad when my tools run faster - but that does not
    mean I have a problem with them being too slow. I know you find it very difficult to understand that concept.



    Of course everyone agrees that smaller and faster is better, all
    things being equal - but all things are usually /not/ equal, and once
    something is fast enough to be acceptable, making it faster is not a
    priority.

    My compilers have already reached that threshold (most stuff builds in
    the time it takes to take my finger off the Enter button). But most mainstream compilers are a LONG way off.


    This is not a goal most compiler vendors have. When people are not particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful
    warnings or information, support for newer standards, better
    optimisation, and so on.

    Mainstream compiler vendors do care about speed - but not about the
    speed of the little C programs you write and compile. They put a huge
    amount of effort into the speed for situations where it matters, such as
    for building very large projects, or building big projects with advanced optimisations (like link-time optimisations across large numbers of
    files and modules), or working with code that is inherently slow to
    compile (like C++ code with complex templates or significant
    compile-time computation).



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 12:50:33 2025
    On 30/10/2025 05:24, Keith Thompson wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -j 5" might be worthwhile if some processes
    were I/O-bound, but that doesn't appear to be the case.

    This applies to GNU make. There are other "make" implementations
    which may or may not have a similar feature.


    Sometimes "make -j" can be problematic, yes. I don't know if newer
    versions of GNU make have got better at avoiding being too enthusiastic
    about starting jobs, but certainly if you have a project where a very
    large number of compile tasks could be started in parallel, but you
    don't have the ram to handle them all, things can go badly wrong. I've
    seen that myself too on occasion. (In the case of cdecl, there are not
    that many parallel compiles for it to be a risk, at least not on my
    machine.)

    Using "make -j $(nproc)" - or using "make -j 4" or "make -j 8" if you
    know your core count - can be a safer starting point. The ideal number
    for a given build can vary quite a lot, however. More parallel
    processes take more ram - great up to a point, but it can mean less ram
    for disk and file caching and thus slower results overall. And often
    cores are not all created equal - with SMT, half your cores might not be "real" cores, and on some processors you have a mix of fast cores and
    slow low-power cores. On my work machine with 4 "real" cores and 4 SMT
    cores, "make -j 6" is usually optimal for bigger builds. And then you
    have to consider that sometimes builds require significant other work
    than just compiling, and the ideal balance for those tasks may be
    different. Of course such fine-tuning only really matters if you are
    doing the builds a lot.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 12:07:48 2025
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    Try "make -j" rather than "make" to build in parallel. That is not the
    default mode for make, because you don't lightly change the default
    behaviour of a program that millions use regularly and have used over
    many decades. Some build setups (especially very old ones) are not
    designed to work well with parallel building, so having the "safe"
    single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things. Or get a
    cheap second-hand machine and install Linux on that - you don't need
    anything fancy. As you enjoy comparative benchmarks, the ideal would be
    duplicate hardware with one system running Windows, the other Linux.
    (Dual boot is a PITA, and I am not suggesting you mess up your normal
    daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too far,
    and their "disks" are very slow. If you have a Pi 5 with lots of ram,
    and use a tmpfs filesystem for the build, it can be a good deal faster.

    (And my computer cpu was about 30% busy doing other productive tasks,
    such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

      root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
      real    1m32.205s
      user    0m40.813s
      sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be
    interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You have to get raw compilation fast enough first.

    Suppose I had the task of transporting N people from A to B in my car,
    but I can only take four at a time and have to get them there by a
    certain time.

    One way of helping out is to use "-j": get multiple drivers with their
    own cars to transport them in parallel.

    Imagine however that my car and all those others can only go at walking
    pace: 3mph instead of 30mph. Then sure, you can recruit enough
    volunteers to get the task done in the necessary time (putting aside the practical details).

    But can you see a fundamental problem that really ought to be fixed first?


    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits
    when it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?

    Usually when a person thinks that they are seeing something no one else sees, they are wrong.

    Quite a few people have suggested that there is something amiss about my
    1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.

    You have even suggested I have manipulated the figures!

    So was I right in sensing something was off, or not?

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the critical factor or the defining feature for a compiler (or toolchain in general).

    I find fast compile-times useful for several reasons:

    *I develop whole-program compilers* This means all sources have to be
    compiled at the same time, as there is no independent compilation at the module level.

    The advantage is that I don't need the complexity of makefiles to help
    decide which dependent modules need recompiling.

    *It can allow programs to be run directly from source* This is something
    that is being explored via complex JIT approaches. But my AOT compiler
    is fast enough that this is not necessary.

    *It also allows programs to be interpreted* This is like run-from-source,
    but the compilation is faster as it can stop at the IL. (Eg. sqlite3
    compiles in 150ms instead of 250ms.)

    *It can allow whole-program optimisation* This is not something I take advantage of much yet. But it allows a simpler approach than either LTO
    or somehow figuring out how to create a one-file amalgamation.

    So it enables interesting new approaches. Imagine if you download the
    CDECL bundle and then just run it without needing to configure anything,
    or having to do 'make', or 'make -j'.

    This is a demo which runs my C compiler instead of CDECL. The C
    compiler source bundle is the file cc.ma (created using 'mm -ma cc'):

    c:\demo>dir
    30/10/2025 11:31 648,000 cc.ma
    26/09/2025 14:44 60 hello.c

    Now I run my C compiler from source:

    c:\demo>mm -r cc hello
    Compiling cc.m to cc.(run)
    Compiling hello.c to hello.exe

    Magic! Or, since 'cc' also shares the same backend as 'mm', it can also
    run stuff from source (but is limited to single file C programs):

    c:\demo>mm -r cc -r hello
    Compiling cc.m to cc.(run)
    Compiling hello.c to hello.(run)
    Hello, World!

    Forget ./configure, forget make. Of course you can do the same thing,
    maybe there is 'make -run'; the difference is that the above is instant.

    This is not a goal most compiler vendors have. When people are not particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful warnings or information, support for newer standards, better
    optimisation, and so on.

    See the post from Richard Heathfield where he is pleasantly surprised
    that he can get a 60x speedup in build-time.

    People like fast tools!

    Mainstream compiler vendors do care about speed - but not about the
    speed of the little C programs you write and compile. They put a huge amount of effort into the speed for situations where it matters, such as
    for building very large projects, or building big projects with advanced optimisations (like link-time optimisations across large numbers of
    files and modules), or working with code that is inherently slow to
    compile (like C++ code with complex templates or significant
    compile-time computation).

    I think some 90% at least of the EXE/DLL files in my Windows\System32
    folder are under 1MB in size. That would be approx 100Kloc of C, or under.

    We've seen how long programs of 1MB and 0.6MB (apparent stripped sizes
    of A68G and CDECL) can take to build. Or do those count as 'little'?

    Anyway, the approaches used to speed up compilation of smaller programs
    can also help larger ones.

    (A few years ago, my main compiler was written in my interpreted
    scripting language, so it was very slow IMV. However it was still double
    the speed of gcc -O0! While generating equally indifferent code.

    So I say something is wrong.)


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 12:56:40 2025
    On 30/10/2025 05:11, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    Because existing solutions DIDN'T EXIST in a practical form (remember I
    worked with 8-bit computers), or they were hopelessly slow and
    complicated on restricted hardware.

    I don't need a linker, I don't need a makefile, I don't need lists of
    dependencies between modules, I don't need independent compilation, I
    don't use object files.

    The generated makefile for the 49-module CDECL project is 2000 lines of
    gobbledygook; that's not really selling it to me!

    If *I* had a 49-module C project, the build info I'd supply you would
    basically be that list of files, plus the source files.

    I sometime work with 8-bit microcontrollers. More frequently I work
    with 32-bit microcontrollers of size comparable to 8-bit
    microcontrollers. One target has 4 kB RAM (plus 16 kB flash for
    storing programs). On such targets I care about program size.
    I found it convenient during development to run programs from
    RAM, so ideally program + data should fit in 4 kB. And frequently
    it fits. I have separate modules. For example, usually before
    doing anything else I need to configure the clock. Needed clock
    speed depends on program. I could use a general clock setting
    routine that can set "any" clock speed. But such routine would
    be more complicated and consequently bigger than a more specialized
    one. So I have a few versions so that each version sets a single
    clock speed and does only what is necessary for that speed. Microcontrollers contain several built-in devices; they need
    drivers. But it is almost impossible to use all devices and
    given program usually uses only a few devices. So in programs
    I just include what is needed.

    My development process is a work in progress; there are some
    things which I would like to improve. But I need to organize
    things, for which I use files. There are compiler options,
    paths to tools and libraries. In other words, there is
    essential info outside C files. I use Makefile-s to record
    this info. It is quite likely that in the future I will
    have a tool to create specialized C code from higher level
    information. In that case my dependencies will get more
    complex.

    Modern microcontrollers are quite fast compared to their
    typical tasks, so most of the time speed of code is not
    critical. But I write interrupt handlers and typically
    interrupt handler should be as fast as possible, so speed
    matters here. And as I wrote size of compiled code is
    important. So compiler that quickly generates slow and big
    code is of limited use to me. Given that files are usually
    rather small, I find gcc speed reasonable (during development
    I usually do not need to wait for compilation; it is fast
    enough).

    Certainly a better compiler is possible. But given the need to
    generate reasonably good code for several different CPUs
    (there are a few major families, and within a family there are
    variations affecting generated code) this is a big task.

    One could have a better language than C. But currently it
    seems that I will be able to get the features that I want by
    generating code. Of course, if you look at the whole toolchain
    and development process this is much more complicated than a
    specialized compiler for a specialized language. But creating
    a whole environment with the features that I want is a big task.
    By using gcc I reduce the amount of work that _I_ need to do.
    I wrote several pieces of code that are available in existing
    libraries (because I wanted a smaller specialized
    version), so I probably do more work than the typical developer.
    But life is finite, so one needs to choose what is worth
    (re)doing as opposed to reusing existing code.

    BTW: Using the usual recipes frequently gives much bigger programs;
    for example, a program blinking an LED (the embedded equivalent of
    "Hello world") may take 20-30 kB (with my approach it is
    552 bytes, most of which is essentially forced by the MCU
    architecture).

    So, gcc and make _I_ find useful. For microcontroller
    projects I currently do not need 'configure' and related
    machinery, but I do not exclude that in the future.

    Note that while I am developing programs, my focus is on
    providing a library and a development process. That is,
    a potential user is supposed to write code which should
    integrate with code that I wrote. So I need either
    some amalgamation at source level or linking. ATM linking
    works better. So I need linking, in the sense that if
    I were forbidden to use linking, I would have to develop
    some replacement, and that could be substantial work and
    inconvenience; for example, textual amalgamation would
    increase the build time from rather satisfactory now to
    a probably noticeable delay.


    My background is unusual. I started off in hardware, and developed a
    small language and tools to help with my job as test and development
    engineer, something done on the side.

    Those tools evolved, and I got used to creating my own solutions, ones
    that were very productive compared to the (expensive and slow) compilers
    that were available then.

    Linking existed, in the form of a 'loader' program that combined
    multiple object files into one executable; a trivial task IMO, but other people's linkers seemed to make a big deal of it (they still do!).

    I didn't use makefiles: I had a crude IDE which used a project file,
    listing my source modules. So the IDE already knew which files
    needed to be submitted for compilation, on the occasions I needed to
    compile everything.

    I was also familiar enough with my projects to know when I only needed to recompile one module. In any case, compilation was quite fast even
    on the early 80s home and business computers I used (and used to help design!).

    I only use linking now for my C compiler, but that task is done within
    my assembler; there are no object files.

    My main language uses a whole-program compiler so linking is not
    relevant. External libraries are accessed dynamically only.

    When I wrote commercial apps, where users wanted to add their own
    content, I provided a scripting language for that. Developing add-ons
    was done within the running application.

    Now, if someone wanted to statically link native code from my compiler
    into their program, or vice versa, I can generate object files in
    standard format.

    Then a normal linker is used, but *they* are using the linker; not me!

    There are other solutions too: others can create libraries that are then
    used via runtime dynamic linking. I also have facilities within my backend to generate executable code in memory, which could be made available as a library to user programs.

    In short, there are lots of alternatives when you are not limited to traditional tools, but you may have to write them yourself. For most
    people, that is not feasible or practical: they will already be
    heavily invested in dependencies, and it cannot be done overnight anyway.

    But in my case it allows me to truthfully say:

    I don't need a linker, I don't need a makefile, I don't need lists of dependencies between modules, I don't need independent compilation, I
    don't use object files.

    However ... I still believe that the build process for lots of C
    programs, for when a user needs to compile a working program, can be
    vastly simplified. That means makefiles at least are not needed.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Thu Oct 30 14:13:53 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. The sum of the second and third lines above gives the CPU time.
    The real time is twice as large, so something is slowing things down.
    One possible trouble is having too little RAM; then the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, which
    can be slow on a spinning disc, but SSDs are usually much
    faster at random I/O.

    Assuming that you have enough RAM, you should try at least using
    'make -j 3', that is, allow make to use up to 3 jobs. I wrote
    "at least" because AFAIK the cheapest PC CPUs of a reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project
    that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 14:32:36 2025
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. The sum of the second and third lines above gives the CPU time.
    The real time is twice as large, so something is slowing things down.
    One possible trouble is having too little RAM; then the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, which
    can be slow on a spinning disc, but SSDs are usually much
    faster at random I/O.

    Assuming that you have enough RAM, you should try at least using
    'make -j 3', that is, allow make to use up to 3 jobs. I wrote
    "at least" because AFAIK the cheapest PC CPUs of a reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries? That will give me an idea of
    the true LoC for the project.

    How many source files (can include headers) does it involve? How many
    binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per
    second, my compiler could produce 40-70GB of binary code in that time, although not in one file due to memory. I guess the size is somewhat
    smaller than that.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Thu Oct 30 16:41:49 2025
    On Thu, 30 Oct 2025 07:45:15 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 30/10/2025 04:24, Keith Thompson wrote:
    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    Well, let's see, on approximately 10,000 lines of code:

    $ make clean
    $time make

    real 0m2.391s
    user 0m2.076s
    sys 0m0.286s

    $ make clean
    $time make -j $(nproc)

    real 0m0.041s
    user 0m0.021s
    sys 0m0.029s

    That's a reduction in wall clock time of 4 minutes per MLOC to 4
    *seconds* per MLOC. I can't deny I'm impressed.


    Something is wrong here.
    Most likely you compared a "cold" build vs a "hot" build.
    Or your 'make clean' failed to clean the majority of objects.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 16:04:51 2025
    On 30/10/2025 13:07, bart wrote:
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    Try "make -j" rather than "make" to build in parallel. That is not
    the default mode for make, because you don't lightly change the
    default behaviour of a program that millions use regularly and have
    used over many decades. Some build setups (especially very old ones)
    are not designed to work well with parallel building, so having the
    "safe" single task build as the default for make is a good idea.

    I would also, of course, recommend Linux for these things. Or get a
    cheap second-hand machine and install Linux on that - you don't need
    anything fancy. As you enjoy comparative benchmarks, the ideal would
    be duplicate hardware with one system running Windows, the other
    Linux. (Dual boot is a PITA, and I am not suggesting you mess up your
    normal daily use system.)

    Raspberry Pi's are great for lots of things, but they are not fast for
    building software - most models have too little memory to support all
    the cores in big parallel builds, they can overheat when pushed too
    far, and their "disks" are very slow. If you have a Pi 5 with lots of
    ram, and use a tmpfs filesystem for the build, it can be a good deal
    faster.

    (And my computer cpu was about 30% busy doing other productive
    tasks, such as playing a game, while I was doing those builds.)


    So, you are exaggerating, mismeasuring or misusing your system to
    get build times that are well over an order of magnitude worse than
    expected.˙ This follows your well-established practice.

    So, what exactly did I do wrong here (for A68G):

    root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
    real    1m32.205s
    user    0m40.813s
    sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd
    be interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    You presumably understand how multi-tasking works when there are more processes than there are cores to run them. Sometimes you have more
    processes ready to run, in which case some have to wait. But sometimes processes are already waiting for something else (typically disk I/O
    here, but it could be networking or other things). So while one compile
    task is waiting for the disk, another one can be running. It's not
    common for the speedup from "make -j" or "make -j N" for some number N
    to be greater than the number of cores, but it can happen for small
    numbers of cores and slow disk.


    However, I don't view "-j", and parallelisation, as a solution to slow compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration or issues that people might have from compilers being slower than they
    might want. "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why? And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs. If it were
    faster, I would still use make. If it ran at 1 MLOC per second, I'd
    still use make, and I'd still structure my code the same way, and I'd
    still run on Linux. I would be happy to see gcc run at that speed, but
    it would not change how I work.


    Suppose I had the task of transporting N people from A to B in my car,
    but I can only take four at a time and have to get them there by a
    certain time.

    One way of helping out is to use "-j": get multiple drivers with their
    own cars to transport them in parallel.

    Imagine however that my car and all those others can only go at walking pace: 3mph instead of 30mph. Then sure, you can recruit enough
    volunteers to get the task done in the necessary time (putting aside the practical details).

    But can you see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic. But a more accurate model is that the
    cars go at 30 mph - the people will all get there safely, comfortably
    and in a reasonable time, and if there are lots of people you can scale
    by using more cars in parallel so that the real-world time taken is not
    much different. Your alternative is an electric scooter trimmed to go
    at 600 mph. Yes, it is faster for an individual, but is it really
    /better/? I'm sure we'd all be pleased if the car went at 60 mph rather
    than 30 mph, but the speed of the vehicle is not the only thing that
    affects the throughput of your transport system.

    There is no logical reason to focus solely on speed of one individual
    part of a large process when there are other ways to improve the speed
    of the process as a whole.



    But I pick up things that nobody else seems to: this particular build
    was unusually slow; why was that? Perhaps there's a bottleneck in the
    process that needs to be fixed, or a bug, that would give benefits
    when it does matter.

    Do you think there is a reason why /you/ get fixated on these things,
    and no one else in this group appears to be particularly bothered?

    Usually when a person thinks that they are seeing something no one
    else sees, they are wrong.

    Quite a few people have suggested that there is something amiss about my 1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.


    Maybe there /is/ something wrong with your machine or setup. If you
    have a 2 core machine, it is presumably a low-end budget machine from
    perhaps 15 years ago. I'm all in favour of keeping working systems and
    I strongly disapprove of some people's two or three year cycles for
    swapping out computers, but there is a balance somewhere. With such an
    old system, I presume you also have old Windows (my office Windows
    machine is Windows 7), and thus the old and very slow style of WSL.
    That, I think, could explain the oddities in your timings.

    You have even suggested I have manipulated the figures!

    No, I did not. I have at various times suggested that you cherry-pick,
    that you might have poor methodology and that you sometimes benchmark in
    an unrealistic way in order to give yourself a bigger windmill for your tilting. (Timing a build on an old slow WSL layer on Windows on old
    slow hardware is an example of this - the typical user who would compile something like cdecl from source will be using some flavour of *nix and
    a computer suitable for software development.)


    So was I right in sensing something was off, or not?


    You were wrong in thinking something was off about cdecl or its build.
    And it should not be news to you that there is something very suboptimal
    about your computer environment, as this is not exactly the first time
    it has been discussed.

    And I fully understand that build times for large projects are
    important, especially during development.

    But I do not share your obsession that compile and build times are the
    critical factor or the defining feature for a compiler (or toolchain
    in general).

    I find fast compile-times useful for several reasons:

    Everyone who compiles code finds faster compile times nicer than slower compile times. That is not the point. The issue is about fast /enough/ compiles, and fast /enough/ builds.

    But of course I am quite happy to accept that fast compile times are
    important to you - your preferences and opinions are your own. The
    issue is that you can't accept other people have different priorities
    and experiences.


    *I develop whole-program compilers* This means all sources have to be compiled at the same time, as there is no independent compilation at the module level.

    OK. I have sometimes used whole-program compilation. It is naturally
    slower, but is helped by good tools (such as toolchains that support
    so-called "link-time optimisation"). And improving the speed of LTO - particularly by improving the parallelisation of the task across
    multiple cores - is a key focus for gcc and clang/llvm for speed.


    The advantage is that I don't need the complexity of makefiles to help decide which dependent modules need recompiling.

    People use make for many reasons - incremental building and dependency management is just one (albeit important) aspect. You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles. And honestly, if you had taken 1% of
    the time and effort you have spent complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your sleep.
    It really is not that hard, and you will never convince me you are not
    smart enough to understand it quickly and easily.


    *It can allow programs to be run directly from source* This is something that is being explored via complex JIT approaches. But my AOT compiler
    is fast enough that this is not necessary.

    I don't see why that is at all important for C programming. Why would someone want to use C for scripting? If I had a C file "test.c" that
    was short enough to be realistic for use as a script, and did not care
    about optimisation or static checking, I could just type "make test &&
    ./test" to run it pretty much instantly.


    *It also allows programs to be interpreted* This is like run-from-source,
    but the compilation is faster as it can stop at the IL. (Eg. sqlite3 compiles in 150ms instead of 250ms.)

    Faster compiles do not change anything fundamental about a language.
    They do not mean that C programs are interpreted, they mean that C
    programs compile faster.


    *It can allow whole-program optimisation* This is not something I take advantage of much yet. But it allows a simpler approach than either LTO
    or somehow figuring out how to create a one-file amalgamation.


    I can fully appreciate that as a compiler /writer/, you want a simpler
    system than LTO. As a compiler /user/, like the vast majority of
    programmers, I don't really care how complicated the compiler is. That
    is someone else's job.

    So it enables interesting new approaches. Imagine if you download the
    CDECL bundle and then just run it without needing to configure anything,
    or having to do 'make', or 'make -j'.

    Almost everyone who uses cdecl does that already. Enthusiasts living on
    the cutting edge need to spend a couple of minutes downloading and
    building the latest versions, but other people will use pre-built
    binaries. And those people are already very familiar with the
    "./configure && make -j 8 && sudo make install" sequence.


    Forget ./configure, forget make. Of course you can do the same thing,
    maybe there is 'make -run'; the difference is that the above is instant.

    To be clear - I do think autotools is usually unnecessary, overly
    complex, slow, and long outdated. There are some kinds of projects
    where it could be a definite benefit - typically those for which there
    are a lot of configuration options that people might want in their
    builds, and it gives a lot of them out of the box. But I think there's
    a lot of potential at least for skipping almost all ./configure tests on almost all systems without losing the advantages and features of
    autotools. However, it's up to the project authors to decide if they
    want to use autotools or not, and the cost of ten seconds of my time
    does not bother me here.


    This is not a goal most compiler vendors have. When people are not
    particularly bothered about the speed of compilation for their files,
    the speed is good enough - people are more interested in other things.
    They are more interested in features like better checks, more helpful
    warnings or information, support for newer standards, better
    optimisation, and so on.

    See the post from Richard Heathfield where he is pleasantly surprised
    that he can get a 60x speedup in build-time.


    There were no details in that post - I suspect it was not /entirely/
    serious.

    People like fast tools!

    Sure. I haven't seen anyone suggest otherwise.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Thu Oct 30 16:22:57 2025
    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make >output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. The sum of the second and third lines above gives the CPU time.
    The real time is twice as large, so something is slowing things down.
    One possible trouble is having too little RAM; then the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, which
    can be slow on a spinning disc, but SSDs are usually much
    faster at random I/O.

    Assuming that you have enough RAM, you should try at least using
    'make -j 3', that is, allow make to use up to 3 jobs. I wrote
    "at least" because AFAIK the cheapest PC CPUs of a reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so


    That will give me an idea of
    the true LoC for the project.

    There is really no relationship between SLoC and binary size.

    There are about 16 million SLOC (it's been a while since I
    last ran sloccount against this codebase).

    $ sloccount .
    Totals grouped by language (dominant language first):
    ansic: 11905053 (72.22%)
    python: 2506984 (15.21%)
    cpp: 1922112 (11.66%)
    tcl: 87725 (0.53%)
    asm: 42745 (0.26%)
    sh: 14333 (0.09%)

    Total Physical Source Lines of Code (SLOC) = 16,484,351
    Development Effort Estimate, Person-Years (Person-Months) = 5,357.42 (64,289.00)
    (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
    Schedule Estimate, Years (Months) = 13.99 (167.89)
    (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
    Estimated Average Number of Developers (Effort/Schedule) = 382.92
    Total Estimated Cost to Develop = $ 723,714,160
    (average salary = $56,286/year, overhead = 2.40).
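
    The effort and schedule figures in the report above follow directly from
    the two Basic COCOMO formulas sloccount prints. A quick C sanity check (a
    sketch only; compile with -lm) reproduces them from the SLOC total:

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Basic COCOMO, as printed by sloccount: KSLOC in, person-months out. */
    static double cocomo_person_months(double ksloc)
    {
        return 2.4 * pow(ksloc, 1.05);
    }

    /* Schedule in calendar months from person-months. */
    static double cocomo_schedule_months(double pm)
    {
        return 2.5 * pow(pm, 0.38);
    }

    int main(void)
    {
        double pm = cocomo_person_months(16484.351);   /* 16,484,351 SLOC */
        double sched = cocomo_schedule_months(pm);

        printf("person-months   = %.2f (report: 64,289.00)\n", pm);
        printf("schedule months = %.2f (report: 167.89)\n", sched);
        printf("avg developers  = %.2f (report: 382.92)\n", pm / sched);
        return 0;
    }
    ```

    The computed values agree with the report to within rounding.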

    The bulk of the ANSI C code is header files generated from
    YAML, likewise most of the python code (used for unit testing).
    The primary functionality is in the C++ (cpp) code.
    The application is highly multithreaded (circa 100 threads in
    an average run).


    How many source files (can include headers) does it involve? How many binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per second, my compiler could produce 40-70GB of binary code in that time,

    That's a completely irrelevant metric.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Richard Tobin@3:633/10 to All on Thu Oct 30 16:26:37 2025
    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ make -j a
    make: *** No rule to make target 'a'. Stop.
    $ make -j 3
    make: *** No targets specified and no makefile found. Stop.
    $ make 3
    cc 3.c -o 3

    That's a really bad idea.

    -- Richard

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Thu Oct 30 18:30:01 2025
    On Thu, 30 Oct 2025 16:04:51 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 30/10/2025 13:07, bart wrote:


    OK, "make -j" gave a real time of 30s, about three times faster.
    (Not quite sure how that works, given that my machine has only two
    cores.)

    You presumably understand how multi-tasking works when there are more processes than there are cores to run them. Sometimes you have more processes ready to run, in which case some have to wait. But
    sometimes processes are already waiting for something else (typically
    disk I/O here, but it could be networking or other things). So while
    one compile task is waiting for the disk, another one can be running.
    It's not common for the speedup from "make -j" or "make -j N" for
    some number N to be greater than the number of cores, but it can
    happen for small numbers of cores and slow disk.


    It *can* give much higher speedup than the number of cores.
    Measurements taken at relatively small MCU project: 33 modules,
    size:
    text data bss dec hex filename
    26953 156 28028 55137 d761

    Compiled on my corporate desktop.
    Good hardware (Intel i7-17700, 8 P cores, 12 E cores, 28 logical CPUs, competent SSD : Samsung PM9F1).
    Bad software environment - very aggressive antivirus + 2 other
    "management" crapware agents.

    msys2, arm-none-eabi-gcc 13.3.0

    2nd column: execution time with all cores enabled.
    3rd column: execution time with compilation locked to single
    logical CPU (P-core).
    4th column: execution time with compilation locked to single
    logical CPU (E-core).

    flags tm-all tm-one-P tm-one-E
    none 0m20.689s 0m21.162s 0m44.608s
    -j 2 0m9.464s 0m11.199s 0m34.154s
    -j 3 0m6.855s 0m8.695s
    -j 4 0m4.970s 0m7.992s 0m21.895s
    -j 5 0m4.429s 0m7.632s
    -j 6 0m4.016s 0m7.340s
    -j 7 0m3.766s 0m7.296s
    -j 8 0m3.564s 0m7.248s
    -j 9 0m3.439s 0m7.245s 0m20.323s
    -j 10 0m3.562s 0m7.324s
    -j 28 0m3.741s 0m7.295s
    -j 33 0m3.623s 0m7.128s 0m18.098s
    -j 0m3.843s 0m7.187s 0m19.365s

    So, on P-core I see almost 3x speed up from simultaneity even with no
    actual parallelism.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Thu Oct 30 17:30:55 2025
    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    $ man 1 make


    https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/V1_chap12.html
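
    The POSIX getopt(3) semantics Scott refers to can be sketched in a few
    lines: with "j:" in the option string, -j always consumes the next word
    as its argument, so there is no optional form and no ambiguity with
    targets (this is an illustrative sketch, not make's source):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Parse "-j N" with POSIX getopt: "j:" means -j REQUIRES an
       argument, so the next word is always taken as N. */
    static int jobs_from_args(int argc, char **argv)
    {
        int jobs = 0, opt;
        optind = 1;          /* reset getopt so the function is reusable */
        while ((opt = getopt(argc, argv, "j:")) != -1) {
            if (opt == 'j')
                jobs = atoi(optarg);
        }
        return jobs;
    }

    int main(void)
    {
        char *a1[] = {"make", "-j", "3", NULL};
        char *a2[] = {"make", "-j3", NULL};
        assert(jobs_from_args(3, a1) == 3);   /* separate argument */
        assert(jobs_from_args(2, a2) == 3);   /* attached argument */
        puts("ok");
        return 0;
    }
    ```

    Under these rules "make -j target" would take "target" as the job
    count, which is exactly the behaviour GNU make's heuristic departs from.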

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 17:40:01 2025
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:

    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make > output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then the OS is swapping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so


    That will give me an idea of
    the true LoC for the project.

    There is really no relationship between SLoC and binary size.

    Yes, there is: a rule of thumb for x64 is 10 bytes of code per line of C source. But disproportionate use of header files may affect that.


    There are about 16 million SLOC (it's been a while since I
    last ran sloccount against this codebase).

    $ sloccount .
    Totals grouped by language (dominant language first):
    ansic: 11905053 (72.22%)
    python: 2506984 (15.21%)
    cpp: 1922112 (11.66%)
    tcl: 87725 (0.53%)
    asm: 42745 (0.26%)
    sh: 14333 (0.09%)

    Total Physical Source Lines of Code (SLOC) = 16,484,351
    Development Effort Estimate, Person-Years (Person-Months) = 5,357.42 (64,289.00)
    (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
    Schedule Estimate, Years (Months) = 13.99 (167.89)
    (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
    Estimated Average Number of Developers (Effort/Schedule) = 382.92
    Total Estimated Cost to Develop = $ 723,714,160
    (average salary = $56,286/year, overhead = 2.40).

    The bulk of the ANSI C code is header files generated from
    YAML, likewise most of the python code (used for unit testing).
    The primary functionality is in the C++ (cpp) code.
    The application is highly multithreaded (circa 100 threads in
    an average run).


    How many source files (can include headers) does it involve? How many
    binaries does it actually produce?

    $ time make -s -j96
    real 9m10.38s
    user 3h50m15.59s
    sys 9m58.20s

    I'd challenge Bart to match that with a similarly sized project using
    his compiler and toolset, but I seriously doubt that this project could
    be effectively implemented using his personal language and toolset.

    If what you are asking is how my toolset can cope with a project on this
    scale, then I can have a go at emulating it, given the information above.

    I can tell you that over 4 hours, and working at generating 3-5MB per
    second, my compiler could produce 40-70GB of binary code in that time,

    That's a completely irrelevant metric.


    For me it is entirely relevant, as the tools I use are linear. If my car averages 60mph, then after 4 hours I expect to do 240 miles.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 17:49:31 2025
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this century is a "workaround". Using a decent OS is a "workaround". Is /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    They in fact all come across as excuses for your favorite compiler being
    slow.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?


    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration or issues that people might have from compilers being slower than they
    might want. "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why?˙ And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs.

    Not for mine, sorry.

    If it were
    faster, I would still use make. If it ran at 1 MLOC per second, I'd
    still use make, and I'd still structure my code the same way, and I'd
    still run on Linux.

    If it ran 1Mlps, then half of make would be pointless.

    However, with C, it would run into other problems, like heavy include
    files, which would normally be repeatedly processed per-module. (This is something my language solves, but I also suggested, elsewhere in the
    thread, a way it could be mitigated in C.)

    But can you see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic.˙ But a more accurate model is that the
    cars go at 30 mph
    No, I contend that big compilers do seem to go at 3mph, or worse.

    We can argue about how much extra work your compilers do than mine, so
    let's look at a slightly different tool: assemblers.

    Assembly is a straightforward task: there is no deep analysis, no optimisation, so it should be very quick, yes? Well, have a look at this
    survey I did from a couple of years ago:

    https://www.reddit.com/r/Compilers/comments/1c41y6d/assembler_survey/

    There are quite a range of speeds! So what are those slow products up to
    that take so long?

    People use make for many reasons - incremental building and dependency management is just one (albeit important) aspect.˙ You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles.

    Makefiles sound to me like your 'hammer' then.

    And honestly, if you had taken 1% of
    the time and effort you have spend complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your sleep.
    It really is not that hard, and you will never convince me you are not
    smart enough to understand it quickly and easily.

    I simply don't like them; sorry. Everything they might do, is taken care
    of by language design, or by my compiler, or by scripting in a proper scripting language.

    And they are ugly.


    *It can allow programs to be run directly from source* This is
    something that is being explored via complex JIT approaches. But my
    AOT compiler is fast enough that that is not necessary

    I don't see why that is at all important for C programming. Why would someone want to use C for scripting? If I had a C file "test.c" that
    was short enough to be realistic for use as a script, and did not care
    about optimisation or static checking, I could just type "make test
    && ./test" to run it pretty much instantly.

    By 'scripting' people have certain expectations. Here is my example of C
    run like a script:

    c:\cx>cs sql
    SQLite version 3.25.3/MCC 2018-11-05 20:37:38
    Enter ".help" for usage hints.
    Connected to a transient in-memory database.
    Use ".open FILENAME" to reopen on a persistent database.
    sqlite>

    Here, there is 1/4 second delay as it compiles sql.c (some 250Kloc), so
    a bit heavy for scripting. But another option is:

    c:\cx>ci sql
    SQLite version 3.25.3/MCC 2018-11-05 20:37:38
    ...

    'ci' will interpret from source, and 'cs' will run from source as native
    code. (ci/cs are the same EXE with a different name. The compiler looks
    at the name to apply different default options, eg. -r -q for 'cs'.)

    So, there is little start-up delay; there is no discernible build-step;
    there is no unreasonable limit on size; there are no messy files left
    lying around; no files are written so could run on read-only media; for
    C, can run at native-code speeds if possible.

    Otherwise we would have had 'scripting' for C forever, if your
    definition of it is simply being able to invoke a program on the same
    line that you've just built it!

    But I accept that using a 'shebang' line, plus the use of tcc, will work
    in many cases.
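
    For readers who haven't tried it, that shebang-plus-tcc approach works
    roughly like this (a sketch, assuming tcc is installed at the path shown):

    ```c
    /* Sketch: with tcc installed, a C file can be run script-style by
       putting this line (uncommented) at the very top of the file:

           #!/usr/bin/tcc -run

       then "chmod +x hello.c" and "./hello.c" compiles and runs it in one
       step.  tcc treats a leading "#!" line as a comment; gcc and clang
       reject it, which is why the line is quoted here rather than used. */
    #include <stdio.h>

    /* The message, factored into a function so it is easy to check. */
    static const char *msg(void)
    {
        return "hello from a C 'script'";
    }

    int main(void)
    {
        puts(msg());
        return 0;
    }
    ```

    The same file still builds normally with any C compiler once the shebang
    line is removed, so nothing is lost by supporting both styles.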


    Almost everyone who uses cdecl does that already. Enthusiasts living on
    the cutting edge need to spend a couple of minutes downloading and
    building the latest versions, but other people will use pre-built
    binaries. And those people are already very familiar with the
    "./configure && make -j 8 && sudo make install" sequence.

    This is all Unix-Linux specific. There are other ways of building
    programs. I've used some of those over the course of some 49 years.


    Forget ./configure, forget make. Of course you can do the same thing,
    maybe there is 'make -run', the difference is that the above is instant.

    To be clear - I do think autotools is usually unnecessary, overly
    complex, slow, and long outdated.

    What?!

    After being accused of baseless moaning, you now also agree that
    something might be pointlessly slow?!

    What about the argument that 'you only have to run it once'?


    There were no details in that post - I suspect it was not /entirely/ serious.

    He wouldn't have made up the figures, but someone said they may have
    been erroneous.




    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Richard Tobin@3:633/10 to All on Thu Oct 30 18:29:25 2025
    In article <jhNMQ.1338175$Jgh9.1030888@fx15.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    The upstream articles refer to Gnu make, which evidently does not
    conform to that.

    -- Richard

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Thu Oct 30 18:37:27 2025
    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <jhNMQ.1338175$Jgh9.1030888@fx15.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ man 3 getopt

    Standard unix semantics since, well, forever. 'j' with
    no argument is an error.

    The upstream articles refer to Gnu make, which evidently does not
    conform to that.

    Yes, unfortunately the GNU people totally screwed up
    the option rules. Particularly with word options rather than
    simple single letters. If a utility requires more
    than 52 options, it should be split into multiple utilities.

    Then there are the application programmers with a
    Windows background who never learned the rules.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Thu Oct 30 18:59:23 2025
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a
    "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this
    century is a "workaround". Using a decent OS is a "workaround". Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    If you had /those/ exact compilers today, and used them for even a pretty
    large project, you could likely do a full rebuild every time.

    But incremental building didn't go away because we already had it,
    and we took that into account when maintaining compilers.

    Basically, decades ago, we accepted the idea that it can take several
    seconds to compile the average file, and that we have incremental
    building to help with that.

    And so, unsurprisingly, as machines got several orders of magnitude
    faster, people we have made compilers do more and become more bloated,
    so that it can still take seconds to do one file, and you use make to
    avoid doing it.

    A lot of it is the optimization. Disable optimization and GCC is
    something like 15X faster.

    Optimization exhibits diminishing returns. It takes more and more
    work for less and less gain. It's really easy to make optimization
    take 10X longer for a fraction of a percent increase in speed.

    Yet, it tends to be done because of the reasoning that the program is
    compiled once, and then millions of instances of the program are run
    all over the world.

    One problem in optimization is that it is expensive to look for the
    conditions that enable a certain optimization. It is more expensive
    than doing the optimization, because the optimization is often
    a conceptually simple code transformation that can be done quickly,
    when the conditions are identified. But the compiler has to look for those conditions everywhere, in every segment of code, every basic block.
    But it may turn out that there is a "hit" for those conditions in
    something like one file out of every hundred, or even more rarely.

    When there is no "hit" for the optimization's conditions, then it
    doesn't take place, and all that time spent looking for it is just
    making the compiler slower.

    The problem is that to get the best possible optimization, you have to
    look for numerous such rare conditions. When one of them doesn't "hit",
    one of the others might. The costs of these add up. Over time,
    compiler developers tend to add optimizations much more than remove them.

    They in fact all come across as excuses for your favorite compiler being slow.

    Well, yes. Since we've had incremental rebuilding since the time VLSI
    machines were measured in single digit Mhz, we've taken it for granted
    that it will be used and so, to reiterate, that excuses the idea of
    a compiler taking several seconds to do one file.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?

    It would be a lie to measure lines per second on anything but
    a single-core, complete rebuild of the benchmark program.

    High LPS compilers are somehow not winning in the programming
    marketplace, or at least some segments.

    That field is open!

    Once upon a time it seemed that GCC would remain unchallenged. Then
    Clang came along: but it too got huge, fat and slow within a bunch of
    years. This is mainly due to trying to have good optimizations.

    You will never get a C compiler that has very high LPS throughput, but
    doesn't optimize as well as the "leading brand", to make inroads into
    the ecosystem dominated by the "leading brand".

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Oct 30 13:21:39 2025
    richard@cogsci.ed.ac.uk (Richard Tobin) writes:
    In article <10dv52b$3gq3j$1@dont-email.me>,
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    $time make -j $(nproc)

    Eww. How does make distinguish between j with an argument and
    j with no argument and a target?

    $ make -j a
    make: *** No rule to make target 'a'. Stop.
    $ make -j 3
    make: *** No targets specified and no makefile found. Stop.
    $ make 3
    cc 3.c -o 3

    That's a really bad idea.

    Meh.

    The data structure that defines the '-j' option in the GNU make
    source is:

    static struct command_switch switches[] =
    {
    // ...
    { 'j', positive_int, &arg_job_slots, 1, 1, 0, 0, &inf_jobs, &default_job_slots,
    "jobs", 0 },
    //...
    };

    Yes, it's odd that "-j" may or may not be followed by an argument.
    The way it works is that if the following argument exists and is
    (a string representing) a positive integer, it's taken as "-j N",
    otherwise it's taken as just "-j".

    A make argument that's not an option is called a "target"; for
    example in "make -j 4 foo", "foo" is the target. A target whose name
    is a positive integer is rare enough that the potential ambiguity
    is almost never an issue. If it is, you can use the long form:
    "make --jobs" or "make --jobs=N".

    I think it would have been cleaner if the argument to "-j" had
    been mandatory, with an argument of "0", "-1", or "max" having
    some special meaning. But changing it could break existing scripts
    that invoke "make -j" (though as I've written elsethread, "make -j"
    can cause problems).

    It would also have been nice if the "make -j $(nproc)" functionality
    had been built into make.

    The existing behavior is a bit messy, but it works, and I've never
    run into any actual problems with the way the options are parsed.
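
    The heuristic Keith describes - consume the next word as N only when it
    parses as a positive integer - is easy to sketch. This is an
    illustration of the observed behaviour, not GNU make's actual code:

    ```c
    /* Sketch of GNU make's "-j" argument heuristic: "-j" alone means
       unlimited jobs; "-j N" applies only when the next word is a
       positive integer, otherwise that word is treated as a target. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns -1 if -j is absent, 0 for "-j" (unlimited), or N > 0. */
    static int parse_jobs(int argc, char **argv)
    {
        int jobs = -1;
        for (int i = 1; i < argc; i++) {
            if (strcmp(argv[i], "-j") == 0) {
                jobs = 0;                        /* "-j" alone: unlimited */
                /* Peek at the next word: consume it as N only if it is
                   a positive integer; otherwise leave it as a target. */
                if (i + 1 < argc) {
                    char *end;
                    long n = strtol(argv[i + 1], &end, 10);
                    if (*end == '\0' && n > 0) {
                        jobs = (int)n;
                        i++;
                    }
                }
            }
        }
        return jobs;
    }

    int main(void)
    {
        char *a1[] = {"make", "-j", "3"};
        char *a2[] = {"make", "-j", "target"};
        char *a3[] = {"make", "target"};
        assert(parse_jobs(3, a1) == 3);    /* "-j 3": three jobs      */
        assert(parse_jobs(3, a2) == 0);    /* "-j target": unlimited  */
        assert(parse_jobs(2, a3) == -1);   /* no -j at all            */
        puts("ok");
        return 0;
    }
    ```

    This also shows why a target named "3" is ambiguous: it parses as a job
    count, and only the long form "--jobs=N" avoids the collision.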

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Oct 30 13:37:41 2025
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.
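
    A hypothetical makefile fragment illustrates that failure mode: the
    header is generated by a rule, but the object file's rule does not list
    it as a prerequisite (the file names and generate-header.sh are made up
    for illustration):

    ```make
    # Serial make happens to build gen.h (listed first) before main.o,
    # so plain "make" works; "make -j2" may start both rules at once
    # and compile main.c before gen.h exists.
    all: gen.h prog

    prog: main.o
    	$(CC) -o $@ main.o

    gen.h:
    	./generate-header.sh > $@    # hypothetical generator script

    # BUG: this rule should also list gen.h as a prerequisite.
    main.o: main.c
    	$(CC) -c main.c
    ```

    Generated dependency files (e.g. from the compiler's -MMD output) would
    add the missing gen.h prerequisite automatically, which is why generated
    makefiles tend to survive "-j N" better than hand-written ones.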

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 23:01:33 2025
    On 30/10/2025 18:49, bart wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a
    "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from
    this century is a "workaround". Using a decent OS is a "workaround".
    Is /everything/ that would reduce your scope for complaining loudly
    to the wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.
    They in fact all come across as excuses for your favorite compiler being slow.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?


    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric - as useless as measuring developer performance on the lines of code he/she writes per day.


    Of course this kind of thing does not change the fundamental speed of
    the compiler, but it is very much a solution to problems, frustration
    or issues that people might have from compilers being slower than they
    might want. "make -j" does not make the compiler faster, but it does
    mean that the speed of the compiler is less of an issue.


    You have to get raw compilation fast enough first.

    Why? And - again - the "raw" compilation of gcc on C code, for my
    usage, is already more than fast enough for my needs.

    Not for mine, sorry.

    OK. I realise that's how you feel.


    If it were faster, I would still use make. If it ran at 1 MLOC per
    second, I'd still use make, and I'd still structure my code the same
    way, and I'd still run on Linux.

    If it ran 1Mlps, then half of make would be pointless.

    If gcc ran at 1 Mlps, the developers would be doing something wrong -
    there are optimisations already understood that could give significant benefits to generated code but are impractical to implement or use
    because they scale badly and become too slow in practice. It would be
    better to prioritise these than meaningless speeds.


    However, with C, it would run into other problems, like heavy include
    files, which would normally be repeatedly processed per-module. (This is something my language solves, but I also suggested, elsewhere in the
    thread, a way it could be mitigated in C.)


    No method of avoiding headers has been found to be worth the effort in
    C. In C++, it's a different matter, and one of the key motivators for
    the development of C++ modules is build times.

    But can you see a fundamental problem that really ought to be fixed
    first?

    Sure - if that were realistic.˙ But a more accurate model is that the
    cars go at 30 mph
    No, I contend that big compilers do seem to go at 3mph, or worse.

    We can argue about how much extra work your compilers do than mine, so
    let's look at a slightly different tool: assemblers.

    Assembly is a straightforward task: there is no deep analysis, no optimisation, so it should be very quick, yes? Well, have a look at this
    survey I did from a couple of years ago:

    https://www.reddit.com/r/Compilers/comments/1c41y6d/assembler_survey/

    There are quite a range of speeds! So what are those slow products up to that take so long?

    People use make for many reasons - incremental building and dependency
    management is just one (albeit important) aspect. You mentioned in
    another post that "Python does not need make" - I have Python projects
    that are organised by makefiles.

    Makefiles sound to me like your 'hammer' then.

    It's a Swiss army knife, not a hammer.


    And honestly, if you had taken 1% of
    the time and effort you have spend complaining in c.l.c. about "make"
    and instead learned about it, you'd be writing makefiles in your
    sleep. It really is not that hard, and you will never convince me you
    are not smart enough to understand it quickly and easily.

    I simply don't like them; sorry. Everything they might do, is taken care
    of by language design, or by my compiler, or by scripting in a proper scripting language.

    And they are ugly.


    You haven't a clue about make and makefiles, but you insist on judging
    them - and on judging people who use the tool. It's okay for you not to
    use make, but it is not okay to be self-righteous about it as though
    your prejudice from ignorance is a good thing.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Thu Oct 30 23:37:15 2025
    On 30/10/2025 21:37, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.
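    The "make -j $(nproc)" idea above can be wrapped in a small script. A
    minimal sketch (the fallback to 1 job, and the guard for a missing
    makefile, are assumptions for systems without "nproc"):

```shell
#!/bin/sh
# Run make with one job per available CPU; fall back to a single job
# if nproc is unavailable (e.g. on non-GNU systems).
jobs=$(nproc 2>/dev/null || echo 1)
echo "building with -j $jobs"
# Only invoke make if there is actually a makefile here.
[ -f Makefile ] || [ -f makefile ] || exit 0
make -j "$jobs" "$@"
```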


    There certainly are makefile builds that might not work correctly with
    parallel builds. And I think you are right that this is typically a
    dependency specification issue, and that generating dependencies
    automatically in some way should carry a lower risk of problems. I
    think such issues are also typically found in older makefiles - from
    the days of single-core machines, when "make -j N" was not a
    consideration.
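    The automatic dependency generation mentioned above is commonly done
    by the compiler itself. A sketch of the usual GCC/Clang make idiom
    (the SRCS wildcard and flags here are illustrative, not from any
    particular project):

```make
# Emit a .d dependency file per .c as a side effect of compilation:
# -MMD lists the headers each file includes, -MP adds phony targets
# so a deleted header doesn't break the build. Then include the .d
# files if they exist (the leading '-' ignores missing ones).
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

-include $(DEPS)
```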


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 23:23:15 2025
    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a
    "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this
    century is a "workaround". Using a decent OS is a "workaround". Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it intimately),
    then you just compile the lot.


    If you had /those/ exact compilers today, and used them for even a pretty large project, you could likely do a full rebuild every time.

    But incremental building didn't go away because we already had it,
    and we took that into account when maintaining compilers.

    Basically, decades ago, we accepted the idea that it can take several
    seconds to compile the average file, and that we have incremental
    building to help with that.

    And so, unsurprisingly, as machines got several orders of magnitude
    faster, people have made compilers do more and become more bloated,
    so that it can still take seconds to do one file, and you use make to
    avoid doing it.

    A lot of it is the optimization. Disable optimization and GCC is
    something like 15X faster.

    I don't think so. Not for C anyway, or that level of language. It's
    usually about 3-5 times between -O0 and -O3, and even less between -O0
    and -O2.

    (The difference tends to be greater for compiling bigger modules, but
    you also get more global optimisations.)

    Optimization exhibits diminishing returns. It takes more and more
    work for less and less gain. It's really easy to make optimization
    take 10X longer for a fraction of a percent increase in speed.

    Yet, it tends to be done because of the reasoning that the program is compiled once, and then millions of instances of the program are run
    all over the world.

    One problem in optimization is that it is expensive to look for the
    conditions that enable a certain optimization. It is more expensive
    than doing the optimization, because the optimization is often
    a conceptually simple code transformation that can be done quickly
    once the conditions are identified. But the compiler has to look for
    those conditions everywhere, in every segment of code, every basic
    block. And it may turn out that there is a "hit" for those conditions
    in something like one file out of every hundred, or even more rarely.

    When there is no "hit" for the optimization's conditions, then it
    doesn't take place, and all that time spent looking for it is just
    making the compiler slower.

    The problem is that to get the best possible optimization, you have to
    look for numerous such rare conditions. When one of them doesn't "hit",
    one of the others might. The costs of these add up. Over time,
    compiler developers tend to add optimizations much more than remove them.

    They in fact all come across as excuses for your favorite compiler being
    slow.

    The problem is that there is no fast path for -O0:

    c:\cx>tim gcc -O2 -s sql.c
    Time: 39.685

    c:\cx>tim gcc -O0 -s sql.c
    Time: 7.819 **

    That 8s vs 40s is welcome, but it can also be:

    c:\cx>tim bcc sql
    Compiling sql.c to sql.exe
    Time: 0.245

    (** Note that this test uses windows.h, and gcc's version is much bigger
    than mine, and accounts for 1.3s of that timing.)

    So -O0 is still 25x slower than my product.

    (Tcc would be even faster, but it's not working for this app ATM. I've
    sometimes considered whether gcc should just secretly bundle tcc.exe,
    and run it for O-1.)


    Well, yes. Since we've had incremental rebuilding since the time VLSI
    machines were measured in single-digit MHz, we've taken it for granted
    that it will be used and so, to reiterate, that excuses the idea of
    a compiler taking several seconds to do one file.

    Which one of these methods would you use to advertise the LPS throughput
    of a compiler that you develop?

    It would be a lie to measure lines per second on anything but
    a single-core, complete rebuild of the benchmark program.

    Exactly. But also, you really need to do comparisons with other products
    on the same hardware, as LPS will be tied to the machine.

    (My friend's ordinary laptop, used for ordinary consumer stuff, is 70%
    faster than my PC. But I'm happy to give benchmark results on the PC.)

    High LPS compilers are somehow not winning in the programming
    marketplace, or at least some segments.

    That field is open!

    Once upon a time it seemed that GCC would remain unchallenged. Then
    Clang came along: but it too got huge, fat and slow within a bunch of
    years. This is mainly due to trying to have good optimizations.

    It had to keep up with gcc. But it is not helped by being based around
    LLVM which has grown into a monstrosity.

    You will never get a C compiler that has very high LPS throughput, but
    doesn't optimize as well as the "leading brand", to make inroads into
    the ecosystem dominated by the "leading brand".

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that needs
    to be cleaned up, but not so much for C.

    Typical differences of between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be an order of
    magnitude faster than interpreted code.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Oct 30 16:44:38 2025
    bart <bc@freeuk.com> writes:
    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a
    "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this
    century is a "workaround". Using a decent OS is a "workaround". Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.

    I'll assume that was a serious question. Even if you don't care,
    others might.

    Let's say I'm working on a project that has a bunch of *.c and
    *.h files.

    If I modify just foo.c, then type "make", it will (if everything
    is set up correctly) recompile "foo.c" generating "foo.o", and
    then run a link step to recreate any executable that depends on
    "foo.o". It knows it doesn't have to recompile "bar.c" because
    "bar.o" sill exists and is newer than "bar.c".

    Perhaps the project provides several executable programs, and
    only two of them rely on foo.o. Then it can relink just those
    two executables.

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.
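    The scenario described above can be sketched as a minimal Makefile
    (the names foo.c, bar.c, common.h and the app target are illustrative,
    not from any real project):

```make
# Hypothetical two-module project: 'app' is relinked when either object
# changes; each .o is recompiled only if its .c file (or a listed
# header) is newer than the existing .o.
CC     = cc
CFLAGS = -O2 -Wall

app: foo.o bar.o
	$(CC) -o $@ foo.o bar.o

foo.o: foo.c common.h
	$(CC) $(CFLAGS) -c foo.c

bar.o: bar.c common.h
	$(CC) $(CFLAGS) -c bar.c
```

    Touch only foo.c and "make" recompiles foo.o and relinks app, leaving
    bar.o alone - that is the incremental part.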

    (I often tend to do full rebuilds anyway, for vague reasons I won't
    get into.)

    This depends on all relevant dependencies being reflected in the
    Makefile, and on file timestamps being updated correctly when files
    are edited. (In the distant past, I've run into problems with the
    latter when the files are on an NFS server and the server and client
    have their clocks set differently.)

    (I'll just go ahead and acknowledge, so you don't have to, that
    this might not be necessary if the build tools are infinitely fast.)

    If I've done a "make clean" or "git clean", or started from scratch
    by cloning a git repo or unpacking a .tar.gz file, then any generated
    files will not be present, and typing "make" will have to rebuild
    everything.

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Thu Oct 30 23:49:53 2025
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    Maybe there /is/ something wrong with your machine or setup. If you
    have a 2 core machine, it is presumably a low-end budget machine from
    perhaps 15 years ago. I'm all in favour of keeping working systems and
    I strongly disapprove of some people's two or three year cycles for
    swapping out computers, but there is a balance somewhere. With such an
    old system, I presume you also have old Windows (my office Windows
    machine is Windows 7), and thus the old and very slow style of WSL.
    That, I think, could explain the oddities in your timings.

    The machine is from 2021. It has an SSD, 8GB, and runs Windows 11. It
    uses WSL version 2.

    It is fast enough for my 40Kloc compiler to self-host repeatedly
    at about 15Hz (ie. produce 15 new generations per second). And that is
    using unoptimised x64 code:

    c:\mx2>tim ms ms ms ms ms ms ms ms ms ms ms ms ms ms ms hello
    Hello, World
    Time: 1.017

    Hmm, I'm only counting 14 'ms' after the first. So apologies, it is only
    14Hz!


    You have even suggested I have manipulated the figures!

    No, I did not. I have at various times suggested that you cherry-pick,
    that you might have poor methodology, and that you sometimes benchmark
    in an unrealistic way in order to give yourself a bigger windmill for
    your tilting.

    You said this:

    DB:
    So, you are exaggerating, mismeasuring or misusing your system to get
    build times that are well over an order of magnitude worse than
    expected. This follows your well-established practice.

    But this is also very interesting: right from the start, I've been
    making the point that the figures I got were far slower than expected
    for the task.

    Here it seems you are saying the same thing. Yet I'm the one who gets repeatedly castigated.

    So was I right in sensing something was off, or not?


    You were wrong in thinking something was off about cdecl or its build.
    And it should not be news to you that there is something very suboptimal about your computer environment, as this is not exactly the first time
    it has been discussed.

    There's nothing wrong with my environment. My PC is a supercomputer
    compared with even 1970s mainframes and certainly compared to 1980s PCs.






    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 00:15:45 2025
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 18:59, Kaz Kylheku wrote:
    On 2025-10-30, bart <bc@freeuk.com> wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    You moan that compiles are too slow. Yet doing them in parallel is a
    "workaround". Avoiding compiling unnecessarily is a "workaround".
    Caching compilation work is a "workaround". Using a computer from this
    century is a "workaround". Using a decent OS is a "workaround". Is
    /everything/ that would reduce your scope for complaining loudly to the
    wrong people a workaround?

    Yes, they are all workarounds to cope with unreasonably slow compilers.

    The idea of incremental rebuilding goes back to a time when compilers
    were fast, but machines were slow.

    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.

    I'll assume that was a serious question. Even if you don't care,
    others might.

    Let's say I'm working on a project that has a bunch of *.c and
    *.h files.

    If I modify just foo.c, then type "make", it will (if everything
    is set up correctly) recompile "foo.c" generating "foo.o", and
    then run a link step to recreate any executable that depends on
    "foo.o". It knows it doesn't have to recompile "bar.c" because
    "bar.o" sill exists and is newer than "bar.c".

    Perhaps the project provides several executable programs, and
    only two of them rely on foo.o. Then it can relink just those
    two executables.

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.

    I never came across any version of 'make' in the DEC OSes I used in the
    1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:

    When you're working intensely on a project for weeks or months, you will
    be dealing with a thousand functions, variables and constants that you
    have to keep organised in your mind.

    Keeping track of which modules needed recompiling was child's play (and
    I don't mean that literally!).

    Anyway, with the language I was using at that time, modules had a
    particular organisation:

    * Most were modules containing code
    * Some were classed as headers (only vaguely related to C headers),
    which contained shared, project-wide declarations
    * All modules shared the same set of headers (on compilation, all the
    headers were treated as one, via an IDE-synthesised header that
    included the rest)

    Edits to code modules only needed that module recompiled. A change to
    any header could require all to be recompiled, but that was at your discretion.



    (I often tend to do full rebuilds anyway, for vague reasons I won't
    get into.)

    This depends on all relevant dependencies being reflected in the
    Makefile, and on file timestamps being updated correctly when files
    are edited. (In the distant past, I've run into problems with the
    latter when the files are on an NFS server and the server and client
    have their clocks set differently.)

    (I'll just go ahead and acknowledge, so you don't have to, that
    this might not be necessary if the build tools are infinitely fast.)

    If I've done a "make clean" or "git clean", or started from scratch
    by cloning a git repo or unpacking a .tar.gz file, then any generated
    files will not be present, and typing "make" will have to rebuild
    everything.

    [...]



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Fri Oct 31 00:28:12 2025
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric - as useless as measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flaunt it.

    "[[ Our compiler emits lousy code, emits only half the required ISO
    diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you
    seen the raw speed in lines per second?"

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Chris M. Thomasson@3:633/10 to All on Thu Oct 30 17:35:31 2025
    On 10/30/2025 5:27 PM, Waldek Hebisch wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is, allow make to use up to 3 jobs. I wrote
    "at least" because AFAIK the cheapest PC CPUs of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    I haven't been using make's "-j" option for most of my builds.
    I'm going to start doing so now (updating my wrapper script).

    I initially tried replacing "make" by "make -j", with no numeric
    argument. The result was that my system nearly froze (the load
    average went up to nearly 200). It even invoked the infamous OOM
    killer. "make -j" tells make to use as many parallel processes
    as possible.

    "make -j $(nproc)" is much better. The "nproc" command reports the
    number of available processing units. Experiments with a fairly
    large build show that arguments to "-j" larger than $(nproc) do
    not speed things up (on a fairly old machine with nproc=4). I had
    speculated that "make -n 5" might be worthwhile of some processes
    were I/O-bound, but that doesn't appear to be the case.

    I frequently build my project on a few different machines. My
    machines are typically generously (compared to compiler needs)
    equipped with RAM. Measuring several builds, '-j 3' gave me the
    fastest build on a 2 core machine (no hyperthreading), and '-j 7'
    gave me the fastest build on an old 4 core machine with hyperthreading
    (so 'nproc' reported 8 cores). In general, increasing the number
    of jobs I see increasing total CPU time, but real time may go
    down because more jobs can use time where the CPU(s) would be
    otherwise idle. At some number of jobs I get the best real time,
    and with a larger number of jobs the overheads of multiple jobs
    seem to dominate, leading to an increase in real time. If the number
    of jobs is too high I get a slowdown due to lack of real memory.

    On a 12 core machine (24 logical cores) I use '-j 20'. Increasing the
    number of jobs gives a slightly faster build, but the difference is
    small, so I prefer to have more cores available for interactive
    use.

    Of course, that is balancing tradeoffs; your builds may have
    different characteristics than mine. I just wanted to say
    that _sometimes_ going beyond the number of cores is useful.
    IIUC what Bart wrote, he got a 3x speedup using '-j 3'
    on a two core machine, which is an unusually good speedup. IME
    3 jobs on a 2 core machine is normally neutral or gives a small
    speedup. OTOH with hyperthreading, activating a logical core

    Make sure to avoid false sharing when using hyperthreading... :^o


    may slow down its twin. Consequently using fewer jobs than
    logical cores may be better.
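    The kind of measurement described above can be sketched as a small
    sweep over job counts (the list of counts and the clean target are
    illustrative; real wall-clock time is the figure of interest):

```shell
#!/bin/sh
# Time a full rebuild at several job counts to find this machine's
# sweet spot. Skip gracefully if run somewhere without a Makefile.
[ -f Makefile ] || { echo "no Makefile here"; exit 0; }
for j in 1 2 3 4 8; do
    make clean >/dev/null 2>&1
    echo "== make -j $j =="
    time make -j "$j" >/dev/null
done
```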



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Oct 30 18:16:43 2025
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]

    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    I think we all already knew that.

    I assumed (a) that you didn't already know what incremental building
    means and (b) that you wanted to know. That's why I posted my answer to
    your question.

    I don't recall ever seeing you react positively to someone giving you information that you've asked for. Instead, you tend to use the answer
    as an opportunity to tell us all that whatever concept you were asking
    about is not useful to you.

    Did you ask what incremental building means because you wanted to know?

    Should I assume that every question you ask is rhetorical?

    And a minor point: In the quoted text in your followup, the blank lines
    between paragraphs in what I wrote were deleted. Please don't do that.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 01:22:30 2025
    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flaunt it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you seen the raw speed in lines per second?"


    How would Turbo C compare then?

    Anyway, C is often used as a target for compilers of other languages.

    There, it should be validated code, and so needs little error checking.
    It might not even use any headers (my generated C doesn't).

    The main requirement is that after the front-end compiler has generated
    the C, taking some fraction of a second, it doesn't immediately hit a
    brick wall if it tried to use a substantial product like gcc for the
    next stage.

    Here, optimisation is less important (unless the generated code is
    hopelessly poor). But it's quite possible to choose between a fast
    backend compiler for routine builds, and a slower optimising one for
    production.

    In fact, you can use this approach anyway even if directly coding in C:
    use a fast compiler most of the time, and a slower one for a periodic
    check or when you need the better code.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 01:36:36 2025
    On 31/10/2025 01:16, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.

    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]

    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    Actually I didn't mention 'make'. I said what I thought it meant, and I expanded on that in my reply to you.

    You mentioned 'make', and I also explained why it wouldn't have been any
    good to me.

    In any case, you still have to give that dependency information to
    'make', and maintain it, as well as all info about the constituent files
    of the project.

    Since I used project files from a very early stage, much of that
    information is already present (and is used to browse the source files
    and to do full compiles and linking).

    If I wanted automatic dependency handling, then it would have made
    sense to add that to the project file, rather than use an external tool
    with arcane syntax.

    The project file also had the task of doing test runs of the
    application, applying suitable inputs, and at one point, also dealing
    with overlays.

    Sometimes, the generated program was downloaded to a separate
    microprocessor in order to test on bare hardware.

    The picture I'm giving is that there was a lot going on, centrally
    controlled, compared with the minor aspects that a makefile could help
    with - and which would have needed a duplicate set of information.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Oct 30 19:13:17 2025
    bart <bc@freeuk.com> writes:
    On 31/10/2025 01:16, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    What do you mean by incremental rebuilding? I usually talk about
    /independent/ compilation.

    Then incremental builds might be about deciding which modules to
    recompile, except that that is so obvious, you didn't give it a name.
    Compile the one file you've just edited. If it might impact on any
    others (you work on a project for months, you will know it
    intimately), then you just compile the lot.
    I'll assume that was a serious question. Even if you don't care,
    others might.
    [...]

    I never came across any version of 'make' in the DEC OSes I used in
    the 1970s, nor did I see it in the 1980s.

    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    Plus, I wouldn't have felt the issue was of any great importance:
    [...]
    You asked what incremental building means. I told you. Your only
    response is to let us all know that you don't find it useful.

    Actually I didn't mention 'make'. I said what I thought it meant, and
    I expanded on that in my reply to you.

    You mentioned 'make', and I also explained why it wouldn't have been
    any good to me.

    "make" is probably the most common tool that supports incremental
    building, and certainly the one I'm most familiar with. There
    are other tools that have similar support (many of them are built on top
    of "make"). The idea of incremental building isn't as tightly tied to
    "make" as I might have suggested.

    In any case, you still have to give that dependency information to
    'make', and maintain it, as well as all info about the constituent
    files of the project.

    Makefiles are commonly generated automatically.

    I asked you several questions, that you quietly snipped. I'll assume
    you refuse to answer them.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 02:14:00 2025
    On 30/10/2025 23:49, bart wrote:
    On 30/10/2025 15:04, David Brown wrote:
    On 30/10/2025 13:07, bart wrote:

    Maybe there /is/ something wrong with your machine or setup. If you
    have a 2 core machine, it is presumably a low-end budget machine from
    perhaps 15 years ago. I'm all in favour of keeping working systems
    and I strongly disapprove of some people's two or three year cycles
    for swapping out computers, but there is a balance somewhere. With
    such an old system, I presume you also have old Windows (my office
    Windows machine is Windows 7), and thus the old and very slow style of
    WSL. That, I think, could explain the oddities in your timings.

    The machine is from 2021. It has an SSD, 8GB, and runs Windows 11. It
    uses WSL version 2.

    It is fast enough for my 40Kloc compiler to self-host itself repeatedly
    at about 15Hz (ie. produce 15 new generations per second). And that is
    using unoptimised x64 code:

      c:\mx2>tim ms ms ms ms ms ms ms ms ms ms ms ms ms ms ms hello
      Hello, World
      Time: 1.017

    Hmm, I'm only counting 14 'ms' after the first. So apologies, it is only 14Hz!

    That timing is from the current compiler. The more streamlined one I'm
    working on now (where the IL plays a smaller role) can manage 16Hz; 14% faster.

    There are a few sluggish areas I want to look at.

    And yes it is more of a sport now than a real need.

    My compilers ought to be slow as they have so many passes. Tcc
    supposedly has only one. So another project I might have a go at is a single-pass C compiler that is faster than Tcc.

    Just to see how fast I can go at producing native code. However, if the
    code is too poor, there will be lots of it, and it will slow down the
    latter stages. I'll have to see.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Fri Oct 31 07:44:43 2025
    On 30.10.2025 21:21, Keith Thompson wrote:
    [...]

    The data structure that defines the '-j' option in the GNU make
    source is:

    static struct command_switch switches[] =
      {
        // ...
        { 'j', positive_int, &arg_job_slots, 1, 1, 0, 0, &inf_jobs,
          &default_job_slots, "jobs", 0 },
        // ...
      };

    Yes, it's odd that "-j" may or may not be followed by an argument.
    The way it works is that if the following argument exists and is
    (a string representing) a positive integer, it's taken as "-j N",
    otherwise it's taken as just "-j".

    Incidentally, in a recent (less than two years old) "C" program I
    needed a lot of options to control the software. When I looked into
    the man pages of getopt(3) in my GNU/Linux environment I noticed the
    "optional optarg" capability of this 'getopt' version and I used
    it deliberately, for good reasons. - The opt-string specification
    for this feature was done with a double colon, as defined in
    "s::d:f:r:g::u:a::m::kt::lqj::p::nci:o:"
    for the program syntax
    [-s[wxh]] [-d density] [-f pattern] [-r seed] [-g[ngen]] [-u rule]
    [-a[gen]] [-m[rate]] [-k|-t[sec]|-l|-q] [-j[n]] [-p[symbol]|-n|-c]
    [-i infile] [-o outfile]
    The disambiguation between program arguments and other options was
    done by writing _no space_ between the option letter and the optional
    option-argument. So you could write, e.g., -j or -j1, but not -j 1
    (for those options that could have optional arguments).

    I cannot tell, though, whether GNU make did use this getopt feature
    similarly (or whether it had coded some ad hoc heuristic parsing).


    A make argument that's not an option is called a "target"; for
    example in "make -j 4 foo", "foo" is the target. A target whose name
    is a positive integer is rare enough that the potential ambiguity
    is almost never an issue. If it is, you can use the long form:
    "make --jobs" or "make --jobs=N".

    I think it would have been cleaner if the argument to "-j" had
    been mandatory, with an argument of "0", "-1", or "max" having
    some special meaning. But changing it could break existing scripts
    that invoke "make -j" (though as I've written elsethread, "make -j"
    can cause problems).

    I agree that having an explicit option argument would be clearer.

    In my case above (and I don't know about the 'make' case discussed
    in this thread) the -j had another semantics than -j0 (or such); I
    needed both possibilities. So the alternative would have been (for me)
    to add another (unrelated) option name from the very few remaining
    letters, and the choice would then have been arbitrary/non-mnemonic.
    (For reasons I also didn't want to introduce long option names.)


    It would also have been nice if the "make -j $(nproc)" functionality
    had been built into make.

    Yes. - This is actually how I'd have (with GNU 'getopt') designed it;
    make (one instance), make -j (use max. available), make -j N (use N).

    (Personally I dislike using the "C" programming pattern '-1' on the
    user interface level to indicate "maximum" or some such.)

    The existing behavior is a bit messy, but it works, and I've never
    run into any actual problems with the way the options are parsed.

    (I've never had any speed issues with make, so I've never used -j,
    even though it comes "for free". - But I've also no 64 kernel CPUs or
    MLOC-sized projects at home.)

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Fri Oct 31 07:49:25 2025
    On 31.10.2025 07:44, Janis Papanagnou wrote:

    (I've never had any speed issues with make, so I've never used -j,
    even though it comes "for free". - But I've also no 64 kernel CPUs or
    MLOC-sized projects at home.)

    Oops! - s/kernel/core/

    Janis



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Janis Papanagnou@3:633/10 to All on Fri Oct 31 09:31:29 2025
    On 30.10.2025 23:01, David Brown wrote:
    On 30/10/2025 18:49, bart wrote:
    [...]

    If I were developing a compiler, I would not advertise any kind of lines-per-second value. It is a totally useless metric -

    It's good enough for marketing.

    as useless as measuring developer performance on the lines of code
    he/she writes per day.

    Which sadly has been done (and maybe still is?) by the less
    enlightened instances of management.

    Janis


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tTh@3:633/10 to All on Fri Oct 31 10:29:12 2025
    On 10/31/25 02:22, bart wrote:

    Anyway, C is often used as a target for compilers of other languages.

    There, it should be validated code, and so needs little error checking.
    It might not even use any headers (my generated C doesn't).

    s/should/MUST/

    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From tTh@3:633/10 to All on Fri Oct 31 11:52:57 2025
    On 10/30/25 23:37, David Brown wrote:

    There certainly are makefile builds that might not work correctly with
    parallel builds. And I think you are right that this is typically a
    dependency specification issue, and that generating dependencies
    automatically in some way should have lower risk of problems.

    I have encountered a case where two actions run in parallel
    overwrote a badly named temp file; the same temp file for two
    processes is definitely wrong :(

    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Fri Oct 31 13:15:05 2025
    On Fri, 31 Oct 2025 01:22:30 +0000
    bart <bc@freeuk.com> wrote:

    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as
    useless as measuring developer performance on the lines of code
    he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with
    only 75% of your system's header files, and 80% of the ABI, but
    ...]] have you seen the raw speed in lines per second?"


    How would Turbo C compare then?


    Turbo C implemented the majority of C89/C90 years (3-5 years in some
    cases) ahead of many so-called "serious" C compilers.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Richard Tobin@3:633/10 to All on Fri Oct 31 11:43:46 2025
    In article <20251030172415.416@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    If that were your only advantage, you'd have to flout it.

    Flaunt.

    -- Richard

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Fri Oct 31 13:10:38 2025
    On 31/10/2025 00:23, bart wrote:

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that needs
    to be cleaned up, but not so much for C.

    Typical differences between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be a magnitude faster
    than interpreted code.


    You live in a world of x86 (with brief visits to 64-bit ARM). You used
    to work with smaller processors and lower level code, but seem to have forgotten that long ago.

    A prime characteristic of modern x86 processors is that they are
    extremely good at running extremely bad code. They are targeted at
    systems where being able to run old binaries is essential. A great deal
    of the hardware in an x86 cpu core is there to handle poorly optimised
    code - lots of jumps and function calls get predicted and speculated,
    data that is pushed onto and pulled off the stack gets all kinds of fast
    paths and short-circuits, and so on. And then there is the memory - if
    code has to wait for data from ram, the cpu can happily execute hundreds
    of cycles of unnecessary unoptimised code without making any difference
    to the final speed.

    Big ARM processors - such as on Pi's - have the same effects, though to
    a somewhat lesser extent.

    A prime characteristic of user programs on PC's and other "big" systems
    is that a lot of the time is spent doing things other than running the
    user code - file I/O, screen display, OS calls, or code in static
    libraries, DLLs (or SOs), etc. That stuff is completely unaffected by
    the efficiency of the user code - that's why interpreted or VM code is
    fast enough for a very wide range of use-cases.

    And if you are working with Windows systems with an MS DLL for the C
    runtime library (as used by some C toolchains on Windows, but not all),
    then you can get more distortions. If you have a call to memcpy that
    uses an external DLL, that is going to take perhaps 500 clock cycles
    even for a small fixed size of memcpy (assuming all code and data is in cache). The user code for the call might be 10 cycles or 20 cycles
    depending on the optimisation - compiler optimisation makes no
    measurable difference here. But if the toolchain uses a static library
    for memcpy and can optimise locally to replace the call, the static call
    to general memcpy code might take 200 cycles while the local code takes
    10 cycles. Suddenly the difference between optimising and
    non-optimising is huge.
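    The fixed-size memcpy point can be illustrated with a small sketch.
    The helper names below are invented for illustration; the observation
    (that a compile-time-known size is a prime inlining candidate) is the
    one made above, while the exact cycle counts are the poster's estimates:

```c
/* Minimal sketch: fixed-size memcpy calls that an optimising compiler
   can typically replace with a single 64-bit load or store, while an
   unoptimised build (or one calling into a runtime DLL) may leave them
   as real out-of-line calls. Behaviour is identical either way. */
#include <stdint.h>
#include <string.h>

static inline uint64_t load_u64(const unsigned char *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof v);   /* size known at compile time */
    return v;
}

static inline void store_u64(unsigned char *p, uint64_t v)
{
    memcpy(p, &v, sizeof v);   /* also avoids strict-aliasing issues */
}
```

    This pattern is also the standard well-defined way to do unaligned or
    type-punned accesses in C, which is one reason compilers work hard to
    optimise small fixed-size memcpy.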

    Then there is the type of code you are dealing with. Some code is very
    cpu intensive and can benefit from optimisations, other code is not.

    And optimisation is not just a matter of choosing -O0 or -O2 flags. It
    can mean thought and changes in the source code (some standard C
    changes, like use of "restrict" parameters, some compiler-specific
    changes like gcc attributes or builtins, and some target specific like organising data to fit cache usage). And it can mean careful flag
    choices - different specific optimisations suitable for the code at
    hand, and target related flags for enabling more target features. I am entirely confident that you have done none of these things when
    testing. That's not necessarily a bad thing in itself, when looking at
    widely portable source compiled to generic binaries, but it gives a very unrealistic picture of compiler optimisations and what can be achieved
    by someone who knows how to work with their compiler.
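    As a concrete example of the "restrict" change mentioned above (the
    function name and shapes here are invented for illustration): telling
    the compiler that two pointers never alias frees it to vectorise the
    loop and keep values in registers, instead of reloading after every
    store in case the arrays overlap.

```c
/* Sketch: without restrict, the compiler must assume dst may overlap
   src and re-read src[i] after each store to dst[i]; with restrict it
   may unroll and vectorise freely. */
#include <stddef.h>

void scale_add(float *restrict dst, const float *restrict src,
               float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];
}
```

    The semantics are unchanged for non-overlapping arguments; passing
    overlapping arrays would be undefined behaviour, which is exactly the
    promise the keyword makes on the programmer's behalf.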


    All this conspires to give you this 2:1 ratio that you regularly state
    for the difference between optimised code and unoptimised code - gcc -O2
    and gcc -O0.


    In reality, people can often achieve far greater ratios for the type of
    code where performance matters and where it is achievable. Someone
    working on game engines on an x86 would probably expect at least 10
    times difference between the flags they use, and no optimisation flags.
    For the targets I use, which are (generally) not super-scalar,
    out-of-order, etc., five to ten times difference is not uncommon. And
    when you throw C++ or other modern languages into the mix (remember, gcc
    and clang/llvm are not simple C compilers), the benefits of inlining and
    other inter-procedural optimisations can easily be an order of
    magnitude. (This is one reason why gcc and clang enable a number of optimisations, including at least inlining of functions marked
    appropriately, even with no optimisation flags specified.)


    You can continue to believe that high-end toolchains are no more than
    twice as good as your own compiler or tcc, if you like. (And they give
    you all the performance and features that you need, fine.) Those of us
    who want more from our tools, and know how to get it, know better.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From David Brown@3:633/10 to All on Fri Oct 31 13:16:44 2025
    On 31/10/2025 01:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]] have you seen the raw speed in lines per second?"


    I have seen, and even used, compilers that would fit that description
    quite well :-( Usually, however, the flouted advantage is not the raw
    speed, but support for a microcontroller target that no one else
    supports. Oh, and generally they could add "costs a ridiculous price"
    to the list.



    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 12:39:59 2025
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 14:13, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 29/10/2025 23:04, David Brown wrote:
    On 29/10/2025 22:21, bart wrote:


    BTW 68Kloc would be CDECL; and 78Kloc is A68G. The CDECL timings are:
    root@DESKTOP-11:/mnt/c/Users/44775/Downloads/cdecl-18.5# time make output
    <warnings>
    real 0m49.512s
    user 0m19.033s
    sys 0m3.911s

    Those numbers indicate that there is something wrong with your
    machine. Sum of second and third line above give CPU time.
    Real time is twice as large, so something is slowing down things.
    One possible trouble is having too small RAM, then OS is swaping
    data to/from disc. Some programs do a lot of random I/O, that
    can be slow on spinning disc, but SSD-s usually are much
    faster at random I/O.

    Assuming that you have enough RAM you should try at least using
    'make -j 3', that is allow make to use up to 3 jobs. I wrote
    at least, because AFAIK cheapest PC CPU-s of reasonable age
    have at least 2 cores, so to fully utilize the machine you
    need at least 2 jobs. 3 is better, because some jobs may wait
    for I/O.

    FYI, reasonably typical report for normal make (without -j
    option) on my machine is:

    real 0m4.981s
    user 0m3.712s
    sys 0m0.963s


    Just for grins, here's a report for a full rebuild of a real-world project
    that I build regularly. Granted most builds are partial (e.g. one or
    two source files touched) and take far less time (15 seconds or so,
    most of which is make calling stat(2) on a few hundred source files
    on an NFS filesystem). Close to three million SLOC, mostly in header
    files. C++.


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB total). Each library is 5KB as it includes my language's standard libs.

    The second was to compile a single program of 7.5MB. This was done by
    taking one 300KB project and duplicating one of the bigger source
    modules a large number of times (130 copies for the 4.5MB result).

    However that ran into some problems; possibly, running out of memory (I
    have 6GB available), or something. In any case it's not worth my time
    looking at it right now.

    I did manage to produce a 4.5MB executable, and that took about 1
    second. The total source code was 500K (about 9 bytes per source line;
    how about that!)

    To summarise:

    Generate 200 x 50KB DLLS: 6 seconds (1.7MB/s) (1000Kloc so 170Klps)
    Generate 1 x 4.5MB EXE: 1 second (4.5MB/s) (500Kloc so 500Klps)

    This is on a machine that David Brown suggested was hopelessly old and
    slow. All source code compiled was in my language.

    I then did the same test using an existing C port of that library, with:

    gcc -O0 -s -shared libnnn.c -o libnnn.dll

    It took 72 seconds, with each DLL now being 100KB. Source code is the
    bare library so only 1.7Kloc, giving a throughput of 4.7Klps.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Fri Oct 31 13:43:20 2025
    bart <bc@freeuk.com> writes:
    On 30/10/2025 23:44, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

    <snip accurate description of make(1) semantics>

    This is likely to give you working executables substantially
    faster than if you did a full rebuild. It's more useful while
    you're developing and updating a project than when you download
    the source and build it once.

    I never came across any version of 'make' in the DEC OSes I used in the
    1970s, nor did I see it in the 1980s either.

    Unix provided make in the 1970s, on DEC hardware.


    In any case it wouldn't have worked with my compiler, as it was not a
    discrete program: it was memory-resident together with an editor, as
    part of my IDE.

    This helped to get fast turnarounds even on floppy-based 8-bit systems.

    The programs[*] I worked on in the '70s and '80s couldn't have been compiled
    on floppy-based 8-bit systems.

    [*] Master Control Program (MCP), for example.

    We had a program called WFL (Work Flow Language) which could be used
    to automate MCP builds, which would rebuild only the modules that
    changed then run the binder (linker) to create the MCP binary.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Fri Oct 31 13:48:21 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 30/10/2025 21:37, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Try "time make -j" as a simple step.
    [...]

    In my recent testing, "make -j" without a numeric argument (which
    tells make to run as many parallel steps as possible) caused my
    system to bog down badly. This was on a fairly large project (I used
    vim); it might not be as much of a problem with a smaller project.

    I've found that "make -j $(nproc)" is safer. The "nproc" command
    is likely to be available on any system that has a "make" command.

    It occurs to me that "make -j N" can fail if the Makefile does
    not correctly reflect all the dependencies. I suspect this is
    less likely to be a problem if the Makefile is generated rather
    than hand-written.


    There certainly are makefile builds that might not work correctly with
    parallel builds. And I think you are right that this is typically a
    dependency specification issue, and that generating dependencies
    automatically in some way should have lower risk of problems. I think
    it is also typically on older makefiles - from the days of single core
    machines where "make -j N" was not considered - that had such issues.


    Hence the development of tools like 'makedepend' (from X11).

    Modern gcc includes all the support necessary to generate
    dependency files used by make(1) to reduce [re-]build times.
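    That gcc support is commonly wired into a makefile along these lines
    (a generic sketch with invented target names, not taken from any
    project in this thread):

```make
# Sketch: each compile also emits a .d file listing the headers it
# actually included (-MMD), plus phony targets so deleting a header
# does not break the build (-MP).
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) $^ -o $@

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in whatever dependency files earlier builds produced.
-include $(OBJS:.o=.d)
```

    On the next run, make then knows exactly which objects to rebuild when
    a header changes, with no hand-maintained dependency lists.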

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Fri Oct 31 13:57:20 2025
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
    total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Your toy projects aren't representative of real world application
    development. Can you not understand that?

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 14:55:49 2025
    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
    total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.


    Your toy projects aren't representative of real world application development. Can you not understand that?

    I don't believe you. Clearly my tests show that basic conversion of HLL
    code to native code can be easily done at several MB per second even on
    my low-end hardware - per core.

    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some combination.

    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    Perhaps you should employ someone whose job it is to look at the big
    picture, and to get those iteration times down.)

    In any case, the tasks I want to build are nothing like that, yet there
    is at least 2 magnitudes difference in build-time between my 'toy'
    tools, and all that Unix stuff that you are all trying to force down my throat.


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 16:34:22 2025
    On 31/10/2025 12:10, David Brown wrote:
    On 31/10/2025 00:23, bart wrote:

    People into compilers are obsessed with optimisation. It can be a
    necessity for languages that generate lots of redundant code that
    needs to be cleaned up, but not so much for C.

    Typical differences of between -O0 and -O2 compiled code can be 2:1.

    However even the most terrible native code will be a magnitude faster
    than interpreted code.


    You live in a world of x86 (with brief visits to 64-bit ARM). You used
    to work with smaller processors and lower level code, but seem to have forgotten that long ago.

    A prime characteristic of modern x86 processors is that they are
    extremely good at running extremely bad code.

    Yes. And? That means compilers don't need to be so clever!


    They are targeted at
    systems where being able to run old binaries is essential. A great deal
    of the hardware in an x86 cpu core is there to handle poorly optimised
    code - lots of jumps and function calls get predicted and speculated,
    data that is pushed onto and pulled off the stack gets all kinds of fast
    paths and short-circuits, and so on. And then there is the memory - if
    code has to wait for data from ram, the cpu can happily execute hundreds
    of cycles of unnecessary unoptimised code without making any difference
    to the final speed.

    Big ARM processors - such as on Pi's - have the same effects, though to
    a somewhat lesser extent.

    A prime characteristic of user programs on PC's and other "big" systems
    is that a lot of the time is spent doing things other than running the
    user code - file I/O, screen display, OS calls, or code in static
    libraries, DLLs (or SOs), etc. That stuff is completely unaffected by
    the efficiency of the user code - that's why interpreted or VM code is
    fast enough for a very wide range of use-cases.


    Yes. That's why interpreted/dynamic languages (those usually go
    together) are viable.

    When I first introduced interpreted scripting to my apps (35 years ago),
    I had a rough guideline in that an interpreted version of a task should ideally be no worse than half the speed of 100% native code.

    My everyday text-editor is interpreted, and I routinely edit 1-million
    line files without noticing any lag.


    And if you are working with Windows systems with an MS DLL for the C
    runtime library (as used by some C toolchains on Windows, but not all),
    then you can get more distortions. If you have a call to memcpy that
    uses an external DLL, that is going to take perhaps 500 clock cycles
    even for a small fixed size of memcpy (assuming all code and data is in
    cache). The user code for the call might be 10 cycles or 20 cycles
    depending on the optimisation - compiler optimisation makes no
    measurable difference here. But if the toolchain uses a static library
    for memcpy and can optimise locally to replace the call, the static call
    to general memcpy code might take 200 cycles while the local code takes
    10 cycles. Suddenly the difference between optimising and
    non-optimising is huge.

    (My language has a 'clear' operator; inline code is then generated for
    fixed-size objects.)

    Then there is the type of code you are dealing with. Some code is very
    cpu intensive and can benefit from optimisations, other code is not.
    cpu intensive and can benefit from optimisations, other code is not.

    And optimisation is not just a matter of choosing -O0 or -O2 flags.

    To me, 'compiler'-optimisation means getting my program faster /without changing the source/. All I want to do is either enable or disable the
    option.

    A lot of my optimisations are to do with design choices in my language, special features it might provide, and design choices in the application.

    Anything that can be done in the compiler is a bonus, but I don't rely
    on it (other than the special case of generated C, see below).



    It
    can mean thought and changes in the source code (some standard C
    changes, like use of "restrict" parameters, some compiler-specific
    changes like gcc attributes or builtins, and some target specific like organising data to fit cache usage).


    And it can mean careful flag
    choices - different specific optimisations suitable for the code at
    hand, and target related flags for enabling more target features.

    It sounds like a lot of work. I used to just use inline assembly and be done
    with it!

    I am
    entirely confident that you have done none of these things when
    testing. That's not necessarily a bad thing in itself, when looking at
    widely portable source compiled to generic binaries, but it gives a very
    unrealistic picture of compiler optimisations and what can be achieved
    by someone who knows how to work with their compiler.


    All this conspires to give you this 2:1 ratio that you regularly state
    for the difference between optimised code and unoptimised code - gcc -O2
    and gcc -O0.

    If I'm giving figures that compare gcc-O0 to gcc-O2, then clearly,
    everything else must remain the same. Otherwise why not compare two
    entirely different algorithms while we're about it.

    Anyway, I assume all that stuff you've mentioned has been incorporated
    into the A68G makefiles, and it's still a pretty slow interpreter!
    (Although probably the advanced features of the language don't help.)

    However, one thing I did try the other day was to take the generated
    makefile, and change the -O2 flag to -O0. Building it was a little
    faster (60s instead of 90s), but my benchmark ran in 13s instead of 5s,
    so 2.6:1.

    You seem to be suggesting the difference should be greater, but this is someone else's codebase, and someone else's set of compiler flags, other
    than the choice of -O0/-O2.

    So, while I understand what you're saying, that doesn't apply if you are building, running and measuring an existing codebase created by someone
    else.

    I *am* seeing figures of 2:1, or sometimes 3:1 or 4:1; the latter
    usually when someone is trying to be too clever with intensive use of
    macros that may hide too many nested functions, so that it needs inlining
    to get a respectable speed.



    In reality, people can often achieve far greater ratios for the type of
    code where performance matters and where it is achievable. Someone
    working on game engines on an x86 would probably expect at least 10
    times difference between the flags they use, and no optimisation flags.
    For the targets I use, which are (generally) not super-scalar,
    out-of-order, etc., five to ten times difference is not uncommon. And

    For the /applications/ I write (not silly benchmarks), and for x64, 2:1
    is typical, but this is comparing my compilers (a little better than
    gcc-O0), with gcc-O2.

    These are apps like compilers, assemblers and interpreters, which are computationally intensive (most code executed is within the program I've generated). On those, I usually get better than 2:1 for /programs I've written/, such as 1.5:1.

    It can be worse than 2:1 for C programs, especially other people's.

    But I have also seen up to 10:1 for my generated C code (18:1 below),
    which currently is very poor, where I /require/ optimisation to clean up redundancies.


    And when you throw C++ or other modern languages into the mix (remember,
    gcc and clang/llvm are not simple C compilers), the benefits of inlining
    and other inter-procedural optimisations can easily be an order of
    magnitude. (This is one reason why gcc and clang enable a number of
    optimisations, including at least inlining of functions marked
    appropriately, even with no optimisation flags specified.)


    You can continue to believe that high-end toolchains are no more than
    twice as good as your own compiler or tcc, if you like.

    Here are examples of two C libraries:

    Jpeg decoder on 94MB image:

    Ratio
    gcc -O2 4.4 seconds
    bcc 6.4 seconds 1.45 : 1
    tcc 10.6 seconds 2.41 : 1

    (The input file has been cached. Stopping after loading via fread takes
    0.08 seconds.)

    Calculate N digits of pi via my bignum library:

    gcc -O2 0.7 seconds
    bcc 1.6 seconds 2.3 : 1 (C version ported from M)
    mm 1.2 seconds 1.7 : 1 (using version in my language)
    tcc 1.9 seconds 2.7 : 1

    And here is the Lua interpreter running Fibonacci:

    gcc -O2 3.2 seconds
    gcc -O0 11.4 seconds 3.6 : 1
    bcc 7.3 seconds 2.3 : 1
    tcc 10.2 seconds 3.2 : 1

    This one is my interpreter also running the same Fibonacci test:

    gcc -O2 1.2 seconds (from low-level transpiled C)
    gcc -O0 22.3 seconds 18.6 : 1
    mm 1.3 seconds 1.1 : 1

    Here, gcc's optimiser is earning its keep.

    The ratios involving my own products are 1.45, 2.3, 1.7, 2.3, 1.1. The
    average is 1.77:1 slowdown compared to gcc-O2.
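    For what it's worth, the quoted average checks out; a throwaway awk
    one-liner over the five ratios listed above:

```shell
# Averaging the five ratios quoted above: 1.45, 2.3, 1.7, 2.3, 1.1.
printf '%s\n' 1.45 2.3 1.7 2.3 1.1 |
    awk '{ s += $1; n++ } END { printf "%.2f\n", s / n }'
```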

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Scott Lurndal@3:633/10 to All on Fri Oct 31 17:18:53 2025
    bart <bc@freeuk.com> writes:
    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
    total). Each library is 5KB as it includes my language's standard libs.

    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.

    The 7.5MB was the shared object containing the main code. 14MB
    was one outlier that I hadn't expected to be so large a text region (am actually looking into that now, I suspect the gcc optimizer doesn't handle
    a particular bit of generated data structure initialization sequence very well).

    $ size lib/*.so | cut -f 1

    text
    367395
    8053916
    8053916
    8053916
    22385
    134993
    6902921
    719346
    33698635
    36084944
    19501560
    3869694
    73570
    211384
    126472
    44610
    90992
    69081
    287447
    5308581
    12213437
    11228898
    6166468
    116563
    63242
    71842
    480359
    30823
    315595
    552362
    111956
    111956
    951445
    1457999
    29053
    2388204
    348969
    150472
    219346
    49420
    750129
    120295
    138622
    868002
    117492
    142438
    489431
    595478
    151900
    265009
    112371
    234140
    52977
    1152928
    567153
    614616
    151578
    181964
    14798814
    657231
    29984
    145595
    90394
    46204
    276076
    38248
    25649
    81913
    93313
    328478
    70278
    31539
    387492
    1885298
    144763
    51537
    37037
    44668
    167946
    4726570
    2472426
    95714
    29547
    24790
    55887
    76059
    47813
    78769
    136931
    65500
    323558
    2757388
    465288
    707782
    240259
    69803
    109695
    91664
    47862
    629404
    738060
    155033
    281246
    397902
    66721
    49279
    124507
    148506
    320033
    81491
    131769
    252140
    156101
    118933
    1777033
    353799
    534605
    96492
    143886
    254192
    26850
    54655
    106790
    56512
    87201
    230382
    792823
    314391
    37951
    274781
    1149389
    25851
    131519
    108052
    96303
    338036
    175900
    61630
    138460
    189483
    116789
    340759
    31324
    25293
    32149
    26870
    78069
    1494212
    427356
    237699
    30062440
    577998
    14611
    57346
    8724
    12007
    16053
    429021
    25367738
    35760664
    593138
    30982
    10087
    6552
    20032
    6539
    6738
    6738
    15262923
    145335
    4997
    42188
    11129
    11321
    7671
    8521
    8521
    11756
    15872
    11076
    23053

    A couple are third-party libraries distributed
    in binary form (e.g. the ones with 30+Mbytes of text).
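    A sketch of how totals like these can be pulled out of `size` output.
    The `lib/*.so` path in the comment is the quoted project's, so a few
    sample values from the listing above stand in for the real files here:

```shell
# Totalling a 'text' column like the one above. Against the real tree
# the command would be roughly:
#   size lib/*.so | tail -n +2 | cut -f1 | sort -n | awk '...'
# Sample values from the quoted listing stand in for the files:
printf '%s\n' 367395 8053916 22385 134993 |
    sort -n |
    awk '{ s += $1; max = $1 } END { printf "total=%d largest=%d\n", s, max }'
```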




    Your toy projects aren't representative of real world application
    development. Can you not understand that?

    I don't believe you. Clearly my tests show that basic conversion of HLL
    code to native code can be easily done at several MB per second even on
    my low-end hardware - per core.


    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some
    combination.

    Or your tools are not capable of building a project of this size
    and complexity. If they were, they'd likely take even _more_ time
    to run.


    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    I said nothing about the number of developers (perhaps you were looking
    at the output of the 'sloccount' command?)

    Between 2 and 8 developers have worked on this project
    at any one time over the last 15 years.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 17:52:24 2025
    On 31/10/2025 17:18, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 31/10/2025 13:57, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 30/10/2025 16:22, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:


    What is the total size of the produced binaries?

    There are 181 shared objects (DLL in windows speak) and
    six binaries produced by the build. The binaries are all quite small since
    they dynamically link at runtime with the necessary
    shared objects, the set of which can vary from run-to-run.

    The largest shared object is 7.5MB.

    text data bss dec hex filename
    6902921 109640 1861744 8874305 876941 lib/libXXX.so

    Well, I've done a couple of small tests.

    Pointlessly.


    The first was in generating 200 'small' DLLs - duplicates of the same
    library. This took 6 seconds to produce 200 libraries of 50KB each (10MB
    total). Each library is 5KB as it includes my language's standard libs.
    The shared object 'text' size ranges from 500KB to 14MB.

    Well, I asked for some figures, and they were lacking. And here, the
    14MB figure contradicts the 7.5MB you mentioned above as the largest object.

    The 7.5MB was the shared object containing the main code. 14MB
    was one outlier that I hadn't expected to be so large a text region (am actually looking into that now, I suspect the gcc optimizer doesn't handle
    a particular bit of generated data structure initialization sequence very well).

    $ size lib/*.so | cut -f 1

    text
    367395
    8053916


    A couple are third-party libraries distributed
    in binary form (e.g. the ones with 30+Mbytes of text).

    In sorted form:

    1 4,997 bytes
    2 6,539
    3 6,552
    ...
    178 30,062,440
    179 33,698,635
    180 35,760,664
    181 36,084,944

    About 330MB, or 260MB if disregarding the two biggest.

    That's quite substantial, but still, going with my test which built
    4.5MB in one second, 60 such builds would take a minute, totalling
    260MB. Add a bit more if split into 180 separate builds.

    And that is if done one at a time.

    So I still contend that the basic translation can be done in a
    reasonable time, /if/ you really had to rebuild everything.

    (When I rebuild everything, it's because a module is part of one
    executable, so that whole binary must be rebuilt.)

    If your tests have an effective throughput far below that, then either
    you have very slow compilers, or are doing a mountain of work unrelated
    to compiling, or the orchestration of the whole process is poor, or some
    combination.

    Or your tools are not capable of building a project of this size
    and complexity. If they were, they'd likely take even _more_ time
    to run.

    Perhaps not, but so what? I've always developed tools according to the
    tasks and circumstances that were relevant to me.

    And usually, for building my own software.

    They just happen to also be a great deal zippier in operation when
    compared with other tools for building the same codebases.

    I'm pretty certain they have inefficiencies that someone could address
    if they wanted to, or could choose to find streamlined paths if a fast
    turnaround was desirable.

    That's why I said it should be somebody's job to do that, in the same
    way that I considered it part of my job to ensure my development process
    was never so slow as to slow me down. If I'm twiddling my thumbs, then
    something's wrong!


    (You mentioned there are nearly 400 developers involved? It sounds like
    a management problem.

    I said nothing about the number of developers (perhaps you were looking
    at the output of the 'sloccount' command?)

    Yes. (I'm not sure what that was about.)

    Between 2 and 8 developers have worked on this project
    at any one time over the last 15 years.

    You might want to clear out some cruft then.

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kaz Kylheku@3:633/10 to All on Fri Oct 31 22:47:27 2025
    On 2025-10-31, Richard Tobin <richard@cogsci.ed.ac.uk> wrote:
    In article <20251030172415.416@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    If that were your only advantage, you'd have to flout it.

    Flaunt.

    *rubeyes* I can't believe I wrote that!

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Fri Oct 31 23:40:03 2025
    On 31/10/2025 00:28, Kaz Kylheku wrote:
    On 2025-10-30, David Brown <david.brown@hesbynett.no> wrote:
    If I were developing a compiler, I would not advertise any kind of
    lines-per-second value. It is a totally useless metric - as useless as
    measuring developer performance on the lines of code he/she writes per day.

    If that were your only advantage, you'd have to flout it.

    "[[ Our compiler emits lousy code, emits only half the required ISO diagnostics (and those are all there are), and is compatible with only
    75% of your system's header files, and 80% of the ABI, but ...]]

    Those incompatibilities exist anyway, even on big compilers, and people
    are tolerant of them.

    How many headers have you seen with multiple conditional blocks that
    pander to different compilers, for example (from SDL2):

    # if defined(HAVE_ALLOCA_H)
    # include <alloca.h>
    # elif defined(__GNUC__)
    # define alloca __builtin_alloca
    # elif defined(_MSC_VER)
    # include <malloc.h>
    # define alloca _alloca
    # elif defined(__WATCOMC__)
    # include <malloc.h>
    # elif defined(__BORLANDC__)
    # include <malloc.h>
    # elif defined(__DMC__)
    # include <stdlib.h>
    # elif defined(__AIX__)
    #pragma alloca
    # elif defined(__MRC__)
    void *alloca(unsigned);
    # else
    char *alloca();
    # endif
    #endif

    (If you are writing your own compiler, where is it going to fit in?)

    In fact, half of configure scripts seem to be about testing the
    capabilities of the C compiler, so it is apparently expected that any of
    those features can be missing.
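    That capability-testing pattern can be sketched in a few lines of shell.
    This is a hand-rolled stand-in for what autoconf generates, not an
    excerpt from any real configure script; the file names are invented:

```shell
# A configure-style compiler probe, sketched by hand: try to compile a
# program that uses alloca(), and emit HAVE_ALLOCA_H only if it builds.
cat > conftest.c <<'EOF'
#include <alloca.h>
int main(void) { char *p = alloca(16); (void)p; return 0; }
EOF
if gcc -c conftest.c -o conftest.o 2>/dev/null; then
    echo '#define HAVE_ALLOCA_H 1' > probe-config.h
else
    echo '/* alloca.h not available */' > probe-config.h
fi
cat probe-config.h
```

    A real configure script runs hundreds of such probes, which is a large
    part of where its runtime goes.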

    And as for diagnostics, it seems that you have to actively know about
    them and explicitly enable checking for them.





    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Fri Oct 31 17:14:42 2025
    bart <bc@freeuk.com> writes:
    [...]
    And as for diagnostics, it seems that you have to actively know about
    them and explicitly enable checking for them.

    It "seems"?

    Yes. Most C compilers, and gcc in particular, are not fully
    conforming by default, and do not produce all the diagnostics
    required by the ISO C standard. Most C compilers have options
    that tell them to attempt to do so. For gcc or clang, you can use
    "-std=c17 -pedantic". Replace "c17" by whatever edition of the
    standard you prefer to use. Replace "-pedantic" by "-pedantic-errors"
    if you want fatal diagnostics. Replace "-pedantic" by "-Wpedantic"
    if you're fond of the letter 'W'.

    I've been telling you this for well over a decade, and it still only
    "seems" to be the case? How does that work?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Sat Nov 1 11:57:57 2025
    On 31/10/2025 22:01, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:
    On 30/10/2025 10:15, David Brown wrote:
    On 30/10/2025 01:36, bart wrote:

    So, what exactly did I do wrong here (for A68G):

      root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
      real    1m32.205s
      user    0m40.813s
      sys     0m7.269s

    This 90 seconds is the actual time I had to hang about waiting. I'd be
    interested in how I managed to manipulate those figures!

    Try "time make -j" as a simple step.


    OK, "make -j" gave a real time of 30s, about three times faster. (Not
    quite sure how that works, given that my machine has only two cores.)

    However, I don't view "-j", and parallelisation, as a solution to slow
    compilation. It is just a workaround, something you do when you've
    exhausted other possibilities.

    You have to get raw compilation fast enough first.
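    For what it's worth, the reason -j helps even on two cores is easy to
    demonstrate: independent targets overlap, including time spent blocked
    on I/O rather than using the CPU. A toy makefile (names invented) with
    two one-second dummy targets finishes in roughly one second under -j2:

```shell
# Minimal demo of parallel make: two independent 1-second targets
# run concurrently under -j2, roughly halving the wall time.
cat > demo.mk <<'EOF'
all: a b
a: ; @sleep 1; touch a.done
b: ; @sleep 1; touch b.done
EOF
time make -f demo.mk -j2
```

    That much of a 90-second build overlaps so well also suggests a lot of
    the serial time is process startup and I/O wait, not compilation proper.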
    <snip>

    Quite a few people have suggested that there is something amiss about my
    1:32 and 0:49 timings. One has even said there is something wrong with
    my machine.

    Yes, I wrote this. 90 seconds in itself could be OK, your machine
    just could be slow. But the numbers you gave clearly show that only
    about 50% of the time on _one_ core is used to do the build.
    So something is slowing down your machine. And this is specific to
    your setup, as other people running builds on Linux get better than
    90% CPU utilization. You apparently get offended by this statement.
    If you are really interested in fast tools you should investigate
    what is causing this.

    Anyway, there could be a lot of different reasons for slowdown.
    Fact that you get 3 times faster build using 'make -j' suggests
    that some other program is competing for CPU and using more jobs
    allows getting a higher share of CPU. If that affects only programs
    running under WSL, then your numbers may or may not be relevant to the
    WSL experience, but are incomparable to Linux timings. If the slowdown
    affects all programs on your machine, then you should be interested
    in eliminating it, because it would also make your compiler faster.
    But that is your machine; if you are not curious what happens, that
    is OK.


    I'm really not interested in finding out the ins and outs of my Linux
    system or messing about with it.

    All I know is that I followed the instructions and the build time for a
    particular project WAS 90 seconds elapsed, after that configure stuff.
    It shouldn't be my job to fix any shortcomings.

    I wasn't that happy either with using '-j'. Yes I got a faster time, but
    that looks to me like brushing things under the carpet. What is really
    going on? It's hard to tell because it's all so complicated.

    I had a go anyway. I logged the output of a full 'make'. The output
    (sans some make-lines at each end) was 213 lines: 107 invocations of
    gcc, and 106 uses of 'mv'.

    I was able to use that output file as a script (and I didn't need
    'clean' before each run).

    It still took 92 seconds. I got rid of the 'mv' lines, it was now 85
    seconds. I added some commands, 'echo n' before each compile, and
    'time', to track each invocation.

    It looks like there are 106 files compiled, and the last use of gcc is
    for linking, which took 3.x seconds. Most compiles were 0.5-0.8 seconds,
    with a few taking 1-2 seconds, all elapsed 'real' time.

    In each case, the user time was a fraction of the real time. One that
    caught my eye was file # 4: 0.450s real, 0.08s user.

    I tried to extract the invocation and simplify it, but it was too
    complicated. It looks like this (line breaks added):

    gcc -DHAVE_CONFIG_H -I. -I./src/include -D_GNU_SOURCE
    -DBINDIR='"/usr/local/bin"' -DINCLUDEDIR='"/usr/local/include"'
    -g -O2 --std=c17 -Wall -Wshadow -Wunused-variable -Wunused-parameter
    -Wno-long-long -MT ./src/a68g/a68g-a68g-conversion.o
    -MD -MP -MF ./src/a68g/.deps/a68g-a68g-conversion.Tpo -c
    -o ./src/a68g/a68g-a68g-conversion.o
    `test -f './src/a68g/a68g-conversion.c' ||
    echo './'`./src/a68g/a68g-conversion.c


    I've no idea what this is up to. But here, I managed to compile that
    file my way (I copied it to a place where the relevant headers were all
    in one place):

    gcc -O2 -c a68g-conversion.c

    Now real time is 0.14 seconds (recall it was 0.45). User time is still
    0.08s.

    So, what is all that crap that is making it 3 times slower? And do we
    need all those -Wall checks, given that this is a working, debugged program?

    I suggest a better approach would be to get rid of that rubbish and
    simplify it, rather than keep it in but having to call in reinforcements
    by employing extra cores, don't you think?

    If slowdown
    affects all programs on your machine, then you should be interested
    in eliminating it, because it would also make your compiler faster.

    That would be interesting. My already heavy 6-pass compiler can manage a sustained 0.5Mlps on the same machine, /and/ under Windows. How much
    faster can it be?

    OK, I have a way to run my C compiler under Linux. It would be a
    cross-compiler for Windows, and wouldn't be able to generate EXEs (needs
    access to actual Windows DLLs), but it can generate OBJ files.

    It's done via C transpilation, and I compared such versions on both
    Windows and WSL:

    c:\cx>tim cc -c sql
    Compiling sql.c to sql.obj
    Time: 0.187

    root@DESKTOP-11:/mnt/c/cx# time ./cu -c sql.c
    Compiling sql.c to sql.obj

    real 0m0.316s
    user 0m0.170s
    sys 0m0.075s

    The 'user' time looks about the same as what I get on Windows. I just
    get a longer elapsed time on Linux!

    (Note: the 'tim' utility on Windows is written to exclude the shell
    process start overheads, since I want actual compile-time. Normally my compilers are invoked from an IDE program - not using 'system' - so that overhead is not relevant.

    If included, the Windows timing would be 0.21 seconds.)


    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From bart@3:633/10 to All on Sat Nov 1 14:56:58 2025
    On 01/11/2025 11:57, bart wrote:
    On 31/10/2025 22:01, Waldek Hebisch wrote:

    Anyway, there could be a lot of different reasons for slowdown.
    Fact that you get 3 times faster build using 'make -j' suggests
    that some other program is competing for CPU and using more jobs
    allows getting a higher share of CPU. If that affects only programs
    running under WSL, then your numbers may or may not be relevant to the
    WSL experience, but are incomparable to Linux timings. If the slowdown
    affects all programs on your machine, then you should be interested
    in eliminating it, because it would also make your compiler faster.
    But that is your machine; if you are not curious what happens, that
    is OK.

    I've no idea what this is up to. But here, I managed to compile that
    file my way (I copied it to a place where the relevant headers were all
    in one place):

      gcc -O2 -c a68g-conversion.c

    Now real time is 0.14 seconds (recall it was 0.45). User time is still 0.08s.

    So, what is all that crap that is making it 3 times slower? And do we
    need all those -Wall checks, given that this is a working, debugged
    program?

    I suggest a better approach would be to get rid of that rubbish and
    simplify it, rather than keep it in but having to call in reinforcements
    by employing extra cores, don't you think?

    I can now compile and link the 106 C modules of A68G into an executable,
    using my simple approach.

    The @ file below is invoked as 'gcc -O2 @file'. For this test, all
    relevant files are in one place for simplicity. Only a single invocation
    of gcc is used (multiple invocations would be needed to parallelise,
    assuming gcc doesn't have such abilities itself).

    It took 38 seconds (30 seconds user) on a single core. Using -O0, it
    took 18/10 seconds.

    The generated A68 binary is 1.7MB. If I use -Os instead of -O2, the size
    is just 1MB, and build time is 35s elapsed. The benchmark is only
    slightly slower.

    It appears that the purpose of './configure' is to generate a 440-line
    header called 'a68g-config.h'.

    The BINDIR macro is needed only for plugin-script.c.

    -----------------------------
    -o a68 -s
    -DBINDIR='"/usr/local/bin"'
    --std=c17
    a68g-apropos.c
    a68g-bits.c
    a68g-conversion.c
    a68g-diagnostics.c
    a68g-io.c
    a68g-keywords.c
    a68g-listing.c
    a68g-mem.c
    a68g-non-terminal.c
    a68g-options.c
    a68g-path.c
    a68g-postulates.c
    a68g-pretty.c
    a68g.c
    double-gamic.c
    double-math.c
    double.c
    genie-assign.c
    genie-call.c
    genie-coerce.c
    genie-declaration.c
    genie-denotation.c
    genie-enclosed.c
    genie-formula.c
    genie-hip.c
    genie-identifier.c
    genie-misc.c
    genie-regex.c
    genie-rows.c
    genie-stowed.c
    genie-unix.c
    genie.c
    moids-diagnostics.c
    moids-misc.c
    moids-size.c
    moids-to-string.c
    mp-bits.c
    mp-complex.c
    mp-gamic.c
    mp-gamma.c
    mp-genie.c
    mp-math.c
    mp-mpfr.c
    mp-pi.c
    mp.c
    parser-annotate.c
    parser-bottom-up.c
    parser-brackets.c
    parser-extract.c
    parser-modes.c
    parser-moids-check.c
    parser-moids-coerce.c
    parser-moids-equivalence.c
    parser-refinement.c
    parser-scanner.c
    parser-scope.c
    parser-taxes.c
    parser-top-down.c
    parser-victal.c
    parser.c
    plugin-basic.c
    plugin-driver.c
    plugin-folder.c
    plugin-gen.c
    plugin-inline.c
    plugin-script.c
    plugin-tables.c
    plugin.c
    prelude-bits.c
    prelude-gsl.c
    prelude-mathlib.c
    prelude.c
    rts-bool.c
    rts-char.c
    rts-curl.c
    rts-curses.c
    rts-enquiries.c
    rts-formatted.c
    rts-heap.c
    rts-int128.c
    rts-internal.c
    rts-mach.c
    rts-monitor.c
    rts-parallel.c
    rts-plotutils.c
    rts-postgresql.c
    rts-sounds.c
    rts-stowed.c
    rts-transput.c
    rts-unformatted.c
    single-blas.c
    single-decomposition.c
    single-fft.c
    single-gamic.c
    single-gsl.c
    single-laplace.c
    single-math.c
    single-multivariate.c
    single-physics.c
    single-python.c
    single-r-math.c
    single-rnd.c
    single-svd.c
    single-torrix-gsl.c
    single-torrix.c
    single.c
    -lncursesw -ldl -lpthread -lgmp -lquadmath -lrt -lm

    --- PyGate Linux v1.5
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)