• Re: VAX (was: Why I've Dropped In)

    From Lars Poulsen@3:633/280.2 to All on Thu Jul 31 03:17:28 2025
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
So going for microcode no longer was the best choice for the VAX, but
neither the VAX designers nor their competition realized this, and
commercial RISCs only appeared in 1986.

    John Levine <johnl@taugh.com> writes:
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the overcomplex
instruction and address modes and the tiny 512 byte page size.

    On 2025-07-30, Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    Concerning code density, while VAX code is compact, RISC-V code with the
    C extension is more compact
    <2025Mar4.093916@mips.complang.tuwien.ac.at>, so in our time-traveling
    scenario that would not be a reason for going for the VAX ISA.

    Another aspect from those measurements is that the 68k instruction set (with only one memory operand for any compute instructions, and 16-bit granularity) has a code density similar to the VAX.

Another, which is not entirely their fault, is that they did not expect
compilers to improve as fast as they did, leading to a machine which was
fun to program in assembler but full of stuff that was useless to
compilers and instructions like POLY that should have been subroutines.
The 801 project and PL.8 compiler were well underway at IBM by the time
the VAX shipped, but DEC presumably didn't know about it.

    DEC probably was aware from the work of William Wulf and his students
    what optimizing compilers can do and how to write them. After all,
    they used his language BLISS and its compiler themselves.

POLY would have made sense in a world where microcode makes sense: If
microcode can be executed faster than subroutines, put a building
block for transcendental library functions into microcode. Of course,
given that microcode no longer made sense for the VAX, POLY did not
make sense for it, either.
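
For reference, what POLY computes is just Horner's rule; as a library
subroutine it is a few lines of C (a sketch; coefficient order assumed
highest-degree first, which may not match the exact VAX operand
convention):

/* Evaluate c[0]*x^n + c[1]*x^(n-1) + ... + c[n] by Horner's rule,
   one multiply-add per coefficient. */
double poly(double x, const double c[], int n)
{
    double r = c[0];
    for (int i = 1; i <= n; i++)
        r = r * x + c[i];
    return r;
}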

Related to the microcode issue they also don't seem to have anticipated
how important pipelining would be. Some minor changes to the VAX, like
not letting one address modify another in the same instruction, would
have made it a lot easier to pipeline.

    My RISC alternative to the VAX 11/780 (RISC-VAX) would probably have
    to use pipelining (maybe a three-stage pipeline like the first ARM) to achieve its clock rate goals; that would eat up some of the savings in implementation complexity that avoiding the actual VAX would have
    given us.

Another issue is how to implement the PDP-11 emulation mode.
    I would add a PDP-11 decoder (as the actual VAX 11/780 probably has)
    that would decode PDP-11 code into RISC-VAX instructions, or into what RISC-VAX instructions are decoded into. The cost of that is probably similar to that in the actual VAX 11/780. If the RISC-VAX ISA has a MIPS/Alpha/RISC-V-like handling of conditions, the common microcode
    would have to support both the PDP-11 and the RISC-VAX handling of conditions; probably not that expensive, but maybe one still would
prefer an ARM/SPARC/HPPA-like handling of conditions.

    In the days of VAX-11/780, it was "obvious" that operating systems would
    be written in assembler in order to be efficient, and the instruction
    set allowed high productivity for writing systems programs in "native"
    code. Yes, UNIX - written in C - existed, but was not all that well
    known. DEC had developed BLISS in -11 and -10 variants and they decided
    to do a -32 for the VAX and a number of system utilities were written in BLISS-32, but I think that the BLISS-32 compiler was written in
    BLISS-10. This all had a feeling of experimentation. "It may be the
    future, but we are not there yet".

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Thu Jul 31 04:07:14 2025
    On 7/30/25 10:17 AM, Lars Poulsen wrote:

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?


Early RISC-like instruction sets existed on microcoded machines.

    The Ridge-32 for example, whose designers came out of the HP 3000 world, was claimed
    at the time to be the first commercial RISC system.

Pyramid may have been another example, but very little (at least by me) is known of their ISA.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Jul 31 04:35:32 2025
    Reply-To: slp53@pacbell.net

    Al Kossow <aek@bitsavers.org> writes:
    On 7/30/25 10:17 AM, Lars Poulsen wrote:

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a
    well-connected group of graduate students. How good a RISC might have
    been feasible?


Early RISC-like instruction sets existed on microcoded machines.

    The Ridge-32 for example, whose designers came out of the HP 3000 world, was claimed
    at the time to be the first commercial RISC system.

Pyramid may have been another example, but very little (at least by me) is known of their ISA.

    Pyramid used MIPS.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Thu Jul 31 06:13:42 2025
    On 7/30/25 10:17, Lars Poulsen wrote:

Oops, did it again. Thunderbird encourages me to "send" instead of
    "follow up". Sorry Lars.

    On 7/30/25 10:17, Lars Poulsen wrote:

    John Levine <johnl@taugh.com> writes:
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the
overcomplex instruction and address modes and the tiny 512 byte page
size.

    That's a simple mistake to fix in software, though - always work with multiples of pages, like 16 or more.


Another, which is not entirely their fault, is that they did not
expect compilers to improve as fast as they did, leading to a machine
which was fun to program in assembler but full of stuff that was
useless to compilers and instructions like POLY that should have been
subroutines. The 801 project and PL.8 compiler were well underway at
IBM by the time the VAX shipped, but DEC presumably didn't know about
it.


    I only did a little VAX assembler. Maybe if I'd done more I'd have
    coding patterns as a reflex, but the number of possible variant
    instructions always had me stuck in a mental loop: "Do I want a one- or
two- (or three-) address instruction here?"

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Thu Jul 31 07:16:56 2025
    On Wed, 30 Jul 2025 13:13:42 -0700, Peter Flass wrote:

    On 7/30/25 10:17, Lars Poulsen wrote:

    John Levine <johnl@taugh.com> writes:
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the
overcomplex instruction and address modes and the tiny 512 byte page
size.

    That's a simple mistake to fix in software, though - always work with multiples of pages, like 16 or more.

    They obviously designed it and VMS in parallel. Unfortunately that led to
    the omission of a 'reference' bit in page table entries, making it hard
    for some other systems.



    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Thu Jul 31 07:22:13 2025
    It appears that Peter Flass <Peter@Iron-Spring.com> said:
    instruction and address modes and the tiny 512 byte page size.

That's a simple mistake to fix in software, though - always work with multiples of pages, like 16 or more.

    Sure, but your page tables are 16 times as big as they should be, and the
    logic to manage them is more complex since you have to, e.g., merge all
    of the change and reference bits into the logical page.
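
In C, that merging looks something like the following sketch (the PTE
bit positions and the 16-pages-per-8KB-logical-page grouping are
illustrative assumptions, not the actual VAX PTE layout):

#include <stdint.h>

#define HW_PER_LOGICAL 16      /* 16 x 512B hardware pages = 8KB logical */
#define PTE_REF (1u << 25)     /* hypothetical "referenced" bit */
#define PTE_MOD (1u << 26)     /* hypothetical "modified" bit */

/* The logical page counts as referenced/modified if any of its
   hardware pages does, so OR the per-PTE state together. */
uint32_t merge_logical_state(const uint32_t pte[HW_PER_LOGICAL])
{
    uint32_t merged = 0;
    for (int i = 0; i < HW_PER_LOGICAL; i++)
        merged |= pte[i] & (PTE_REF | PTE_MOD);
    return merged;
}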

    PL.8 compiler were well underway at IBM by the time the VAX
    shipped, but DEC
    presumably didn't know about it.

    I only did a little VAX assembler. Maybe if I'd done more I'd have
    coding patterns as a reflex, but the number of possible variant
    instructions always had me stuck in a mental loop: "Do I want a one- or
two- (or three-) address instruction here?"

    Yeah. Particularly because it was often not obvious what would be faster.

The big advance in PL.8 was register allocation by graph coloring, which
was way better than the stack approaches used before. That made it a lot
easier to keep variables and intermediate values in registers longer than
one expression but less than the entire routine.
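
A toy version of the idea, for illustration (Chaitin-style
simplify/select on a fixed interference graph; a sketch of the
textbook algorithm, not PL.8's actual allocator):

#include <stdbool.h>
#include <stdio.h>

#define N 6   /* live ranges (graph nodes) */
#define K 3   /* available registers (colors) */

static bool adj[N][N];      /* interference: both live at once */
static bool in_graph[N];
static int  color[N];

static int degree(int v)
{
    int d = 0;
    for (int u = 0; u < N; u++)
        if (in_graph[u] && adj[v][u])
            d++;
    return d;
}

int main(void)
{
    static const int edges[][2] = {{0,1},{0,2},{1,2},{2,3},{3,4},{4,5}};
    for (unsigned i = 0; i < sizeof edges / sizeof edges[0]; i++) {
        adj[edges[i][0]][edges[i][1]] = true;
        adj[edges[i][1]][edges[i][0]] = true;
    }
    for (int v = 0; v < N; v++) {
        in_graph[v] = true;
        color[v] = -1;
    }

    /* Simplify: repeatedly remove a node of degree < K (or, failing
       that, a spill candidate) and push it on a stack. */
    int stack[N], sp = 0;
    while (sp < N) {
        int pick = -1;
        for (int v = 0; v < N; v++)
            if (in_graph[v] && degree(v) < K) { pick = v; break; }
        for (int v = 0; pick < 0 && v < N; v++)
            if (in_graph[v]) pick = v;
        in_graph[pick] = false;
        stack[sp++] = pick;
    }

    /* Select: pop nodes, giving each the lowest color not used by an
       already-colored neighbor; no free color means a real spill. */
    while (sp > 0) {
        int v = stack[--sp];
        bool used[K] = {false};
        for (int u = 0; u < N; u++)
            if (in_graph[u] && adj[v][u] && color[u] >= 0)
                used[color[u]] = true;
        for (int c = 0; c < K; c++)
            if (!used[c]) { color[v] = c; break; }
        in_graph[v] = true;
    }

    for (int v = 0; v < N; v++) {
        if (color[v] < 0) printf("range %d: spilled\n", v);
        else              printf("range %d: r%d\n", v, color[v]);
    }
    return 0;
}

Each live range just needs a register different from its neighbors in
the interference graph; real allocators add spill-cost heuristics and
coalescing on top of this skeleton.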

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Sat Aug 2 03:16:48 2025
    Lars Poulsen <lars@cleo.beagle-ears.com> writes:
    In the days of VAX-11/780, it was "obvious" that operating systems would
    be written in assembler in order to be efficient, and the instruction
    set allowed high productivity for writing systems programs in "native"
    code.

    Yes. I don't think that the productivity would have suffered from a
    load/store architecture, though.

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    Did the VAX 11/780 have writable microcode?

    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    Crossposted to comp.arch, alt.folklore.computers

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Sat Aug 2 04:11:28 2025
    Reply-To: slp53@pacbell.net

    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lars Poulsen <lars@cleo.beagle-ears.com> writes:
    In the days of VAX-11/780, it was "obvious" that operating systems would
    be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native" code.

Yes. I don't think that the productivity would have suffered from a load/store architecture, though.

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    Did the VAX 11/780 have writable microcode?

    Yes.


    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    A new ISA also requires development of the complete software
    infrastructure for building applications (compilers, linkers,
    assemblers); updating the OS, rebuilding existing applications
    for the new ISA, field and customer training, etc.

    Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most Alpha customers were existing VAX
    customers - it's not clear that DEC actually grew the customer
    base by switching to Alpha.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Dan Cross@3:633/280.2 to All on Sat Aug 2 06:41:06 2025
    In article <kr7jQ.442699$Tc12.355083@fx17.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most Alpha customers were existing VAX
    customers - it's not clear that DEC actually grew the customer
    base by switching to Alpha.

    Not for VMS, anyway.

    DEC was decently well regarded in the Unix world even then, and
    OSF/1 seemed pretty nifty, if you were coming from a BSD-ish
    place. A lot of Sun shops that didn't want SVR4 and Solaris on
    SPARC looked hard at OSF/1 on Alpha, though I don't know how
    many ultimately jumped.

    And Windows on Alpha had a brief shining moment in the sun (no
    pun intended).

Interestingly, the first OS brought up on Alpha was Ultrix, though
    it never shipped as a product.

    I wonder, if you broke it down by OS, what shipped on the most
    units.

    - Dan C.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Sat Aug 2 09:41:36 2025
    In comp.arch Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    Lars Poulsen <lars@cleo.beagle-ears.com> writes:
    In the days of VAX-11/780, it was "obvious" that operating systems would
    be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native" code.

    Yes. I don't think that the productivity would have suffered from a load/store architecture, though.

    As for a RISC-VAX: To little old naive me, it seems that it would have
    been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    Did the VAX 11/780 have writable microcode?

    Yes, 12 kB (2K words 96-bit each).

    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    Yes.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Sat Aug 2 13:06:43 2025
    On 8/1/25 11:11, Scott Lurndal wrote:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lars Poulsen <lars@cleo.beagle-ears.com> writes:
In the days of VAX-11/780, it was "obvious" that operating systems would be written in assembler in order to be efficient, and the instruction
    set allowed high productivity for writing systems programs in "native"
    code.

    Yes. I don't think that the productivity would have suffered from a
    load/store architecture, though.

    As for a RISC-VAX: To little old naive me, it seems that it would have
been possible to create an alternative microcode load that would be able to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    Did the VAX 11/780 have writable microcode?

    Yes.


    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    A new ISA also requires development of the complete software
    infrastructure for building applications (compilers, linkers,
    assemblers); updating the OS, rebuilding existing applications
    for the new ISA, field and customer training, etc.

    Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most Alpha customers were existing VAX
    customers - it's not clear that DEC actually grew the customer
    base by switching to Alpha.


    Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
    with something else?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sat Aug 2 13:37:34 2025
    On Fri, 1 Aug 2025 20:06:43 -0700, Peter Flass wrote:

    Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
    with something else?

    PRISM was going to be a new hardware architecture, and MICA the OS to run
    on it. Yes, they were supposed to solve the problem of where DEC was going
    to go since the VAX architecture was clearly being left in the dust by
    RISC.

    I think the MICA kernel was going to support the concept of “personalities”, so that a VMS-compatible environment could be implemented by one set of upper layers, while another set could provide Unix functionality.

    I think the project was taking too long, and not making enough progress.
    So DEC management cancelled the whole thing, and brought out a MIPS-based machine instead.

    The guy in charge got annoyed at the killing of his pet project and left
    in a huff. He took some of those ideas with him to his new employer, to
    create a new OS for them.

    The new employer was Microsoft. The guy in question was Dave Cutler. The
    OS they brought out was called “Windows NT”.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Sat Aug 2 14:35:26 2025
    On 8/1/2025 9:14 PM, Ted Nolan <tednolan> wrote:
    In article <106k15u$qgip$6@dont-email.me>,
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 1 Aug 2025 20:06:43 -0700, Peter Flass wrote:

    Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
    with something else?

    PRISM was going to be a new hardware architecture, and MICA the OS to run
on it. Yes, they were supposed to solve the problem of where DEC was going to go since the VAX architecture was clearly being left in the dust by
    RISC.

    I think the MICA kernel was going to support the concept of
    “personalities”, so that a VMS-compatible environment could be implemented
    by one set of upper layers, while another set could provide Unix
    functionality.

    I think the project was taking too long, and not making enough progress.
    So DEC management cancelled the whole thing, and brought out a MIPS-based
    machine instead.

    The guy in charge got annoyed at the killing of his pet project and left
    in a huff. He took some of those ideas with him to his new employer, to
    create a new OS for them.

    The new employer was Microsoft. The guy in question was Dave Cutler. The
    OS they brought out was called “Windows NT”.

    And it's *still* not finished!

    Well, what about:

    https://github.com/ZoloZiak/WinNT4

    Humm... A little?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Sat Aug 2 18:07:50 2025
    In comp.arch Peter Flass <Peter@iron-spring.com> wrote:
    On 8/1/25 11:11, Scott Lurndal wrote:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lars Poulsen <lars@cleo.beagle-ears.com> writes:
In the days of VAX-11/780, it was "obvious" that operating systems would be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native" code.

    Yes. I don't think that the productivity would have suffered from a
    load/store architecture, though.

As for a RISC-VAX: To little old naive me, it seems that it would have been possible to create an alternative microcode load that would be able to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
    been feasible?

    Did the VAX 11/780 have writable microcode?

    Yes.


    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    A new ISA also requires development of the complete software
    infrastructure for building applications (compilers, linkers,
    assemblers); updating the OS, rebuilding existing applications
    for the new ISA, field and customer training, etc.

    Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most Alpha customers were existing VAX
    customers - it's not clear that DEC actually grew the customer
    base by switching to Alpha.


    Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
    with something else?

IIUC PRISM eventually became Alpha. One piece of supporting software
    was a VAX emulator IIRC called FX11: it allowed running unmodified
    VAX binaries. Another supporting piece was Macro32, which effectively
    was a compiler from VAX assembly to Alpha binaries.

    One big selling point of Alpha was 64-bit architecture, but IIUC
VMS was never fully ported to 64 bits; that is, a lot of VMS
    software used 32-bit addresses and some system interfaces were
    32-bit only. OTOH Unix for Alpha was claimed to be pure 64-bit.
    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Sat Aug 2 18:48:39 2025
    On 8/2/25 1:07 AM, Waldek Hebisch wrote:

    IIUC PRISM eventually became Alpha.

Not really. Documents for both, including
the rare PRISM docs, are on bitsavers.
    PRISM came out of Cutler's DEC West group,
    Alpha from the East Coast. I'm not aware
    of any team member overlap.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Sat Aug 2 19:07:14 2025
    Dan Cross <cross@spitfire.i.gajendra.net> schrieb:

    And Windows on Alpha had a brief shining moment in the sun (no
    pun intended).

    Vobis (a German discount computer reseller) offered Alpha-based
    Windows boxes in 1993 and another model in 1997. Far too expensive
    for private users (cost was 9999 DM for the two models, the latter
    one with SCSI; IDE was cheaper, equivalent to ~10000 Euros today)
for a machine with very limited software support.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Sat Aug 2 19:28:17 2025
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Given that the VAX 11/780 was not (much) pipelined, I don't expect
    that using an alternative microcode that implements a RISC ISA would
    have performed well.

    A new ISA also requires development of the complete software
    infrastructure for building applications (compilers, linkers,
    assemblers); updating the OS, rebuilding existing applications
    for the new ISA, field and customer training, etc.

    The VAX was a new ISA, a followon to the PDP-11, which was different
    in many respects (e.g., 16-bit instruction granularity on PDP-11,
    8-bit granularity on VAX). In my RISC-VAX scenario, the RISC-VAX
    would be the PDP-11 followon instead of the actual (CISC) VAX, so
    there would be no additional ISA.

    Digital eventually did move VMS to Alpha, but it was neither
    cheap, nor easy. Most alpha customers were existing VAX
    customers - it's not clear that DEC actually grew the customer
    base by switching to Alpha.

    Our group had no VAX in the years before we bought our first Alphas in
    1995, but we had DecStations. My recommendation in 1995 was to go for
    Linux on Pentium, but the Alpha camp won, and we ran OSF/1 on them for
    some years. Later we ran Linux on our Alphas, and eventually we
    switched to Linux on Intel and AMD.

    As for the VAX-Alpha transition, there were two reasons for the
    switch:

    1) Performance, and that cost DEC customers since RISCs were
    introduced in the mid-1980s. DecStations were introduced to reduce
    this bleeding, but of course this meant that these customers were
    not VAX customers.

    2) The transition to 64 bits. Almost everyone in the workstation
    market introduced hardware for that in the 1990s: MIPS R4000 in
    1991 (MIPS III architecture); DEC Alpha 21064 in 1992; SPARCv9
    (specification) 1993 with first implementation 1995; HP PA-8000
    1995; PowerPC 620 1997 (originally planned earlier); "The original
    goal year for delivering the first [IA-64] product, Merced, was
    1998." I think, though, that for many customers that need arose
    only in the 2000s; e.g., our last Alpha (bought in the year 2000)
    only has 1GB of RAM, so a 64-bit architecture was not necessary for
    us until a few years later, maybe 2005.

    DEC obviously failed to convert its thriving VAX business from the
1980s into a sustainable Alpha business. Maybe the competitive
    landscape was such that they would have run into problems in any case;
DEC was not alone in running into problems. OTOH, HP was a mini and
    workstation manufacturer that replaced its CISC architectures with
    RISC early, and managed to survive (and buy Compaq, which had bought
    DEC), although it eventually abandoned its own RISC architecture as
    well as IA-64, the intended successor.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Sun Aug 3 01:29:38 2025
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    1) Performance, and that cost DEC customers since RISCs were
    introduced in the mid-1980s. DecStations were introduced to reduce
    this bleeding, but of course this meant that these customers were
    not VAX customers.

    Or, even more importantly, VMS customers.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Sun Aug 3 08:33:15 2025
    On 8/2/25 08:29, Thomas Koenig wrote:
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    1) Performance, and that cost DEC customers since RISCs were
    introduced in the mid-1980s. DecStations were introduced to reduce
    this bleeding, but of course this meant that these customers were
    not VAX customers.

    Or, even more importantly, VMS customers.

    I guess I'm getting DecStations and VaxStations mixed up. Maybe one of
    their problems was brand confusion.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Aug 3 09:08:39 2025
    On Sat, 2 Aug 2025 08:07:50 -0000 (UTC), Waldek Hebisch wrote:

    One big selling point of Alpha was 64-bit architecture, but IIUC
    VMS was never fully ported to 64-bits, that is a lot of VMS
    software used 32-bit addresses and some system interfaces were
    32-bit only. OTOH Unix for Alpha was claimed to be pure 64-bit.

    Of the four main OSes for Alpha, the only fully-64-bit ones were DEC Unix
    and Linux. OpenVMS was a hybrid 32/64-bit implementation, and Windows NT
    was 32-bit-only.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Aug 3 09:17:34 2025
    On Sat, 2 Aug 2025 15:33:15 -0700, Peter Flass wrote:

    I guess I'm getting DecStations and VaxStations mixed up. Maybe one of
    their problems was brand confusion.

    Wot fun.

    “VAXstation” = graphical workstation with VAX processor.

    “DECstation” = short-lived DEC machine range with MIPS processor.

    “DECserver” = dedicated terminal server running LAT protocol.

    “DECmate” = one of their 3 different PC families. This one was based around a PDP-8-compatible processor.

    “VAXmate” = a quick look at the docs indicates this was some kind of Microsoft-PC-compatible, bundled with extra DEC-specific connectivity features.

    Any others ... ?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Aug 3 09:20:37 2025
    On Sat, 02 Aug 2025 09:28:17 GMT, Anton Ertl wrote:

    In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
    instead of the actual (CISC) VAX, so there would be no additional
    ISA.

    In order to be RISC, it would have had to add registers and remove
    addressing modes from the non-load/store instructions (and replace “move” with separate “load” and “store” instructions). “No additional ISA” or
    not, it would still have broken existing code.

    Remember that VAX development started in the early-to-mid-1970s. RISC was still nothing more than a research idea at that point, which had yet to
    prove itself.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Aug 3 09:21:18 2025
    On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:

    Vobis (a German discount computer reseller) offered Alpha-based Windows
    boxes in 1993 and another model in 1997. Far too expensive for private
    users ...

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Stefan Monnier@3:633/280.2 to All on Sun Aug 3 13:10:56 2025
    Lawrence D'Oliveiro [2025-08-02 23:21:18] wrote:
    On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:
    Vobis (a German discount computer reseller) offered Alpha-based Windows
    boxes in 1993 and another model in 1997. Far too expensive for private
    users ...
    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that? IIUC, the difference between 32bit and 64bit
    (in terms of cost of designing and producing the CPU) was very small.
    MIPS happily designed their R4000 as 64bit while knowing that most of
    them would never get a chance to execute an instruction that makes use
    of the upper 32bits.


    Stefan

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Aug 3 19:14:10 2025
    On Sat, 02 Aug 2025 23:10:56 -0400, Stefan Monnier wrote:

    Lawrence D'Oliveiro [2025-08-02 23:21:18] wrote:

    On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:

    Vobis (a German discount computer reseller) offered Alpha-based
    Windows boxes in 1993 and another model in 1997. Far too expensive
    for private users ...

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that?

    Of all the major OSes for Alpha, Windows NT was the only one
    that couldn’t take advantage of the 64-bit architecture.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Mon Aug 4 00:41:14 2025
    On 8/3/25 02:14, Lawrence D'Oliveiro wrote:
    On Sat, 02 Aug 2025 23:10:56 -0400, Stefan Monnier wrote:

    Lawrence D'Oliveiro [2025-08-02 23:21:18] wrote:

    On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:

    Vobis (a German discount computer reseller) offered Alpha-based
    Windows boxes in 1993 and another model in 1997. Far too expensive
    for private users ...

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that?

    Of all the major OSes for Alpha, Windows NT was the only one
    that couldn’t take advantage of the 64-bit architecture.

    At that point they should have renamed it "Windows OT".


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Mon Aug 4 02:42:20 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    In comp.arch Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    Did the VAX 11/780 have writable microcode?

    Yes, 12 kB (2K words 96-bit each).

    So that's 12KB of fast RAM that could have been reused for making the
    cache larger in a RISC-VAX, maybe increasing its size from 2KB to
    12KB.

    Followups set to comp.arch. Change it if you think this is still
    on-topic for afc.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Mon Aug 4 02:51:10 2025
    antispam@fricas.org (Waldek Hebisch) writes:
One piece of supporting software
    was a VAX emulator IIRC called FX11: it allowed running unmodified
    VAX binaries.

    There was also a static binary translator for DecStation binaries. I
never used it, but a colleague tried to. He found that on the Prolog
    systems that he tried it with (I think it was Quintus or SICStus), it
    did not work, because that system did unusual things with the binary,
    and that did not work on the result of the binary translation. Moral
    of the story: Better use dynamic binary translation (which Apple did
    for their 68K->PowerPC transition at around the same time).

    OTOH Unix for Alpha was claimed to be pure 64-bit.

    It depends on the kind of purity you are aspiring to. After a bunch
    of renamings it was finally called Tru64 UNIX. Not Pur64, but
    Tru64:-) Before that, it was called Digital UNIX (but once DEC had
    been bought by Compaq, that was no longer appropriate), and before
    that, DEC OSF/1 AXP.

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    In addition there were some OS features for running ILP32 programs,
    similar to Linux' MAP_32BIT flag for mmap(). IIRC Netscape Navigator
    was compiled as ILP32 program (the C compiler had a flag for that),
    and needed these OS features.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Aug 4 10:04:54 2025
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.

    Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any platforms that do/did ILP64.
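
The difference is easy to see empirically; this little program prints
int=4 long=8 ptr=8 on an I32LP64 system, and int=4 long=4 ptr=8 under
an LLP64 compiler:

#include <stdio.h>

int main(void)
{
    /* the data-model names just describe these widths */
    printf("int=%zu long=%zu long long=%zu ptr=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
    return 0;
}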

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Mon Aug 4 12:07:02 2025
    On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.

    Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any platforms that do/did ILP64.

    Yeah, pretty much nothing does ILP64, and doing so would actually be a problem.

    Also, C type names:
    char : 8 bit
    short : 16 bit
    int : 32 bit
    long : 64 bit
    long long: 64 bit

If 'int' were 64-bits, then what about 16 and/or 32 bit types?
    short short?
    long short?
    ...

    Current system seems preferable.
    Well, at least in absence of maybe having the compiler specify actual fixed-size types.

    Or, say, what if there was a world where the actual types were, say:
    _Int8, _Int16, _Int32, _Int64, _Int128
    And, then, say:
    char, short, int, long, ...
    Were seen as aliases.

    Well, maybe along with __int64 and friends, but __int64 and _Int64 could
    be seen as equivalent.


    Then of course, the "stdint.h" types.
    Traditionally, these are a bunch of typedef's to the 'int' and friends.
    But, one can imagine a hypothetical world where stdint.h contained
    things like, say:
    typedef _Int32 int32_t;


    ....



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Mon Aug 4 13:39:52 2025
    On 8/3/25 19:07, BGB wrote:
    On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.
    Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
    platforms that do/did ILP64.

    Yeah, pretty much nothing does ILP64, and doing so would actually be a problem.

    Also, C type names:
    char : 8 bit
    short : 16 bit
    int : 32 bit
    long : 64 bit
    long long: 64 bit

If 'int' were 64-bits, then what about 16 and/or 32 bit types?
    short short?
    long short?
    ...

    Current system seems preferable.
    Well, at least in absence of maybe having the compiler specify actual fixed-size types.

    Or, say, what if there was a world where the actual types were, say:
    _Int8, _Int16, _Int32, _Int64, _Int128
    And, then, say:
    char, short, int, long, ...
    Were seen as aliases.

    Well, maybe along with __int64 and friends, but __int64 and _Int64 could
    be seen as equivalent.


    Then of course, the "stdint.h" types.
    Traditionally, these are a bunch of typedef's to the 'int' and friends.
    But, one can imagine a hypothetical world where stdint.h contained
    things like, say:
    typedef _Int32 int32_t;



Like PL/I, which lets you specify any precision: FIXED BINARY(31), FIXED BINARY(63), etc.

    C keeps borrowing more and more PL/I features.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Aug 4 14:50:11 2025
    On Sun, 3 Aug 2025 20:39:52 -0700, Peter Flass wrote:

    C keeps borrowing more and more PL/I features.

    Struct definitions with level numbers??

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Mon Aug 4 19:19:38 2025
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:

    On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?

As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.

Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
platforms that do/did ILP64.

Yeah, pretty much nothing does ILP64, and doing so would actually be
a problem.
    Also, C type names:
    char : 8 bit
    short : 16 bit
    int : 32 bit

Except in embedded, where 16-bit ints are not rare

    long : 64 bit

Except for the majority of the world, where long is 32 bits

    long long: 64 bit

If 'int' were 64-bits, then what about 16 and/or 32 bit types?
short short?
long short?
...

Current system seems preferable.
Well, at least in absence of maybe having the compiler specify actual
fixed-size types.

Or, say, what if there was a world where the actual types were, say:
_Int8, _Int16, _Int32, _Int64, _Int128
And, then, say:
char, short, int, long, ...
Were seen as aliases.

Actually, in our world the latest C standard (C23) has them, but the
spelling is different: _BitInt(32) and unsigned _BitInt(32).
I'm not sure if any major compiler already has them implemented. Bing
copilot says that clang does, but I don't tend to believe everything
Bing copilot says.
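
For what it's worth, a minimal test along these lines that a C23
compiler with _BitInt support should accept (the widths here are
arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned _BitInt(24) a = 1;   /* arithmetic wraps modulo 2**24 */
    a <<= 23;
    a <<= 1;                      /* now wrapped around to 0 */
    _BitInt(128) b = 1;
    b <<= 100;                    /* still fits in 128 bits */
    printf("a=%u high64(b)=%llu\n",
           (unsigned)a, (unsigned long long)(b >> 64));
    return 0;
}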

Well, maybe along with __int64 and friends, but __int64 and _Int64
could be seen as equivalent.

Then of course, the "stdint.h" types.
Traditionally, these are a bunch of typedef's to the 'int' and
friends. But, one can imagine a hypothetical world where stdint.h
contained things like, say:
typedef _Int32 int32_t;

...



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Mon Aug 4 19:35:01 2025
    On Sun, 3 Aug 2025 20:39:52 -0700
    Peter Flass <Peter@Iron-Spring.com> wrote:

    On 8/3/25 19:07, BGB wrote:
On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?

As far as I’m aware, I32LP64 is the standard across 64-bit *nix
systems.

Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
platforms that do/did ILP64.

Yeah, pretty much nothing does ILP64, and doing so would actually
be a problem.

Also, C type names:
char : 8 bit
short : 16 bit
int : 32 bit
long : 64 bit
long long: 64 bit

If 'int' were 64-bits, then what about 16 and/or 32 bit types?
short short?
long short?
...

Current system seems preferable.
Well, at least in absence of maybe having the compiler specify
actual fixed-size types.

Or, say, what if there was a world where the actual types were, say:
_Int8, _Int16, _Int32, _Int64, _Int128
And, then, say:
char, short, int, long, ...
Were seen as aliases.

Well, maybe along with __int64 and friends, but __int64 and _Int64
could be seen as equivalent.

Then of course, the "stdint.h" types.
Traditionally, these are a bunch of typedef's to the 'int' and
friends. But, one can imagine a hypothetical world where stdint.h
contained things like, say:
typedef _Int32 int32_t;

Like PL/I, which lets you specify any precision: FIXED BINARY(31),
FIXED BINARY(63), etc.

C23 does not let you specify any precision.
The implementation defines BITINT_MAXWIDTH which, according to my
understanding (I didn't read the standard), is allowed to be quite
small.
It seems that in real life BITINT_MAXWIDTH >= 128 will be supported
on all platforms that go to the trouble of implementing complete C23,
even on 32-bit hardware.
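
The limit can be queried directly, since C23 puts BITINT_MAXWIDTH in
<limits.h>; a sketch that reports it where supported:

#include <limits.h>
#include <stdio.h>

int main(void)
{
#ifdef BITINT_MAXWIDTH
    /* widest _BitInt(N) this implementation accepts */
    printf("BITINT_MAXWIDTH = %llu\n",
           (unsigned long long)BITINT_MAXWIDTH);
#else
    puts("_BitInt not supported");
#endif
    return 0;
}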

C keeps borrowing more and more PL/I features.

How can we know that the feature is borrowed from PL/I and not from one
of the other languages that had similar features?


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Mon Aug 4 19:42:44 2025
    On Sun, 03 Aug 2025 16:51:10 GMT
    anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    antispam@fricas.org (Waldek Hebisch) writes:
One piece of supporting software
    was a VAX emulator IIRC called FX11: it allowed running unmodified
    VAX binaries.

    There was also a static binary translator for DecStation binaries. I
never used it, but a colleague tried to. He found that on the Prolog
    systems that he tried it with (I think it was Quintus or SICStus), it
    did not work, because that system did unusual things with the binary,
    and that did not work on the result of the binary translation. Moral
    of the story: Better use dynamic binary translation (which Apple did
    for their 68K->PowerPC transition at around the same time).


IIRC, the x86-to-Alpha translator was dynamic. Supposedly, VAX-to-Alpha
was also dynamic.
Maybe MIPS-to-Alpha was static simply because it had much lower
priority within DEC?

    OTOH Unix for Alpha was claimed to be pure 64-bit.

    It depends on the kind of purity you are aspiring to. After a bunch
    of renamings it was finally called Tru64 UNIX. Not Pur64, but
    Tru64:-) Before that, it was called Digital UNIX (but once DEC had
    been bought by Compaq, that was no longer appropriate), and before
    that, DEC OSF/1 AXP.

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    In addition there were some OS features for running ILP32 programs,
    similar to Linux' MAP_32BIT flag for mmap(). IIRC Netscape Navigator
    was compiled as ILP32 program (the C compiler had a flag for that),
    and needed these OS features.

    - anton



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Mon Aug 4 20:32:52 2025
    On 8/4/25 2:42 AM, Michael S wrote:

    May be, MIPS-to-Alpha was static simply because it had much lower
    priority within DEC?

MIPS products came out of DECWRL (the research group
started to build Titan) and were stopgaps until
the "real" architecture (Cutler's, out of DECWest) came out.
I don't think they ever got much love from DEC corporate;
they were just done so DEC didn't completely get its
lunch eaten in the Unix workstation market.



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Mon Aug 4 22:09:32 2025
    Michael S <already5chosen@yahoo.com> writes:
    Actually, in our world the latest C standard (C23) has them, but the
    spelling is different: _BitInt(32) and unsigned _BitInt(32).
I'm not sure if any major compiler already has them implemented. Bing
copilot says that clang does, but I don't tend to believe everything
Bing copilot says.

    I asked godbolt, and tried the following program:

typedef unsigned _BitInt(65535) ump;

ump sum3(ump a, ump b, ump c)
{
    return a + b + c;
}

    and for the C setting gcc-15.1 AMD64 produces 129 lines of assembly
    language code; for C++ it complains about the syntax. For 65536 bits,
    it complains about being beyond the maximum number of 65535 bits.

For the same program with the C setting clang-20.1 produces 29547
lines of assembly language code; that's more than 28 instructions for
every 64-bit word of output, which seems excessive to me, even if you
don't use ADX instructions (which clang apparently does not); I expect
that clang will produce better code at some point in the future.
Compiling this function also takes noticeable time, and when I ask for
1000000 bits, clang does not complain about too many bits, but
godbolt's timeout strikes; I finally found out clang's limit: 8388608
bits. On clang-20.1 the C++ setting also accepts this kind of input.

    Followups set to comp.arch.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Mon Aug 4 23:42:32 2025
    Michael S <already5chosen@yahoo.com> writes:
    May be, MIPS-to-Alpha was static simply because it had much lower
    priority within DEC?

    Skimming the article on "Binary Translation" in Digital Technical
    Journal Vol. 4 No. 4, 1992 <https://dn790009.ca.archive.org/0/items/bitsavers_decdtjdtjv_19086731/dtj_v04-04_1992.pdf>,
    it seems that both VEST (VAX VMS->Alpha VMS) and mx (MIPS Ultrix ->
    Alpha OSF/1) used a hybrid approach. These binary translators took an
existing binary for one system and produced a binary for the other
    system, but included a run-time system that would do binary
    translation of run-time-generated code.

    But for the Prolog system that did not work with mx the problem was
that the binary looked different (IIRC Ultrix used a.out format, and
    Digital OSF/1 used a different binary format), so the run-time
    component of the binary translator did not help.

    What would have been needed for that is a way to run the MIPS-Ultrix
    binary as-is, with the binary translation coming in out-of-band,
    either completely at run-time, or with the static part of the
    translated code looked up based on the original binary and loaded into
    address space beyond the reach of the 32-bit MIPS architecture
    supported by Ultrix.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 00:22:14 2025
    Reply-To: slp53@pacbell.net

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:


Except for the majority of the world, where long is 32 bits


    What majority? Linux owns the server market, the
appliance market and much of the handset market (which Apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Terje Mathisen@3:633/280.2 to All on Tue Aug 5 00:46:03 2025
    Scott Lurndal wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:


Except for the majority of the world, where long is 32 bits


    What majority? Linux owns the server market, the
appliance market and much of the handset market (which Apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.

    Apple/iPhone might dominate in the US market (does it?), but in the rest
of the world Android (with Linux) is far larger. World total is 72%
    Android, 28% iOS.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 01:05:51 2025
    Reply-To: slp53@pacbell.net

    Terje Mathisen <terje.mathisen@tmsw.no> writes:
    Scott Lurndal wrote:
    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:


Except for the majority of the world, where long is 32 bits


    What majority? Linux owns the server market, the
appliance market and much of the handset market (which Apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.

    Apple/iPhone might dominate in the US market (does it?), but in the rest
of the world Android (with Linux) is far larger. World total is 72%
    Android, 28% iOS.

    Good point, thanks.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 01:07:48 2025
    On Mon, 04 Aug 2025 14:22:14 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:


Except for the majority of the world, where long is 32 bits


    What majority? Linux owns the server market, the
appliance market and much of the handset market (which Apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.

The majority of the world is embedded. The overwhelming majority of
embedded is 32-bit or narrower.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Tue Aug 5 01:32:19 2025
    On Sat, 02 Aug 2025 23:10:56 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that? IIUC, the difference between 32bit and
    64bit (in terms of cost of designing and producing the CPU) was very
    small. MIPS happily designed their R4000 as 64bit while knowing that
    most of them would never get a chance to execute an instruction that
    makes use of the upper 32bits.

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
but while I don't know the in-the-field history of heavy-duty
business/scientific computing the way some folks here do, I have not
gotten the impression that a lot of customers were commonly running up
against the 4 GB limit in the early '90s; meanwhile, the *other*
advantage - higher performance for the same MIPS on a variety of
compute-bound tasks - is being overlooked entirely, it seems.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Tue Aug 5 02:47:41 2025
    On 8/4/2025 10:32 AM, John Ames wrote:
    On Sat, 02 Aug 2025 23:10:56 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that? IIUC, the difference between 32bit and
    64bit (in terms of cost of designing and producing the CPU) was very
    small. MIPS happily designed their R4000 as 64bit while knowing that
    most of them would never get a chance to execute an instruction that
    makes use of the upper 32bits.

    This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me. Obviously that's *one* advantage,
but while I don't know the in-the-field history of heavy-duty business/scientific computing the way some folks here do, I have not gotten the impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s; meanwhile, the *other* advantage - higher performance for the same MIPS on a variety of compute-bound tasks - is
    being overlooked entirely, it seems.


    Yeah.

    Using 64-bit values mostly for data manipulation, but with a 32 bit
    address space, also makes a lot of sense.

    In my project, ATM, the main reason I went to using a 48-bit address
    space was mostly that I was also using a global address space, and 32
    bits gets cramped pretty quick. Also, 48 bits means more room for 16 tag
    bits.

    For smaller configurations, it can make sense to drop back down to 32
    bits, possibly with a 24-bit physical space if lacking a DDR RAM chip or similar.

    ....


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Tue Aug 5 02:59:57 2025
    On 8/3/2025 10:39 PM, Peter Flass wrote:
    On 8/3/25 19:07, BGB wrote:
    On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.
    Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
    platforms that do/did ILP64.

    Yeah, pretty much nothing does ILP64, and doing so would actually be a
    problem.
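
    A quick way to see which data model a given compiler targets is to
    print the type sizes (a minimal sketch; the outputs in the comment are
    the usual results, not guarantees):

        #include <stdio.h>

        int main(void) {
            /* I32LP64 (typical 64-bit *nix): int=4 long=8 ptr=8
               LLP64 (64-bit Windows):        int=4 long=4 ptr=8
               ILP64 (essentially nobody):    int=8 long=8 ptr=8 */
            printf("int=%zu long=%zu ptr=%zu\n",
                   sizeof(int), sizeof(long), sizeof(void *));
            return 0;
        }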

    Also, C type names:
    char : 8 bit
    short : 16 bit
    int : 32 bit
    long : 64 bit
    long long: 64 bit

    If 'int' were 64 bits, then what about 16- and/or 32-bit types?
    short short?
    long short?
    ...

    Current system seems preferable.
    Well, at least in absence of maybe having the compiler specify actual
    fixed-size types.

    Or, say, what if there was a world where the actual types were, say:
    _Int8, _Int16, _Int32, _Int64, _Int128
    And, then, say:
    char, short, int, long, ...
    Were seen as aliases.

    Well, maybe along with __int64 and friends, but __int64 and _Int64
    could be seen as equivalent.


    Then of course, the "stdint.h" types.
    Traditionally, these are a bunch of typedef's to the 'int' and friends.
    But, one can imagine a hypothetical world where stdint.h contained
    things like, say:
    typedef _Int32 int32_t;

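    For comparison, a typical I32LP64 <stdint.h> really is just a pile of
    typedefs along these lines (illustrative only; real headers differ in
    detail, and an LLP64 system maps int64_t to long long instead):

        typedef signed char  int8_t;
        typedef short        int16_t;
        typedef int          int32_t;
        typedef long         int64_t;   /* long long on LLP64 */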


    Like PL/I, which lets you specify any precision: FIXED BINARY(31), FIXED BINARY(63), etc.

    C keeps borrowing more and more PL/I features.


    This would be _BitInt(n) ...


    Though, despite originally making it so that power-of-2 _BitInt(n) would
    map to the corresponding types when available, I ended up later needing
    to make them distinct, to remember the exact bit-widths, and to preserve
    the expected overflow behavior for these widths.

    There is apparently a discrepancy between BGBCC and Clang when it comes
    to this type:
    BGBCC: Storage is padded to a power of 2;
    Up to 256 bits, after which it is the next multiple of 128.
    Clang: Storage is the next multiple of 1 byte.

    But, efficiently loading and storing arbitrary N-byte values is a harder
    problem than using a power-of-2 type and then ignoring or
    masking/extending the high-order bits (HOBs).

    The main harder case is store, which would need to be turned into a
    Load+Mask+Store sequence absent special ISA support.
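
    A minimal sketch of that Load+Mask+Store sequence, assuming the 24-bit
    field sits in the low bits of an aligned 32-bit word (endianness and
    atomicity ignored):

        #include <stdint.h>

        /* Store a 24-bit value without a 3-byte store instruction:
           read the containing word, merge in the new bits, write back. */
        static inline void store24(uint32_t *word, uint32_t val) {
            *word = (*word & 0xFF000000u) | (val & 0x00FFFFFFu);
        }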



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Tue Aug 5 03:12:49 2025
    On 8/4/2025 4:19 AM, Michael S wrote:
    On Sun, 3 Aug 2025 21:07:02 -0500
    BGB <cr88192@gmail.com> wrote:

    On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
    On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    As far as I’m aware, I32LP64 is the standard across 64-bit *nix
    systems.

    Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
    platforms that do/did ILP64.

    Yeah, pretty much nothing does ILP64, and doing so would actually be
    a problem.

    Also, C type names:
    char : 8 bit
    short : 16 bit
    int : 32 bit

    Except in embedded, where 16-bit ints are not rare

    long : 64 bit

    Except for majority of the world where long is 32 bit


    Possibly. This wasn't meant to address every possible use-case, though,
    but rather as a counter-argument to ILP64, where the more natural
    alternative is LP64.

    long long: 64 bit

    If 'int' were 64-bits, then what about 16 and/or 32 bit types.
    short short?
    long short?
    ...

    Current system seems preferable.
    Well, at least in absence of maybe having the compiler specify actual
    fixed-size types.

    Or, say, what if there was a world where the actual types were, say:
    _Int8, _Int16, _Int32, _Int64, _Int128
    And, then, say:
    char, short, int, long, ...
    Were seen as aliases.


    Actually, in our world the latest C standard (C23) has them, but the
    spelling is different: _BitInt(32) and unsigned _BitInt(32).
    I'm not sure if any major compiler already has them implemented. Bing
    copilot says that Clang does, but I don't tend to believe everything Bing
    copilot says.


    Essentially, _BitInt(n) semantics mean that, say, _BitInt(32) is not
    strictly equivalent to _Int32 or 'int', and _BitInt(16) is not
    equivalent to what _Int16 or 'short' would be.

    So, a range of power-of-2 integer types may still be needed.


    Well, maybe along with __int64 and friends, but __int64 and _Int64
    could be seen as equivalent.


    Then of course, the "stdint.h" types.
    Traditionally, these are a bunch of typedef's to the 'int' and
    friends. But, one can imagine a hypothetical world where stdint.h
    contained things like, say:
    typedef _Int32 int32_t;


    ...

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 03:20:33 2025
    Reply-To: slp53@pacbell.net

    John Ames <commodorejohn@gmail.com> writes:
    On Sat, 02 Aug 2025 23:10:56 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that? IIUC, the difference between 32bit and
    64bit (in terms of cost of designing and producing the CPU) was very
    small. MIPS happily designed their R4000 as 64bit while knowing that
    most of them would never get a chance to execute an instruction that
    makes use of the upper 32bits.

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s; meanwhile, the *other* advantage - higher
    performance for the same MIPS on a variety of compute-bound tasks - is
    being overlooked entirely, it seems.

    Even simple data movement (e.g. optimized memcpy) will require half
    the instructions on a 64-bit architecture.
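
    A minimal sketch of why (alignment and tail handling ignored): each
    iteration moves 8 bytes instead of 4, so the copy loop runs half as
    many times for the same amount of data.

        #include <stddef.h>
        #include <stdint.h>

        static void copy_words(uint64_t *dst, const uint64_t *src,
                               size_t nbytes) {
            for (size_t i = 0; i < nbytes / sizeof(uint64_t); i++)
                dst[i] = src[i];  /* one 8-byte move per iteration */
        }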

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Tue Aug 5 03:23:24 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Sat, 02 Aug 2025 09:28:17 GMT, Anton Ertl wrote:

    In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
    instead of the actual (CISC) VAX, so there would be no additional
    ISA.

    In order to be RISC, it would have had to add registers and remove
    addressing modes from the non-load/store instructions (and replace "move"
    with separate "load" and "store" instructions).

    Add registers: No, ARM A32 is RISC and has as many registers as VAX
    (including the misfeature of having the PC addressable as a GPR). But
    yes, I would tend towards more registers.

    Remove addressing modes: The memory-indirect addressing modes certainly
    don't occur in any RISC and add complexity, so I would not include
    them.

    Move: It does not matter how these instructions are called.

    "No additional ISA" or
    not, it would still have broken existing code.

    There was no existing VAX code before the VAX ISA was designed.

    Remember that VAX development started in the early-to-mid-1970s.

    This is exactly the point where the time machine would deliver the
    RISC-VAX ideas.

    RISC was
    still nothing more than a research idea at that point, which had yet to
    prove itself.

    Certainly, that's why I have a time-machine in my scenario that deals
    with this problem.

    The claim by John Savard was that the VAX "was a good match to the
    technology *of its time*". It was not. It may have been a good match
    for the beliefs of the time, but that's a different thing.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Tue Aug 5 04:16:45 2025
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    The claim by John Savard was that the VAX "was a good match to the
    technology *of its time*". It was not. It may have been a good match
    for the beliefs of the time, but that's a different thing.

    I concur; also, the evidence of the 801 supports that (and that
    was designed around the same time as the VAX).

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 04:59:02 2025
    Reply-To: slp53@pacbell.net

    Thomas Koenig <tkoenig@netcologne.de> writes:
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    The claim by John Savard was that the VAX "was a good match to the
    technology *of its time*". It was not. It may have been a good match
    for the beliefs of the time, but that's a different thing.

    I concur; also, the evidence of the 801 supports that (and that
    was designed around the same time as the VAX).

    Looking back at it after 50 years, hindsight is 20-20. It's
    difficult to judge the decisions made at DEC during the '70s,
    but it is easy to criticize them :-)


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Stefan Monnier@3:633/280.2 to All on Tue Aug 5 05:09:55 2025
    Scott Lurndal [2025-08-04 15:32:55] wrote:
    Michael S <already5chosen@yahoo.com> writes:
    scott@slp53.sl.home (Scott Lurndal) wrote:
    Michael S <already5chosen@yahoo.com> writes:
    BGB <cr88192@gmail.com> wrote:
    Except for majority of the world where long is 32 bit
    What majority? Linux owns the server market, the
    appliance market and much of the handset market (which apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.
    Majority of the world is embedded. Overwhelming majority of embedded is
    32-bit or narrower.
    In terms of shipped units, perhaps (although many are narrower, as you
    point out). In terms of programmers, it's a fairly small fraction that
    do embedded programming.

    Yeah, the unit of measurement is a problem.
    I wonder how it compares if you look at number of programmers paid to
    write C code (after all, we're talking about C).

    In the desktop/server/laptop/handheld world, AFAICT the market share of
    C has shrunk significantly over the years whereas I get the impression
    that it's still quite strong in the embedded space. But I don't have
    any hard data.


    Stefan

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 05:12:13 2025
    On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    The claim by John Savard was that the VAX "was a good match to the technology *of its time*". It was not. It may have been a good
    match for the beliefs of the time, but that's a different thing.


    The evidence of the 801 is that it did not deliver until more than a
    decade later. And the variant that delivered was quite different from the
    original 801.
    Actually, it can be argued that the 801 didn't deliver until more than 15
    years later. I remember the RSC from 1992H1. It was underwhelming.

    I concur; also, the evidence of the 801 supports that (and that
    was designed around the same time as the VAX).

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.

    I don't quite understand the context of this comment. Can you elaborate?


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Tue Aug 5 05:27:04 2025
    On 8/4/25 11:16 AM, Thomas Koenig wrote:

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.


    A word-oriented, 4-accumulator machine with skips, reduced to a 4-bit
    ALU to keep the cost down, vs. what came out of CMU to become the PDP-11?

    The essence of RISC really is just exposing what existed in the microcode
    engines to user-level programming, and it didn't really make sense until
    main memory systems got a lot faster.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 05:31:03 2025
    On Mon, 04 Aug 2025 15:09:55 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    Scott Lurndal [2025-08-04 15:32:55] wrote:
    Michael S <already5chosen@yahoo.com> writes:
    scott@slp53.sl.home (Scott Lurndal) wrote:
    Michael S <already5chosen@yahoo.com> writes:
    BGB <cr88192@gmail.com> wrote:
    Except for majority of the world where long is 32 bit
    What majority? Linux owns the server market, the
    appliance market and much of the handset market (which apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.
    Majority of the world is embedded. Overwhelming majority of
    embedded is 32-bit or narrower.
    In terms of shipped units, perhaps (although many are narrower, as
    you point out). In terms of programmers, it's a fairly small
    fraction that do embedded programming.

    Yeah, the unit of measurement is a problem.
    I wonder how it compares if you look at number of programmers paid to
    write C code (after all, we're talking about C).

    In the desktop/server/laptop/handheld world, AFAICT the market share
    of C has shrunk significantly over the years whereas I get the
    impression that it's still quite strong in the embedded space. But I
    don't have any hard data.


    Stefan


    Personally, [outside of Usenet and the rwt forum] I know no one except
    myself who writes C targeting user mode on "big" computers (big, in my
    definition, starts at smartphone). Myself, I am doing it more as a
    hobby and to make a point rather than out of professional need.
    Professionally, in this range I tend to use C++. Not a small part of it
    is that C++ is more familiar than C for my younger co-workers.



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Stefan Monnier@3:633/280.2 to All on Tue Aug 5 05:40:12 2025
    John Ames [2025-08-04 08:32:19] wrote:
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:
    What do you mean by that? IIUC, the difference between 32bit and
    64bit (in terms of cost of designing and producing the CPU) was very
    small. MIPS happily designed their R4000 as 64bit while knowing that
    most of them would never get a chance to execute an instruction that
    makes use of the upper 32bits.
    This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me.

    By "upper bits" I didn't mean to restrict it to the address space.
    AFAIK it would take several years before the OS and the rest of the
    tools started to support the use of instructions manipulating 64bits.
    By that time, many of those machines started to be decommissioned:
    The R4000 came out in late 1991, while the first version of Irix with
    support for the 64bit ISA on that CPU was released only in early 1996
    (there was an earlier 64bit version of Irix but only for the R8000
    processor).

    The same happened to some extent with the early amd64 machines, which
    ended up running 32bit Windows and applications compiled for the i386
    ISA. Those processors were successful mostly because they were fast at
    running i386 code (with the added marketing benefit of being "64bit
    ready"): it took 2 years for MS to release a matching OS.

    And I can't see why anyone would consider it a waste.
    AFAIK it was cheap to implement, and without it, there wouldn't have
    been the installed base of 64bit machines needed to justify investing
    into software development for that new ISA.


    Stefan

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Tue Aug 5 06:13:54 2025
    Michael S <already5chosen@yahoo.com> schrieb:
    On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.

    I don't quite understand the context of this comment. Can you elaborate?

    De Castro had had a big success with a simple load-store
    architecture, the Nova. He did that to reduce CPU complexity
    and cost, to compete with DEC and its PDP-8. (Byte addressing
    was horrible on the Nova, though).

    Now, assume that, as a time traveler wanting to kick off an early
    RISC revolution, you are not allowed to reveal that you are a time
    traveler (which would have larger effects than just a different
    computer architecture). What do you do?

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    My guess would be that, with DEC, you would have the least chance of
    convincing corporate brass of your ideas. With Data General, you
    could try appealing to the CEO's personal history of creating the
    Nova, and thus his vanity. That could work. But your own company
    might actually be the best choice, if you can get the venture
    capital funding.

    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 06:29:35 2025
    Reply-To: slp53@pacbell.net

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 04 Aug 2025 15:09:55 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    Scott Lurndal [2025-08-04 15:32:55] wrote:
    Michael S <already5chosen@yahoo.com> writes:
    scott@slp53.sl.home (Scott Lurndal) wrote:
    Michael S <already5chosen@yahoo.com> writes:
    BGB <cr88192@gmail.com> wrote:
    Except for majority of the world where long is 32 bit
    What majority? Linux owns the server market, the
    appliance market and much of the handset market (which apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.
    Majority of the world is embedded. Overwhelming majority of
    embedded is 32-bit or narrower.
    In terms of shipped units, perhaps (although many are narrower, as
    you point out). In terms of programmers, it's a fairly small
    fraction that do embedded programming.

    Yeah, the unit of measurement is a problem.
    I wonder how it compares if you look at number of programmers paid to
    write C code (after all, we're talking about C).

    In the desktop/server/laptop/handheld world, AFAICT the market share
    of C has shrunk significantly over the years whereas I get the
    impression that it's still quite strong in the embedded space. But I
    don't have any hard data.


    Stefan


    Personally, [outside of Usenet and rwt forum] I know no one except
    myself who writes C targeting user mode on "big" computers (big, in my
    definition, starts at smartphone).

    Linux developers would be a significant, if not large, pool
    of C programmers.

    Myself, I am doing it more as a
    hobby and to make a point rather than out of professional need.
    Professionally, in this range I tend to use C++. Not a small part of it
    is that C++ is more familiar than C for my younger co-workers.

    Likewise, I've been using C++ rather than C since 1989, including for large-scale operating systems and hypervisors (both running on bare metal).

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 06:54:51 2025
    On Mon, 4 Aug 2025 20:13:54 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.

    I don't quite understand the context of this comment. Can you
    elaborate?

    De Castro had had a big success with a simple load-store
    architecture, the Nova. He did that to reduce CPU complexity
    and cost, to compete with DEC and its PDP-8. (Byte addressing
    was horrible on the Nova, though).

    Now, assume that, as a time traveler wanting to kick off an early
    RISC revolution, you are not allowed to reveal that you are a time
    traveler (which would have larger effects than just a different
    computer architecture). What do you do?

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    My guess would be that, with DEC, you would have the least chance of convincing corporate brass of your ideas. With Data General, you
    could try appealing to the CEO's personal history of creating the
    Nova, and thus his vanity. That could work. But your own company
    might actually be the best choice, if you can get the venture
    capital funding.


    Why not go to somebody who has the money and interest to build a
    microprocessor, but no existing mini/mainframe/SuperC business?
    If we limit ourselves to the USA, then Moto, Intel, AMD, NatSemi...
    Maybe even AT&T? Or was AT&T still banned from making computers in
    the mid-70s?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Stephen Fuld@3:633/280.2 to All on Tue Aug 5 07:06:17 2025
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/ scientific computing the way some folks here do, I have not gotten the impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it
    initially divided the 4 GB address space into 2 GB for the OS and 2 GB
    for users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 07:08:38 2025
    On Mon, 04 Aug 2025 20:29:35 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 04 Aug 2025 15:09:55 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    Scott Lurndal [2025-08-04 15:32:55] wrote:
    Michael S <already5chosen@yahoo.com> writes:
    scott@slp53.sl.home (Scott Lurndal) wrote:
    Michael S <already5chosen@yahoo.com> writes:
    BGB <cr88192@gmail.com> wrote:
    Except for majority of the world where long is 32 bit
    What majority? Linux owns the server market, the
    appliance market and much of the handset market (which apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.
    Majority of the world is embedded. Overwhelming majority of
    embedded is 32-bit or narrower.
    In terms of shipped units, perhaps (although many are narrower,
    as you point out). In terms of programmers, it's a fairly small
    fraction that do embedded programming.

    Yeah, the unit of measurement is a problem.
    I wonder how it compares if you look at number of programmers paid
    to write C code (after all, we're talking about C).

    In the desktop/server/laptop/handheld world, AFAICT the market
    share of C has shrunk significantly over the years whereas I get
    the impression that it's still quite strong in the embedded space.
    But I don't have any hard data.


    Stefan


    Personally, [outside of Usenet and rwt forum] I know no one except
    myself who writes C targeting user mode on "big" computers (big, in
    my definitions, starts at smartphone).

    Linux developers would be a significant, if not large, pool
    of C programmers.


    According to my understanding, Linux developers *maintain* user-mode C
    programs. They very rarely start new user-mode C programs from scratch.
    The last big one I can think of was git, almost two decades ago. And
    even that happened more due to personal idiosyncrasies of its
    originator than for solid technical reasons.
    I could be wrong about it, of course.

    Myself, I am doing it more as a
    hobby and to make a point rather than out of professional need.
    Professionally, in this range I tend to use C++. Not a small part of
    it is that C++ is more familiar than C for my younger co-workers.

    Likewise, I've been using C++ rather than C since 1989, including for large-scale operating systems and hypervisors (both running on bare
    metal).

    You know my opinion about it.
    For your current project, C++ appears to be the right tool. Or, at least,
    more right than C.
    For a few of your previous projects, I am convinced that it was the wrong
    tool.
    And I know that you are convinced that I am wrong about it, so we don't
    have to repeat it.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Tue Aug 5 07:21:34 2025
    On Mon, 4 Aug 2025 14:06:17 -0700
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> wrote:

    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a
    large address space is very curious to me. Obviously that's *one* advantage, but while I don't know the in-the-field history of
    heavy-duty business/ scientific computing the way some folks here
    do, I have not gotten the impression that a lot of customers were
    commonly running up against the 4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it initially divided the 4GB address space in 2 GB for the OS, and 2GB
    for users. Some users were "running out of address space", so
    Microsoft came up with an option to reduce the OS space to 1 GB, thus allowing up to 3 GB for users. I am sure others here will know more
    details.



    IIRC, it wasn't a problem for the absolute majority of NT users up until
    approximately the turn of the millennium. Even as late as 1999, 128 MB
    was considered mid-range for a PC. 64 MB PCs were still sold and bought
    by the tens of millions.




    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 07:23:10 2025
    Reply-To: slp53@pacbell.net

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 04 Aug 2025 20:29:35 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 04 Aug 2025 15:09:55 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    Scott Lurndal [2025-08-04 15:32:55] wrote:
    Michael S <already5chosen@yahoo.com> writes:
    scott@slp53.sl.home (Scott Lurndal) wrote:
    Michael S <already5chosen@yahoo.com> writes:
    BGB <cr88192@gmail.com> wrote:
    Except for majority of the world where long is 32 bit
    What majority? Linux owns the server market, the
    appliance market and much of the handset market (which apple
    dominates with their OS). And all Unix/Linux systems have
    64-bit longs on 64-bit CPUs.
    Majority of the world is embedded. Overwhelming majority of
    embedded is 32-bit or narrower.
    In terms of shipped units, perhaps (although many are narrower,
    as you point out). In terms of programmers, it's a fairly small
    fraction that do embedded programming.

    Yeah, the unit of measurement is a problem.
    I wonder how it compares if you look at number of programmers paid
    to write C code (after all, we're talking about C).

    In the desktop/server/laptop/handheld world, AFAICT the market
    share of C has shrunk significantly over the years whereas I get
    the impression that it's still quite strong in the embedded space.
    But I don't have any hard data.


    Stefan


    Personally, [outside of Usenet and rwt forum] I know no one except
    myself who writes C targeting user mode on "big" computers (big, in
    my definitions, starts at smartphone).

    Linux developers would be a significant, if not large, pool
    of C programmers.


    According to my understanding, Linux developers *maintain* user-mode C
    programs. They very rarely start new user-mode C programs from scratch.
    The last big one I can think of was git, almost two decades ago. And
    even that happened more due to personal idiosyncrasies of its
    originator than for solid technical reasons.
    I could be wrong about it, of course.

    I meant to say 'kernel developers'. My bad.


    For a few of your previous projects, I am convinced that it was the wrong
    tool.

    Sans further details on why you consider C++ the wrong tool
    for bare-metal operating system/hypervisor development (particularly as
    the subset used for those projects, which did _not_ include any
    of the standard C++ library, was just as efficient as C but provided
    much better modularization and encapsulation), I'd just say
    that your opinion wasn't widely shared amongst those who actually
    did the work.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Tue Aug 5 07:41:59 2025
    On 8/4/25 1:54 PM, Michael S wrote:

    Why not go to somebody who has money and interest to build
    microprocessor, but no existing mini/mainframe/SuperC buisness?

    MOS technology was still in the stone age.
    High-speed CMOS didn't exist, and bipolar wasn't
    very dense and was power hungry.
    It took a lot of power to even get 8 MIPS (FPS-120B array processor)
    in 1975, and the working memory was tiny.

    HP probably had the most advanced tech with their SOS
    process, but they were building stack machines (the 3000)
    and wouldn't integrate them until the '80s.

    None of this makes any sense with the memory performance
    available at the time.

    and.. who would be the buyers?



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 07:51:47 2025
    Reply-To: slp53@pacbell.net

    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it
    initially divided the 4 GB address space into 2 GB for the OS and 2 GB
    for users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.

    AT&T SVR[34] Unix systems had the same issue on x86, as did Linux. They
    mainly used the same solution as well (give the user 3 GB of virtual
    address space).

    I believe SVR4 was also able to leverage 36-bit physical addressing to
    use more than 4 GB of DRAM, while still limiting a single process to 2
    or 3 GB of user virtual address space.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Tue Aug 5 08:18:24 2025
    On 8/4/2025 3:54 PM, Michael S wrote:
    On Mon, 4 Aug 2025 20:13:54 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Although, personally, I think Data General might have been the
    better target. Going to Edson de Castro and telling him that he
    was on the right track with the Nova from the start, and his ideas
    should be extended, might have been politically easier than going
    to DEC.

    I don't quite understand the context of this comment. Can you
    elaborate?

    De Castro had had a big success with a simple load-store
    architecture, the Nova. He did that to reduce CPU complexity
    and cost, to compete with DEC and its PDP-8. (Byte addressing
    was horrible on the Nova, though).

    Now, assume that, as a time traveler wanting to kick off an early
    RISC revolution, you are not allowed to reveal that you are a time
    traveler (which would have larger effects than just a different
    computer architecture). What do you do?

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    My guess would be that, with DEC, you would have the least chance of
    convincing corporate brass of your ideas. With Data General, you
    could try appealing to the CEO's personal history of creating the
    Nova, and thus his vanity. That could work. But your own company
    might actually be the best choice, if you can get the venture
    capital funding.


    Why not go to somebody who has money and interest to build
    microprocessor, but no existing mini/mainframe/SuperC buisness?
    If we limit ourselves to USA then Moto, Intel, AMD, NatSemi...
    May be, even AT&T ? Or was AT&T stil banned from making computers in
    the mid 70s?


    AFAIK (from what I heard about all of this):
    The ban on AT&T was the whole reason they released Unix freely.

    Then when things lifted (after the AT&T break-up), they tried to
    re-assert their control over Unix, which backfired. And, they tried to
    make and release a workstation, but by then they were competing against
    the IBM PC Clone market (and also everyone else trying to sell Unix workstations at the time), ...

    Then, while they were trying to re-consolidate Unix under their control
    and fighting with the BSD people over copyright, Linux and Microsoft came
    in and mostly ate what market they might have had.



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Aug 5 09:24:15 2025
    In comp.arch John Ames <commodorejohn@gmail.com> wrote:
    On Sat, 02 Aug 2025 23:10:56 -0400
    Stefan Monnier <monnier@iro.umontreal.ca> wrote:

    And what a waste of a 64-bit architecture, to run it in 32-bit-only
    mode ...

    What do you mean by that? IIUC, the difference between 32bit and
    64bit (in terms of cost of designing and producing the CPU) was very
    small. MIPS happily designed their R4000 as 64bit while knowing that
    most of them would never get a chance to execute an instruction that
    makes use of the upper 32bits.

    This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/ scientific computing the way some folks here do, I have not gotten the impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s; meanwhile, the *other* advantage - higher performance for the same MIPS on a variety of compute-bound tasks - is
    being overlooked entirely, it seems.

    Well, as long as an app fits into a 32-bit address space, all other
    factors being equal one can expect 10-20% better performance
    from 32-bit addresses. Due to this, customers had motivation
    to stay with 32 bits as long as possible.
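
    Part of that likely comes from cache footprint: pointer-heavy data
    roughly doubles in size under LP64. A small illustration (sizes assume
    a typical LP64 ABI with padding):

        #include <stdint.h>
        #include <stdio.h>

        struct node32 { uint32_t next; int32_t val; };       /* 8 bytes: 32-bit "pointer" (index) */
        struct node64 { struct node64 *next; int32_t val; }; /* 16 bytes under LP64 */

        int main(void) {
            printf("%zu vs %zu bytes per node\n",
                   sizeof(struct node32), sizeof(struct node64));
            return 0;
        }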

    But the matter is somewhat different for an OS vendor: once a machine
    gets more than 1 GB of memory, 64-bit addressing in the kernel avoids
    various troubles.

    Concerning applications: a server with multiple processes sharing
    memory may operate with several gigabytes while using 32-bit
    addresses for applications.

    For numeric work, 512 MB of real memory and more than 3 GB
    virtual (with swapping to disk) may give adequate performance,
    but it is quite inconvenient for a 32-bit OS to provide more than
    3 GB of address space to applications.

    Also, a heavily multithreaded application with some threads needing
    large stacks is inconvenient in a 32-bit address space.

    Of course software developers wanting to develop for 64-bit
    systems need 64-bit system interfaces.

    So, supporting 32-bit applications was natural, and one could expect
    that for some (possibly quite long) time 32-bit applications
    would be the majority. But supporting 64-bit operation was also
    important, both for customers and for the OS itself.

    BTW: AMD-64 was a special case: since 64-bit mode was bundled
    with an increased number of GPRs, with PC-relative addressing,
    and with a register-based calling convention, on average 64-bit
    code was faster than 32-bit code. And since AMD-64 was
    relatively late to the 64-bit game, there was limited motivation
    to develop a mode using 32-bit addressing and 64-bit instructions.
    It works in compilers and in Linux, but support is much worse
    than for 64-bit addressing.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Aug 5 09:38:53 2025
    In comp.arch Scott Lurndal <scott@slp53.sl.home> wrote:
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it >>initially divided the 4GB address space in 2 GB for the OS, and 2GB for >>users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.

    AT&T SVR[34] Unix systems had the same issue on x86, as did linux. They mainly used the same solution as well (give the user 3GB) of virtual
    address space.

    I believe SVR4 was also able to leverage 36-bit physical addressing to
    use more 4GB of DRAM, while still limiting a single process to 2 or 3GB
    of user virtual address space.

    IIRC Linux pretty early used 3 GB for users and 1 GB for the kernel.
    Other splits (including 2 GB + 2 GB) were available as an option.
    With PAE, Linux offered 4 GB (or maybe 3.5 GB) per process and whatever
    amount of RAM was supported by PAE, but in this mode the kernel was
    slower than the standard one.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Aug 5 09:45:21 2025
    In comp.arch Al Kossow <aek@bitsavers.org> wrote:
    On 8/2/25 1:07 AM, Waldek Hebisch wrote:

    IIUC PRISM eventually became Alpha.

    Not really. Documents for both, including
    the rare PRISM docs are on bitsavers.
    PRISM came out of Cutler's DEC West group,
    Alpha from the East Coast. I'm not aware
    of any team member overlap.

    Well, from the people's point of view they were different efforts.
    From the company's point of view there was a project to deliver a
    high-performance RISC-y machine; it finally succeeded when a
    new team did the work. I think that at least the high-level
    knowledge gained in the PRISM project was useful for Alpha.
    I would expect that some detailed work was reused, but I
    do not know how much.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Aug 5 09:52:55 2025
    In comp.arch Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    <snip>
    OTOH Unix for Alpha was claimed to be pure 64-bit.

    It depends on the kind of purity you are aspiring to. After a bunch
    of renamings it was finally called Tru64 UNIX. Not Pur64, but
    Tru64:-) Before that, it was called Digital UNIX (but once DEC had
    been bought by Compaq, that was no longer appropriate), and before
    that, DEC OSF/1 AXP.

    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    What counts are the OS interfaces. C, while playing a prominent role, is
    just one of the programming languages. While 'int' leaked into early
    system interfaces, later ones used abstract types for most things.
    So as long as C provided a 64-bit integer type (that is, long) and
    64-bit pointers, this was OK.

    And as others noticed, I32LP64 was very common.

    Anyway, given the system interfaces, one could naturally implement a
    language where the only integer type is 64-bit. That is enough
    for me to call this pure 64-bit.

    In addition there were some OS features for running ILP32 programs,
    similar to Linux' MAP_32BIT flag for mmap(). IIRC Netscape Navigator
    was compiled as ILP32 program (the C compiler had a flag for that),
    and needed these OS features.

    Again, that's not a problem for _my_ notion of purity.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Dan Cross@3:633/280.2 to All on Tue Aug 5 11:31:31 2025
    In article <2025Aug3.185110@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    [snip]
    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    I would. The definition of "purity" I usually adopted during the
    transition to 64-bit CPUs was that pointers were no longer assumed
    to be 32 bits. The catchphrase at the time was, "64-bit clean",
    which usually meant that you didn't use ints to type-pun pointers.

    Many ABIs on modern-day 64-bit machines are still I32LP64; ILP64
    is really too large in many respects.

    In addition there were some OS features for running ILP32 programs,
    similar to Linux' MAP_32BIT flag for mmap(). IIRC Netscape Navigator
    was compiled as ILP32 program (the C compiler had a flag for that),
    and needed these OS features.

    MAP_32BIT is only used on x86-64 on Linux, and was originally
    a performance hack for allocating thread stacks: apparently, it
    was cheaper to do a thread switch with a stack below the 4GiB
    barrier (a sign-extension artifact, maybe? Who knows...). It's
    no longer required for that, and there's no indication that it
    was for supporting ILP32 on a 64-bit system.
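
    For reference, requesting such a low mapping on Linux/x86-64 looks
    like this (per the mmap(2) man page, MAP_32BIT puts the mapping in the
    first 2 GiB of the address space):

        #define _GNU_SOURCE   /* MAP_32BIT is a Linux extension */
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void) {
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }
            printf("%p\n", p);  /* prints an address that fits in 32 bits */
            munmap(p, 4096);
            return 0;
        }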

    In the OS kernel, oftentimes you want to allocate physical
    address space below 4GiB for e.g. device BARs; many devices are
    either 32-bit (but have to work on 64-bit systems) or work
    better with 32-bit BARs.

    - Dan C.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:34:00 2025
    On Mon, 4 Aug 2025 08:32:19 -0700, John Ames wrote:

    This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me.

    That is basically it.

    Obviously that's *one* advantage, but while I don't know the
    in-the-field history of heavy-duty business/ scientific computing
    the way some folks here do, I have not gotten the impression that a
    lot of customers were commonly running up against the 4 GB limit in
    the early '90s ...

    By the latter 1990s, as GPUs became popular in the consumer market, the
    amount of VRAM on them kept growing, taking up more and more
    significant chunks of a 32-bit address space. So that was one of the
    drivers towards 64-bit addressing.

    ... meanwhile, the *other* advantage - higher performance for the
    same MIPS on a variety of compute-bound tasks - is being overlooked
    entirely, it seems.

    I don’t think there is one. A lot of computation involves floating point, and the floating-point formats mostly remain the same ones defined by
    IEEE-754 back in the 1980s.

    In the x86 world, there is the performance boost in the switch from the
    old register-poor 32-bit 80386 instruction set to the larger register pool available in AMD’s 64-bit extensions.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:39:14 2025
    On Mon, 4 Aug 2025 14:06:17 -0700, Stephen Fuld wrote:

    ... I recall an issue with Windows NT where it initially divided the
    4GB address space in 2 GB for the OS, and 2GB for users. Some users
    were "running out of address space", so Microsoft came up with an
    option to reduce the OS space to 1 GB, thus allowing up to 3 GB for
    users. I am sure others here will know more details.

    That would have been prone to breakage in poorly-written programs that
    were using signed instead of unsigned comparisons on memory block sizes.

    I hit an earlier version of this problem in about the mid-1980s, trying to help a user install WordStar on his IBM PC, which was one of the earliest machines to have 640K of RAM. The WordStar installer balked, saying he didn’t have enough free RAM!

    The solution: create a dummy RAM disk to bring the free memory size down
    below 512K. Then after the installation succeeded, the RAM disk could be removed.
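
    A plausible mechanism, though this is my guess rather than anything
    from the original story: DOS tools often counted free memory in
    16-byte paragraphs, and 512K is exactly 32768 paragraphs -- the point
    where a signed 16-bit count goes negative, so 640K of free RAM reads
    as "not enough".

        #include <stdio.h>

        int main(void) {
            unsigned paragraphs = 640u * 1024u / 16u; /* 40960 free paragraphs */
            short count = (short)paragraphs;          /* -24576 on two's complement */
            printf("%d\n", count);
            return 0;
        }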

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:41:15 2025
    On Mon, 4 Aug 2025 23:24:15 -0000 (UTC), Waldek Hebisch wrote:

    BTW: AMD-64 was a special case: since 64-bit mode was bundled with
    increasing number of GPR-s, with PC-relative addressing and with register-based call convention on average 64-bit code was faster than
    32-bit code. And since AMD-64 was relatively late in 64-bit game there
    was limited motivation to develop mode using 32-bit addressing and
    64-bit instructions. It works in compilers and in Linux, but support is
    much worse than for using 64-bit addressing.

    Intel was trying to promote this in the form of the “X32” ABI. The Linux kernel and some distros did include support for this. I don’t think it was very popular, and it may be extinct now.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:43:36 2025
    On Mon, 04 Aug 2025 17:23:24 GMT, Anton Ertl wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Sat, 02 Aug 2025 09:28:17 GMT, Anton Ertl wrote:

    In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
    instead of the actual (CISC) VAX, so there would be no additional
    ISA.

    In order to be RISC, it would have had to add registers and remove
    addressing modes from the non-load/store instructions (and replace
    "move" with separate "load" and "store" instructions).

    Add registers: No, ARM A32 is RISC and has as many registers as VAX ...

    It was the PDP-11 we were talking about as the starting point.
    Remember Anton’s claim is that it was unnecessary to do the complete
    redesign that was the VAX, that something could have been done that
    was more backward-compatible with the PDP-11.

    But no, I don’t think that was possible, and the above is why.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:46:03 2025
    On Mon, 4 Aug 2025 12:27:04 -0700, Al Kossow wrote:

    The essence of RISC really is just exposing what existed in the
    microcode engines to user-level programming and didn't really make
    sense until main memory systems got a lot faster.

    How do you reconcile this with the fact that the CPU-RAM speed gap is
    even wider now than it was back then?

    I would amend that to say, RISC started to make sense when fast RAM
    became cheap enough to use as a cache to bridge the gap.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:47:48 2025
    On Mon, 4 Aug 2025 20:13:54 -0000 (UTC), Thomas Koenig wrote:

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    How about d) Go talk to the man responsible for the fastest machines in
    the world around that time, i.e. Seymour Cray?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 11:53:08 2025
    On Mon, 4 Aug 2025 17:18:24 -0500, BGB wrote:

    The ban on AT&T was the whole reason they released Unix freely.

    It was never really “freely” available.

    Then when things lifted (after the AT&T break-up), they tried to
    re-assert their control over Unix, which backfired.

    They were already tightening things up from the Seventh Edition onwards -- remember, this version rescinded the permission to use the source code for classroom teaching purposes, neatly strangling the entire market for the legendary Lions Book. Which continued to spread afterwards via samizdat, nonetheless.

    And, they tried to make and release a workstation, but by then they
    were competing against the IBM PC Clone market (and also everyone
    else trying to sell Unix workstations at the time), ...

    That was a very successful market, from about the mid-1980s until the
    mid-to-latter 1990s. In spite of all the vendor lock-in and
    fragmentation, it managed to survive, I think, because of the sheer
    performance available in the RISC processors, which Microsoft tried to
    support with its new “Windows NT” OS, but was never able to get quite
    right.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 15:35:41 2025
    On Mon, 4 Aug 2025 23:52:55 -0000 (UTC), Waldek Hebisch wrote:

    And as others noticed, I32LP64 was very common.

    Still is the most common.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 15:37:22 2025
    On Mon, 4 Aug 2025 03:32:52 -0700, Al Kossow wrote:

    MIPS products came out of DECWRL (the research group started to build
    the Titan) and were stopgaps until the "real" architecture came out
    (Cutler's, out of DECWest).
    I don't think they ever got much love out of DEC corporate; they were
    just done so DEC didn't completely get its lunch eaten in the Unix
    workstation market.

    There were many in high places at DEC who didn’t like Unix at all. Dave Cutler was one of them, and I think Ken Olsen, right at the top, as well.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 15:38:08 2025
    On Mon, 4 Aug 2025 12:19:38 +0300, Michael S wrote:

    Except for majority of the world where long is 32 bit

    That only applies on Windows, as far as we can tell.

    The majority of the world is I32LP64.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From vallor@3:633/280.2 to All on Tue Aug 5 15:56:45 2025
    On Tue, 5 Aug 2025 01:41:15 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Mon, 4 Aug 2025 23:24:15 -0000 (UTC), Waldek Hebisch wrote:

    BTW: AMD-64 was a special case: since 64-bit mode was bundled with
    increasing number of GPR-s, with PC-relative addressing and with
    register-based call convention on average 64-bit code was faster than
    32-bit code. And since AMD-64 was relatively late in 64-bit game there
    was limited motivation to develop mode using 32-bit addressing and
    64-bit instructions. It works in compilers and in Linux, but support is
    much worse than for using 64-bit addressing.

    Intel was trying to promote this in the form of the “X32” ABI. The Linux kernel and some distros did include support for this. I don’t think it was very popular, and it may be extinct now.

    It's still in the Linux kernel, but off by default.

    arch/x86/Kconfig

    I went to an O'Reilly "Foo Camp" where AMD was showing off their
    new 64-bit processor. Found it fascinating, if a little over my
    head. But I did gather that the instruction set made sense for transitioning from 32-bit software, and I think Intel missed the boat with their IA-64.

    (And I have memories of when Intel started making "EM64T" processors...)

    --
    -Scott System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090Ti 24G
    OS: Linux 6.16.0 D: Mint 22.1 DE: Xfce 4.18
    NVIDIA: 575.64.05 Mem: 258G
    "Excuse me for butting in, but I'm interrupt-driven."

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Aug 5 16:46:18 2025
    On Mon, 4 Aug 2025 18:07:48 +0300, Michael S wrote:

    Majority of the world is embedded. Overwhelming majority of embedded is
    32-bit or narrower.

    Embedded CPUs are mostly ARM, MIPS, RISC-V ... all of which are available
    in 64-bit variants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Tue Aug 5 18:14:17 2025
    On 8/5/2025 1:46 AM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Aug 2025 18:07:48 +0300, Michael S wrote:

    Majority of the world is embedded. Overwhelming majority of embedded is
    32-bit or narrower.

    Embedded CPUs are mostly ARM, MIPS, RISC-V ... all of which are available
    in 64-bit variants.

    Well, along with, traditionally, 6502 and Z80, and MSP430.

    The Atmel AVR was also pretty popular for a while, though AFAIK more in
    the hobbyist space (say, more popularity due to Arduino than due to its
    use in consumer electronics). Whereas the MSP430 was fairly widespread
    in the latter (and a fairly common chip for running things like mice and keyboards).

    There were more advanced versions of the MSP430, with a 20 bit address
    space, etc. But the most readily available versions typically used a
    16-bit address space (with typically between 0.25K and 2K of RAM; and 1K
    to 48K of ROM).


    In most cases, one got C with a similar programming model; namely 'int'
    being 16 bit. Though, the Arduino platform used C++.

    I was left thinking that I had still seen a lot of K&R style C in the
    6502 and Z80 spaces, but I can't seem to confirm this.

    ....


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kerr-Mudd, John@3:633/280.2 to All on Tue Aug 5 18:25:28 2025
    :
    On Tue, 5 Aug 2025 01:39:14 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 4 Aug 2025 14:06:17 -0700, Stephen Fuld wrote:

    ... I recall an issue with Windows NT where it initially divided the
    4GB address space in 2 GB for the OS, and 2GB for users. Some users
    were "running out of address space", so Microsoft came up with an
    option to reduce the OS space to 1 GB, thus allowing up to 3 GB for
    users. I am sure others here will know more details.

    That would have been prone to breakage in poorly-written programs that
    were using signed instead of unsigned comparisons on memory block sizes.
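    A minimal sketch of that failure mode in C (hypothetical, but it shows
    the pattern): an address or size above the 2 GB line has its top bit
    set, so code that squeezes it through a signed 32-bit type sees a
    negative number and wrongly reports an error.

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint32_t addr = 0x90000000u;   /* block above the 2 GB line */
            int32_t  as_signed = (int32_t)addr;  /* two's complement assumed */

            if (as_signed < 0)             /* the buggy error check */
                puts("allocation wrongly treated as a failure");
            return 0;
        }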

    I hit an earlier version of this problem in about the mid-1980s, trying to help a user install WordStar on his IBM PC, which was one of the earliest machines to have 640K of RAM. The WordStar installer balked, saying he didn’t have enough free RAM!

    The solution: create a dummy RAM disk to bring the free memory size down below 512K. Then after the installation succeeded, the RAM disk could be removed.

    I recall the time our DOS-based install disks (network boot and re-image a
    PC from a server) failed. It was the first time we'd seen a PC with 4G (I think) of RAM; DOS was wrapping addressed memory and overwriting the
    running batch file!

    --
    Bah, and indeed Humbug.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Dis (3:633/280.2@fidonet)
  • From Lars Poulsen@3:633/280.2 to All on Tue Aug 5 22:52:38 2025
    On 2025-08-04, BGB <cr88192@gmail.com> wrote:
    AFAIK (from what I heard about all of this):
    The ban on AT&T was the whole reason they released Unix freely.

    Then when things lifted (after the AT&T break-up), they tried to
    re-assert their control over Unix, which backfired. And, they tried to
    make and release a workstation, but by then they were competing against
    the IBM PC Clone market (and also everyone else trying to sell Unix workstations at the time), ...

    Then, while trying to re-consolidate Unix under their
    control and fighting with the BSD people over copyright, etc., Linux and Microsoft came in and mostly ate what market they might have had.

    From what I saw, they did not release Unix "freely" in the way we now
    think of Free and Open Source Software. It was licensed (for no money
    except handling costs for distribution) to UNIVERSITIES, with strict
    limits on redistribution, but the cost was prohibitive for small
    businesses. That is what motivated Tanenbaum to start Minix, and
    Torvalds to build Linux.

    The 3B was an absolute dog. We had a couple at ACC, because we were
    providing device drivers or something to an ATT project for a Federal
    agency. We also had first an 11/70 and later an 11/780 running 4BSD.
    The BSD systems were pretty snappy. And we had an 11/780 for the
    business side, running VMS, and a VMS 11/750 for engineering, which was
    not as well liked as the BSD until we got the Wollongong overlay so we
    could network it to the BSD system.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Aug 5 23:58:12 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 4 Aug 2025 20:13:54 -0000 (UTC), Thomas Koenig wrote:

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    How about d) Go talk to the man responsible for the fastest machines in
    the world around that time, i.e. Seymour Cray?

    I did speak with him, once, when he was visiting my
    godfather in Chippewa Falls. I was rather young at the time
    and had no clue who he was until years later, sadly.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Terje Mathisen@3:633/280.2 to All on Wed Aug 6 01:24:34 2025
    Stephen Fuld wrote:
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it
    initially divided the 4GB address space in 2 GB for the OS, and 2GB for
    users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.

    Any program written to Microsoft/Windows spec would work transparently
    with a 3:1 split; the problem was all the programs ported from Unix
    which assumed that any negative return value was a failure code.

    In effect, the program had to promise the OS that it would behave
    correctly before it was allowed to allocate more than 2GB of memory.
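    (If memory serves, that promise is the "large address aware" bit in the
    PE header; a minimal sketch with the MSVC toolchain, as an illustration
    rather than a recipe:

        cl program.c /link /LARGEADDRESSAWARE

    or editbin /LARGEADDRESSAWARE program.exe for an existing binary.)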

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Aug 6 01:41:29 2025
    Reply-To: slp53@pacbell.net

    Terje Mathisen <terje.mathisen@tmsw.no> writes:
    Stephen Fuld wrote:
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it
    initially divided the 4GB address space in 2 GB for the OS, and 2GB for
    users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.

    Any program written to Microsoft/Windows spec would work transparently
    with a 3:1 split; the problem was all the programs ported from Unix
    which assumed that any negative return value was a failure code.

    The only interfaces that I recall this being an issue for were
    mmap(2) and lseek(2). The latter was really related to maximum
    file size (although it applied to /dev/[k]mem and /proc/<pid>/mem
    as well). The former was handled by the standard specifying
    MAP_FAILED as the return value.

    That said, Unix generally defined -1 as the return value for all
    other system calls, and code that checked for "< 0" instead of
    -1 when calling a standard library function or system call was fundamentally broken.
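    To make the contrast concrete, a minimal sketch of the checks the
    standards actually specify (the file name is just an example):

        #include <stdio.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/mman.h>

        int main(void) {
            int fd = open("/etc/hostname", O_RDONLY);
            if (fd == -1) { perror("open"); return 1; }          /* -1, not "< 0" */

            off_t len = lseek(fd, 0, SEEK_END);
            if (len == (off_t)-1) { perror("lseek"); return 1; } /* (off_t)-1 */

            void *p = mmap(NULL, (size_t)len, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }   /* never "p < 0" */

            printf("mapped %lld bytes at %p\n", (long long)len, p);
            munmap(p, (size_t)len);
            close(fd);
            return 0;
        }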


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Dan Cross@3:633/280.2 to All on Wed Aug 6 03:21:19 2025
    In article <FWnkQ.830336$QtA1.728878@fx16.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    In article <2025Aug3.185110@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    [snip]
    The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
    setup, so can you really call it pure?

    In the OS kernel, often times you want to allocate physical
    address space below 4GiB for e.g. device BARs; many devices are
    either 32-bit (but have to work on 64-bit systems) or work
    better with 32-bit BARs.

    Indeed. Modern PCI controllers tend to support remapping
    a 64-bit physical address in the hardware to support devices
    that only advertise 32-bit BARs[*]. The firmware (e.g. UEFI
    or BIOS) will set up the remapping registers and provide the
    address of the 64-bit aperture to the kernel via device tree
    or ACPI tables.

    [*] AHCI is the typical example, which uses BAR5.

    Yes; AHCI is an odd duck. They probably should have chosen BAR4
    for the ABAR and reserved 5; then they could have extended it to
    64-bit using the (BAR4, BAR5) pair.

    With the IOHC we have a lot more flexibility than we did
    previously.

    - Dan C.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Brian G. Lucas@3:633/280.2 to All on Wed Aug 6 04:04:39 2025
    On 8/4/25 8:53 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Aug 2025 17:18:24 -0500, BGB wrote:

    The ban on AT&T was the whole reason they released Unix freely.

    It was never really “freely” available.
    I'll say. We had to pay $20,000 for it in 1975. That was a lot
    of money for software on a mini-computer.


    Then when things lifted (after the AT&T break-up), they tried to
    re-assert their control over Unix, which backfired.

    They were already tightening things up from the Seventh Edition onwards -- remember, this version rescinded the permission to use the source code for classroom teaching purposes, neatly strangling the entire market for the legendary Lions Book, which continued to spread afterwards via samizdat nonetheless.

    And, they tried to make and release a workstation, but by then they
    were competing against the IBM PC Clone market (and also everyone
    else trying to sell Unix workstations at the time), ...

    That was a very successful market, from about the mid-1980s until the mid-to-latter 1990s. In spite of all the vendor lock-in and fragmentation, it managed to survive, I think, because of the sheer performance available in the RISC processors, which Microsoft tried to support with its new
    “Windows NT” OS, but was never able to get quite right.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Stephen Fuld@3:633/280.2 to All on Wed Aug 6 04:52:38 2025
    On 8/4/2025 11:46 PM, Lawrence D'Oliveiro wrote:
    On Mon, 4 Aug 2025 18:07:48 +0300, Michael S wrote:

    Majority of the world is embedded. Overwhelming majority of embedded is
    32-bit or narrower.

    Embedded CPUs are mostly ARM, MIPS, RISC-V ... all of which are available
    in 64-bit variants.

    I recently looked this up and it confirmed my earlier information. Unfortunately, I can't find the reference. :-(

    The plurality of embedded systems are 8-bit processors - about 40
    percent of the total. They are largely used for things like industrial automation, Internet of Things, SCADA, kitchen appliances, etc. 16-bit parts
    account for a small and shrinking percentage. 32-bit is next (IIRC
    ~30-35%), but 64-bit is the fastest growing. Perhaps surprisingly, there
    is still a small market for 4-bit processors for things like TV remote controls, where battery life is more important than the highest performance.

    There is far more to the embedded market than phones and servers.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Wed Aug 6 07:01:20 2025
    Michael S <already5chosen@yahoo.com> schrieb:
    On Mon, 4 Aug 2025 20:13:54 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    My guess would be that, with DEC, you would have the least chance of
    convincing corporate brass of your ideas. With Data General, you
    could try appealing to the CEO's personal history of creating the
    Nova, and thus his vanity. That could work. But your own company
    might actually be the best choice, if you can get the venture
    capital funding.


    Why not go to somebody who has money and interest to build
    microprocessor, but no existing mini/mainframe/SuperC buisness?
    If we limit ourselves to USA then Moto, Intel, AMD, NatSemi...
    May be, even AT&T ? Or was AT&T stil banned from making computers in
    the mid 70s?

    To be efficient, a RISC needs a full-width (presumably 32 bit)
    external data bus, plus a separate address bus, which should at
    least be 26 bits, better 32. A random ARM CPU I looked at at
    bitsavers had 84 pins, which sounds reasonable.

    Building an ARM-like instead of a 68000 would have been feasible,
    but the resulting systems would have been more expensive (the
    68000 had 64 pins).

    So... a strategy could have been to establish the concept with
    minicomputers, to make money (the VAX sold big) and then move
    aggressively towards microprocessors, trying the disruptive move
    towards workstations within the same company (which would be HARD).

    As for the PC - a scaled-down, cheap, compatible, multi-cycle per
    instruction microprocessor could have worked for that market,
    but it is entirely unclear to me what this would / could
    have done to the PC market, if IBM could have been prevented
    from gaining such market dominance.

    A bit like the /360 strategy, offering a wide range of machines
    (or CPUs and systems) with different performance.

    Might have worked, might have ended as a footnote in the
    minicomputer history. As with all pieces of alternate
    history, we'll never know.

    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Aug 6 10:49:21 2025
    On Tue, 5 Aug 2025 17:24:34 +0200, Terje Mathisen wrote:

    ... the problem was all the programs ported from unix which assumed
    that any negative return value was a failure code.

    If the POSIX API spec says a negative return for a particular call is an error, then a negative return for that particular call is an error.

    I can’t imagine this kind of thing blithely being carried over to any non- POSIX API calls.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Aug 6 10:53:32 2025
    On Tue, 5 Aug 2025 12:52:38 -0000 (UTC), Lars Poulsen wrote:

    The 3B was an absolute dog. We had a couple at ACC, because we were
    providing device drivers or something to an ATT project for a Federal
    agency.

    Weren’t they designed specifically for Telco use? I remember a lecturer telling us they were capable of five-9s uptime or something of that order.

    We also had first an 11/70 and later an 11/780 running 4BSD. The
    BSD systems were pretty snappy.

    The BSD folks created FFS, the Fast File System, which was able to get up
    to something like 40% of the theoretical bandwidth of the hard drive,
    which was a big advance on what came before.

    The UFS (or various flavours thereof) that BSDs use today is/are still essentially a direct descendant of that original filesystem work.

    And we had an 11/780 for the business side, running VMS, And a VMS
    11/750 for engineering, which was not as well liked as the BSD until
    we got the Wollongong overlay so we could network it to the BSD
    system.

    Did the users do all their work via SET HOST? ;)

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Aug 6 10:59:07 2025
    On Tue, 5 Aug 2025 21:01:20 -0000 (UTC), Thomas Koenig wrote:

    So... a strategy could have been to establish the concept with
    minicomputers, to make money (the VAX sold big) and then move
    aggressively towards microprocessors, trying the disruptive move towards workstations within the same company (which would be HARD).

    None of the companies which tried to move in that direction were
    successful. The mass micro market had much higher volumes and lower
    margins, and those accustomed to lower-volume, higher-margin operation
    simply couldn’t adapt.

    As for the PC - a scaled-down, cheap, compatible, multi-cycle per
    instruction microprocessor could have worked for that market,
    but it is entirely unclear to me what this would / could have done to
    the PC market, if IBM could have been prevented from gaining such market dominance.

    IBM had massive marketing clout in the mainframe market. I think that was
    the basis on which customers gravitated to their products. And remember,
    the IBM PC was essentially a skunkworks project that totally went against
    the entire IBM ethos. Internally, it was seen as a one-off mistake that
    they determined never to repeat. Hence the PS/2 range.

    DEC was bigger in the minicomputer market. If DEC could have offered an open-standard machine, that could have offered serious competition to IBM.
    But what OS would they have used? They were still dominated by Unix-haters then.

    A bit like the /360 strategy, offering a wide range of machines (or CPUs
    and systems) with different performance.

    That strategy was radical in 1964, less so by the 1970s and 1980s. DEC,
    for example, offered entire ranges of machines in each of its various minicomputer families.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Wed Aug 6 13:15:11 2025
    On 8/5/25 17:59, Lawrence D'Oliveiro wrote:
    On Tue, 5 Aug 2025 21:01:20 -0000 (UTC), Thomas Koenig wrote:

    So... a strategy could have been to establish the concept with
    minicomputers, to make money (the VAX sold big) and then move
    aggressively towards microprocessors, trying the disruptive move towards
    workstations within the same company (which would be HARD).

    None of the companies which tried to move in that direction were
    successful. The mass micro market had much higher volumes and lower
    margins, and those accustomed to lower-volume, higher-margin operation
    simply couldn’t adapt.

    The support issues alone were killers. Think about the
    Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
    five-page flimsy you got with a micro. The customers were willing to
    accept cr*p from a small startup, but wouldn't put up with it from IBM
    or DEC.


    As for the PC - a scaled-down, cheap, compatible, multi-cycle per
    instruction microprocessor could have worked for that market,
    but it is entirely unclear to me what this would / could have done to
    the PC market, if IBM could have been prevented from gaining such market
    dominance.

    IBM had massive marketing clout in the mainframe market. I think that was
    the basis on which customers gravitated to their products. And remember,
    the IBM PC was essentially a skunkworks project that totally went against
    the entire IBM ethos. Internally, it was seen as a one-off mistake that
    they determined never to repeat. Hence the PS/2 range.

    DEC was bigger in the minicomputer market. If DEC could have offered an open-standard machine, that could have offered serious competition to IBM. But what OS would they have used? They were still dominated by Unix-haters then.

    VMS was a heckuva good OS.


    A bit like the /360 strategy, offering a wide range of machines (or CPUs
    and systems) with different performance.

    That strategy was radical in 1964, less so by the 1970s and 1980s. DEC,
    for example, offered entire ranges of machines in each of its various minicomputer families.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Wed Aug 6 15:37:32 2025
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> schrieb:

    The plurality of embedded systems are 8 bit processors - about 40
    percent of the total. They are largely used for things like industrial automation, Internet of Things, SCADA, kitchen appliances, etc.

    I believe heart pacemakers run on a 6502 (well, 65C02).

    16-bit parts
    account for a small and shrinking percentage. 32-bit is next (IIRC ~30-35%), but 64-bit is the fastest growing. Perhaps surprisingly, there
    is still a small market for 4-bit processors for things like TV remote controls, where battery life is more important than the highest performance.

    There is far more to the embedded market than phones and servers.

    Also, the processors which run in earphones etc...

    Does anybody have an estimate how many CPUs humanity has made
    so far?

    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Wed Aug 6 15:50:11 2025
    Peter Flass <Peter@Iron-Spring.com> schrieb:

    The support issues alone were killers. Think about the
    Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the five-page flimsy you got with a micro. The customers were willing to
    accept cr*p from a small startup, but wouldn't put up with it from IBM
    or DEC.

    Using UNIX faced stiff competition from AT&T's internal IT people,
    who wanted to run DEC's operating systems on all PDP-11s within
    the company (basically, they wanted to kill UNIX). They pointed
    towards the large amount of documentation that DEC provided, compared
    to the small amount for UNIX, as proof of superiority. The UNIX people
    saw it differently...

    But the _real_ killer application for UNIX wasn't writing patents,
    it was phototypesetting speeches for the CEO of AT&T, who, for
    reasons of vanity, did not want to wear glasses; it was possible
    to scale the output of the phototypesetter so he would be able
    to read them.

    After somebody pointed out that having confidential speeches on
    one of the most well-known machines in the world, where loads of
    people had dial-up access, was not a good idea, his secretary got
    her own PDP-11 for that.

    And with support from that high up, the project flourished.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Aug 6 16:20:57 2025
    On Wed, 6 Aug 2025 05:37:32 -0000 (UTC), Thomas Koenig wrote:

    Does anybody have an estimate how many CPUs humanity has made so far?

    More ARM chips are made each year than the entire population of the Earth.

    I think RISC-V has also achieved that status.

    Where are they all going??

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Aug 6 17:28:52 2025
    On Wed, 6 Aug 2025 05:50:11 -0000 (UTC), Thomas Koenig wrote:

    Using UNIX faced stiff competition from AT&T's internal IT people, who
    wanted to run DEC's operating systems on all PDP-11 within the company (basically, they wanted to kill UNIX).

    But because AT&T controlled Unix, they were able to mould it like putty to their own uses. E.g. look at the MERT project which supported real-time
    tasks (as needed in telephone exchanges) besides conventional Unix ones.
    No way they could do this with an outside proprietary system, like those
    from DEC.

    AT&T also created its own hardware (the 3B range) to complement the
    software in serving those high-availability needs.

    But the _real_ killer application for UNIX wasn't writing patents, it
    was phototypesetting speeches for the CEO of AT&T, who, for reasons of vanity, did not want to wear glasses, and it was possible to scale the
    output of the phototoypesetter so he would be able to read them.

    Heck, no. The biggest use for the Unix documentation tools was in the
    legal department, writing up patent applications. troff was just about the only software around that could do automatic line-numbering, which was
    crucial for this purpose.
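    (For reference, the troff feature in question is the .nm request; a
    minimal sketch:

        .nm 1
        Every output line after this request is numbered in the margin.
        .nm

    A bare .nm turns numbering off again.)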

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 6 20:24:49 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Of all the major OSes for Alpha, Windows NT was the only one
    that couldn’t take advantage of the 64-bit architecture.

    Actually, Windows took good advantage of the 64-bit architecture:
    "64-bit Windows was initially developed on the Alpha AXP." <https://learn.microsoft.com/en-us/previous-versions/technet-magazine/cc718978(v=msdn.10)>

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Dan Cross@3:633/280.2 to All on Wed Aug 6 20:48:51 2025
    In article <106uqej$36gll$3@dont-email.me>,
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Peter Flass <Peter@Iron-Spring.com> schrieb:

    The support issues alone were killers. Think about the
    Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
    five-page flimsy you got with a micro. The customers were willing to
    accept cr*p from a small startup, but wouldn't put up with it from IBM
    or DEC.

    Using UNIX faced stiff competition from AT&T's internal IT people,
    who wanted to run DEC's operating systems on all PDP-11s within
    the company (basically, they wanted to kill UNIX). They pointed
    towards the large amount of documentation that DEC provided, compared
    to the small amount for UNIX, as proof of superiority. The UNIX people
    saw it differently...

    I've never heard this before, and I do not believe that it is
    true. Do you have a source?

    Bell Telephone's computer center was basically an IBM shop
    before Unix was written, having written BESYS for the IBM 704,
    for instance. They made investments in GE machines around the
    time of the Multics project (e.g., they had a GE 645 and at
    least one 635). The PDP-11 used for Unix was so new that they
    had to wait a few weeks for its disk to arrive.

    Unix escaped out of research, and into the larger Bell System,
    via the legal department, as has been retold many times. It
    spread widely internally after that. After divestiture, when
    AT&T was freed to be able to compete in the computer industry,
    it was seen as a strategic asset.

    But the _real_ killer application for UNIX wasn't writing patents,
    it was phototypesetting speeches for the CEO of AT&T, who, for
    reasons of vanity, did not want to wear glasses; it was possible
    to scale the output of the phototypesetter so he would be able
    to read them.

    After somebody pointed out that having confidential speeches on
    one of the most well-known machines in the world, where loads of
    people had dial-up access, was not a good idea, his secretary got
    her own PDP-11 for that.

    And with support from that high up, the project flourished.

    While it is true that Charlie Brown's office got a Unix system
    of their own to run troff because its output scaled to large
    sizes, the speeches weren't the data they were worried about
    protecting: those were records from AT&T board meetings.

    At the time, the research PDP-11 used for Unix at Bell Labs was
    not one of the, "most well-known machines in the world, where
    loads of people had dial-up access" in any sense; in the grand
    scheme of things, it was pretty obscure, and had a few dozen
    users. But it was a machine where most users had "root" access,
    and it was agreed that these documents shouldn't be on the
    research machine out of concern for confidentiality.

    - Dan C.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 6 20:32:39 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
    Cray-1 and successors implemented, as far as I can determine

    type bits
    char 8
    short int 64
    int 64
    long int 64
    pointer 64

    ILP64 for Cray is documented in <https://en.cppreference.com/w/c/language/arithmetic_types.html>. For
    short int, I don't have a direct reference, only the statement

    |Firstly there was the word size, one rather large size fitted all,
    |integers and floats were represented in 64 bits

    <https://cray-history.net/faq-1-cray-supercomputer-families/faq-3/>

    For the 8-bit characters I found a reference (maybe somewhere else in
    that document), but I do not find it at the moment.

    Followups set to comp.arch.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 6 21:05:30 2025
    BGB <cr88192@gmail.com> writes:
    If 'int' were 64-bits, then what about 16 and/or 32 bit types.
    short short?
    long short?

    Of course int16_t uint16_t int32_t uint32_t

    On what keywords should these types be based? That's up to the
    implementor. In C23 one could

    typedef signed _BitInt(16) int16_t;

    etc. Around 1990, one would have just followed the example of "long
    long" of accumulating several modifiers. I would go for 16-bit
    "short" and 32-bit "long short".

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Aug 6 23:48:17 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 5 Aug 2025 17:24:34 +0200, Terje Mathisen wrote:

    ... the problem was all the programs ported from unix which assumed
    that any negative return value was a failure code.

    If the POSIX API spec says a negative return for a particular call is an
    error, then a negative return for that particular call is an error.

    Please find a single POSIX API that says a negative return is an error.

    You won't have much success. POSIX explicitly states in most
    cases that the API returns -1 on error (mmap returns MAP_FAILED,
    which happens to be -1 on most implementations; regardless a
    POSIX application _must_ check for MAP_FAILED, not a negative
    return value).

    More misinformation from LDO.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Thu Aug 7 01:28:03 2025
    On Wed, 6 Aug 2025 00:59:07 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    DEC was bigger in the minicomputer market. If DEC could have offered
    an open-standard machine, that could have offered serious competition
    to IBM. But what OS would they have used? They were still dominated
    by Unix-haters then.

    DEC had plenty of experience in small-system single-user OSes by then;
    their bigger challenge would've been picking one. (CP/M owes a lot to
    the DEC lineage, although it dispenses with some of the more tedious mainframe-isms - e.g. the RUN [program] [parameters] syntax vs. just
    treating executable files on disk as commands in themselves.)


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 7 00:00:56 2025
    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:

    The claim by John Savard was that the VAX "was a good match to the
    technology *of its time*". It was not. It may have been a good
    match for the beliefs of the time, but that's a different thing.


    The evidence of the 801 is that it did not deliver until more than a decade
    later. And the variant that delivered was quite different from the original
    801.
    Actually, it can be argued that the 801 didn't deliver until more than 15
    years later.

    Maybe for IBM. IBM had its successful S/370 business, and no real
    need for the IBM 801 after the telephone switch project for which it
    was originally developed had been canceled, so they were in no hurry to productize it. <https://en.wikipedia.org/wiki/IBM_ROMP> says:

    |The architectural work on the ROMP began in late spring of 1977, as a |spin-off of IBM Research's 801 RISC processor (hence the "Research"
    |in the acronym). Most of the architectural changes were for cost
    |reduction, such as adding 16-bit instructions for
    |byte-efficiency. [...]
    |
    |The first chips were ready in early 1981 [...] ROMP first appeared in
    |a commercial product as the processor for the IBM RT PC workstation,
    |which was introduced in 1986. To provide examples for RT PC
    |production, volume production of the ROMP and its MMU began in
    |1985. The delay between the completion of the ROMP design, and
    |introduction of the RT PC was caused by overly ambitious software
    |plans for the RT PC and its operating system (OS).

    If IBM had been in a hurry to introduce ROMP, they would have had a
    contingency plan for the RT PC system software.

    For comparison:

    HPPA: "In early 1982, work on the Precision Architecture began at HP Laboratories, defining the instruction set and virtual memory
    system. Development of the first TTL implementation started in April
    1983. With simulation of the processor having completed in 1983, a
    final processor design was delivered to software developers in July
    1984. Systems prototyping followed, with "lab prototypes" being
    produced in 1985 and product prototypes in 1986. The first processors
    were introduced in products during 1986, with the first HP 9000 Series
    840 units shipping in November of that year." <https://en.wikipedia.org/wiki/PA-RISC>

    MIPS: Inspired by IBM 801, Stanford MIPS research project 1981-1984,
    1984 MIPS Inc, R2000 and R2010 (FP) introduced May 1986 (12.5MHz), and according to
    <https://en.wikipedia.org/wiki/MIPS_Computer_Systems#History> MIPS
    delivered a workstation in the same year.

    SPARC: Berkeley RISC research project between 1980 and 1984; <https://en.wikipedia.org/wiki/Berkeley_RISC> does not mention the IBM
    801 as inspiration, but a 1978 paper by Tanenbaum. Samples for RISC-I
    in May 1982 (but could only run at 0.5MHz). No date for the
    completion of RISC-II, but given that the research project ended in
    1984, it was probably at that time. Sun developed Berkeley RISC into
    SPARC, and the first SPARC machine, the Sun-4/260 appeared in July
    1987 with a 16.67MHz processor.

    ARM: Inspired by Berkeley RISC, "Acorn initiated its RISC research
    project in October 1983" <https://en.wikipedia.org/wiki/Acorn_Computers#New_RISC_architecture>
    "The first samples of ARM silicon worked properly when first received
    and tested on 26 April 1985. Known as ARM1, these versions ran at 6
    MHz.[...] late 1986 introduction of the ARM2 design running at 8 MHz
    [...] Acorn Archimedes personal computer models A305, A310, and A440,
    launched on the 6th June 1987." <https://en.wikipedia.org/wiki/ARM_architecture_family#History> Note
    that the Acorn people originally were not computer architects or
    circuit designers. ARM1 and ARM2 did not include an MMU, cache
    controller, or FPU, however.

    There are examples of Motorola (88000, 1988), Intel (i960, 1988), IBM
    (RS/6000, 1990), and DEC (Alpha, 1992) which had successful
    established architectures, and that caused the problem of how to place
    the RISC architecture in the market, and a certain lack of urgency.
    Read up on the individual architectures and their predecessors to
    learn about the individual causes for delays (there's not much in
    Wikipedia about the development of the 88000, however).

    HP might have been in the same camp, but apparently someone high up at
    HP decided to replace all their existing architectures with RISC ASAP,
    and they succeeded.

    In any case, RISCs delivered, starting in 1986. There is no reason
    they could not have delivered earlier.


    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 7 02:21:51 2025
    Al Kossow <aek@bitsavers.org> writes:
    [RISC] didn't really make sense until main
    memory systems got a lot faster.

    The memory system of the VAX 11/780 was plenty fast for RISC to make
    sense:

    Cache cycle time: 200ns
    Memory cycle time: 600ns
    Average memory access time: 290ns
    Average VAX instruction execution time: 2000ns

    If we assume 1.5 RISC instructions per average VAX instruction, and a
    RISC CPI of 2 cycles (400ns: the 290ns plus extra time for data memory
    accesses and branches), the equivalent of a VAX instruction takes
    600ns, more than 3 times as fast as the actual VAX.
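    Spelling the arithmetic out: 1.5 x 2 cycles x 200ns/cycle = 600ns per
    VAX-instruction equivalent, against the 2000ns measured for the actual
    VAX, a speedup of roughly 2000/600 = 3.3.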

    Followups to comp.arch.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Thu Aug 7 02:35:23 2025
    Dan Cross <cross@spitfire.i.gajendra.net> schrieb:
    In article <106uqej$36gll$3@dont-email.me>,
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Peter Flass <Peter@Iron-Spring.com> schrieb:

    The support issues alone were killers. Think about the
    Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
    five-page flimsy you got with a micro. The customers were willing to
    accept cr*p from a small startup, but wouldn't put up with it from IBM
    or DEC.

    Using UNIX faced stiff competition from AT&T's internal IT people,
    who wanted to run DEC's operating systems on all PDP-11s within
    the company (basically, they wanted to kill UNIX). They pointed
    towards the large amount of documentation that DEC provided, compared
    to the small amount for UNIX, as proof of superiority. The UNIX people
    saw it differently...

    I've never heard this before, and I do not believe that it is
    true. Do you have a source?

    Hmm... I _think_ it was on a talk given by the UNIX people,
    but I may be misremembering.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 7 02:34:55 2025
    Stefan Monnier <monnier@iro.umontreal.ca> writes:
    The same happened to some extent with the early amd64 machines, which
    ended up running 32bit Windows and applications compiled for the i386
    ISA. Those processors were successful mostly because they were fast at
    running i386 code (with the added marketing benefit of being "64bit
    ready"): it took 2 years for MS to release a matching OS.

    Apr 2003: Opteron launch
    Sep 2003: Athlon 64 launch
    Oct 2003 (IIRC): I buy an Athlon 64
    Nov 2003: Fedora Core 1 released for IA-32, X86-64, PowerPC

    I installed Fedora Core 1 on my Athlon64 box in early 2004.

    Why wait for MS?

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Thu Aug 7 03:12:32 2025
    On 8/6/2025 6:05 AM, Anton Ertl wrote:
    BGB <cr88192@gmail.com> writes:
    If 'int' were 64-bits, then what about 16 and/or 32 bit types.
    short short?
    long short?

    Of course int16_t uint16_t int32_t uint32_t


    Well, assuming a post C99 world.


    On what keywords should these types be based? That's up to the
    implementor. In C23 one could

    typedef signed _BitInt(16) int16_t


    Possible, though one can realize that _BitInt(16) is not equivalent to a normal 16-bit integer.

    _BitInt(16) sa, sb;
    _BitInt(32) lc;
    sa = 0x5678;   /* 22136 */
    sb = 0x789A;   /* 30874 */
    lc = sa + sb;  /* the addition is performed at 16-bit width */

    Would give:
    0xFFFFCF12
    Rather than 0xCF12 (as would be expected with 'short' or similar).

    Because _BitInt(16) would not auto-promote to int before the addition, but
    rather would produce a _BitInt(16) result (0xCF12, negative as a 16-bit
    value) which is then widened to 32 bits via sign extension.


    etc. Around 1990, one would have just followed the example of "long
    long" of accumulating several modifiers. I would go for 16-bit
    "short" and 32-bit "long short".


    OK.

    Apparently at least some went for "__int32" instead.


    - anton


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Thu Aug 7 03:20:18 2025
    On 8/6/25 7:00 AM, Anton Ertl wrote:

    In any case, RISCs delivered, starting in 1986.

    http://bitsavers.org/pdf/ridge/Ridge_Hardware_Reference_Manual_Aug82.pdf



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Thu Aug 7 03:25:25 2025
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine, ...

    The IBM 7030 STRETCH was the first 64 bit machine, shipped in 1961,
    but I would be surprised if anyone had written a C compiler for it.

    It was bit addressable but memories in those days were so small that a full bit address was only 24 bits. So if I were writing a C compiler, pointers and ints would be 32 bits, char 8 bits, long 64 bits.

    (There is a thing called STRETCH C Compiler but it's completely unrelated.)
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 7 02:47:39 2025
    Thomas Koenig <tkoenig@netcologne.de> writes:
    De Castro had had a big success with a simple load-store
    architecture, the Nova. He did that to reduce CPU complexity
    and cost, to compete with DEC and its PDP-8. (Byte addressing
    was horrible on the Nova, though).

    The PDP-8, and its 16-bit followup, the Nova, may be load/store, but
    neither is a register machine nor byte-addressed, while the PDP-11 is,
    and the RISC-VAX would be, too.

    Now, assume that, as a time traveler wanting to kick off an early
    RISC revolution, you are not allowed to reveal that you are a time
    traveler (which would have larger effects than just a different
    computer architecture). What do you do?

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    Even if I am allowed to reveal that I am a time traveler, that may not
    help; how would I prove it?

    Yes, convincing people in the mid-1970s to bet the company on RISC is
    a hard sell; that's why I asked for "a magic wand that would convince the
    DEC management and workforce that I know how to design their next
    architecture, and how to compile for it" in <2025Mar1.125817@mips.complang.tuwien.ac.at>.

    Some arguments that might help:

    Complexity in CISC and how it breeds complexity elsewhere; e.g., the interaction of having more than one data memory access per
    instruction, virtual memory, and precise exceptions.

    How the CDC 6600 achieved performance (pipelining) and how non-complex
    its instructions are.

    I guess I would read through RISC-vs-CISC literature before entering
    the time machine in order to have some additional arguments.


    Concerning your three options, I think it will be a problem in any
    case. Data General's first bet was on FHP, a microcoded machine with user-writeable microcode, so maybe even more in the wrong direction
    than VAX; I can imagine a high-performance OoO VAX implementation, but
    for an architecture with exposed microcode like FHP an OoO
    implementation would probably be pretty challenging. The backup
    project that eventually came through was also a CISC.

    Concerning founding one's own company, one would have to convince
    venture capital, and then run the RISC of being bought by one of the
    big players, who buries the architecture. And even if you survive,
    you then have to build up the whole thing: production, marketing,
    sales, software support, ...

    In any case, the original claim was about the VAX, so of course the
    question at hand is what DEC could have done instead.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Aug 7 04:22:03 2025
    Reply-To: slp53@pacbell.net

    BGB <cr88192@gmail.com> writes:
    On 8/6/2025 6:05 AM, Anton Ertl wrote:
    BGB <cr88192@gmail.com> writes:
    If 'int' were 64-bits, then what about 16 and/or 32 bit types.
    short short?
    long short?

    Of course int16_t uint16_t int32_t uint32_t


    Well, assuming a post C99 world.

    'typedef' was around long before C99 happened to
    standardize the aforementioned typedefs.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Thu Aug 7 05:11:08 2025
    On 8/6/25 10:25, John Levine wrote:
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine, ...

    The IBM 7030 STRETCH was the first 64 bit machine, shipped in 1961,
    but I would be surprised if anyone had written a C compiler for it.

    It was bit addressable but memories in those days were so small that a full bit
    address was only 24 bits. So if I were writing a C compiler, pointers and ints
    would be 32 bits, char 8 bits, long 64 bits.

    (There is a thing called STRETCH C Compiler but it's completely unrelated.)

    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few bytes.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Thu Aug 7 05:12:30 2025
    On 8/6/25 09:47, Anton Ertl wrote:


    Even if I am allowed to reveal that I am a time traveler, that may not
    help; how would I prove it?

    I'm a time-traveler from the 1960s!


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Thu Aug 7 05:50:17 2025
    According to Peter Flass <Peter@Iron-Spring.com>:
    It was bit addressable but memories in those days were so small that a full bit
    address was only 24 bits. So if I were writing a C compiler, pointers and ints
    would be 32 bits, char 8 bits, long 64 bits.

    (There is a thing called STRETCH C Compiler but it's completely unrelated.)

    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few
    bytes.

    STRETCH had a severe case of second-system syndrome, and was full of
    complex features that weren't worth the effort; it was impressive
    that IBM got it to work and to run as fast as it did.

    In that era memory was expensive, and usually measured in K, not M.
    The idea was presumably to pack data as tightly as possible.

    In the 1970s I briefly used a B1700 which was bit addressable and had reloadable
    microcode, so COBOL programs used the COBOL instruction set, FORTRAN programs used the FORTRAN instruction set, and so forth, with each one having whatever word or byte sizes they wanted. In retrospect it seems like a lot of
    premature optimization.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Aug 7 06:30:00 2025
    Reply-To: slp53@pacbell.net

    John Levine <johnl@taugh.com> writes:
    According to Peter Flass <Peter@Iron-Spring.com>:
    It was bit addressable but memories in those days were so small that a full bit
    address was only 24 bits. So if I were writing a C compiler, pointers and ints
    would be 32 bits, char 8 bits, long 64 bits.

    (There is a thing called STRETCH C Compiler but it's completely unrelated.)

    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few
    bytes.

    STRETCH had a severe case of second system syndrome, and was full of
    complex features that weren't worth the effort and it was impressive
    that IBM got it to work and to run as fast as it did.

    In that era memory was expensive, and usually measured in K, not M.
    The idea was presumably to pack data as tightly as possible.

    In the 1970s I briefly used a B1700 which was bit addressable and had reloadable
    microcode, so COBOL programs used the COBOL instruction set, FORTRAN programs
    used the FORTRAN instruction set, and so forth, with each one having whatever
    word or byte sizes they wanted. In retrospect it seems like a lot of
    premature optimization.

    We had a B1900 in the software lab, but I don't recall anyone
    actually using it - I believe it had been moved from Santa
    Barbara (Small Systems plant) and may have been used for
    reproducing customer issues, but by 1983, there weren't many
    small systems customers remaining.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Robert Swindells@3:633/280.2 to All on Thu Aug 7 08:30:56 2025
    On Wed, 06 Aug 2025 14:00:56 GMT, Anton Ertl wrote:

    For comparison:

    SPARC: Berkeley RISC research project between 1980 and 1984; <https://en.wikipedia.org/wiki/Berkeley_RISC> does not mention the IBM
    801 as inspiration, but a 1978 paper by Tanenbaum. Samples for RISC-I
    in May 1982 (but could only run at 0.5MHz). No date for the completion
    of RISC-II, but given that the research project ended in 1984, it was probably at that time. Sun developed Berkeley RISC into SPARC, and the
    first SPARC machine, the Sun-4/260 appeared in July 1987 with a 16.67MHz processor.

    The Katevenis thesis on RISC-II contains a timeline on p6; it lists fabrication in spring 83 with testing during summer 83.

    There is also a bibliography entry of an informal discussion with John
    Cocke at Berkeley about the 801 in June 1983

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lars Poulsen@3:633/280.2 to All on Thu Aug 7 09:09:25 2025
    On Tue, 5 Aug 2025 12:52:38 -0000 (UTC), Lars Poulsen wrote:
    The 3B was an absolute dog. We had a couple at ACC, because we were
    providing device drivers or something to an ATT project for a Federal
    agency.

    On 2025-08-06, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Weren’t they designed specifically for Telco use? I remember a lecturer telling us they were capable of five-9s uptime or something of that order.

    That may have been true of the 3B5. Ours were 3B2 desktops.

    We also had first an 11/70 and later an 11/780 running 4BSD. The
    BSD systems were pretty snappy.
    And we had an 11/780 for the business side, running VMS, And a VMS
    11/750 for engineering, which was not as well liked as the BSD until
    we got the Wollongong overlay so we could network it to the BSD
    system.

    Did the users do all their work via SET HOST? ;)

    Of course not - how would you do that from a BSD system?

    We had a communications frontend board that could plug into a UNIBUS
    and different microcode that could be X.29/X.25 or IBM3278 emulation for
    VT100 terminals attached to a serial port board on the microprocessor.
    We had terminal device drivers for RSX-11M, VMS and 4BSD, as well as
    a driver that would let programs on the mini bring up an X.25 circuit or emulate a 327x terminal. Business users were hardwired to the business
    VAX (and users on the ENG VAXen could "set host" to the business
    machine). The BSD users and the engr VAX users' terminals were wired
    to ports on the X.29/X.25 boards, and could use the X.29 switching to
    get between the VMS machines, or to the BSD VAX in our East Coast
    Office.

    Later, we got a VDH connection to the ArpaNet node in El Segundo,
    and we ran TCP/IP over a 1200 bps X.25 virtual channel between Santa Barbara
    and Maryland, so we had seamless email and telnet.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lars Poulsen@3:633/280.2 to All on Thu Aug 7 09:12:26 2025
    On 2025-08-06, Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
    Cray-1 and successors implemented, as far as I can determine

    type bits
    char 8
    short int 64
    int 64
    long int 64
    pointer 64

    Not having a 16-bit integer type and not having a 32-bit integer type
    would make it very hard to adapt portable code, such as TCP/IP protocol processing.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Thu Aug 7 09:15:54 2025
    AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
    Cray-1 and successors implemented, as far as I can determine

    type bits
    char 8
    short int 64
    int 64
    long int 64
    pointer 64

    Not having a 16-bit integer type and not having a 32-bit integer type
    would make it very hard to adapt portable code, such as TCP/IP protocol
    processing.

    I'd think this was obvious, but if the code depends on word sizes and doesn't declare its variables to use those word sizes, I don't think "portable" is the right term.

    Perhaps "happens to work on some computers similar to the one it was originally written on."
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Lars Poulsen@3:633/280.2 to All on Thu Aug 7 09:32:47 2025
    ["Followup-To:" header set to comp.arch.]
    On 2025-08-06, John Levine <johnl@taugh.com> wrote:
    AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
    Cray-1 and successors implemented, as far as I can determine

    type bits
    char 8
    short int 64
    int 64
    long int 64
    pointer 64

    Not having a 16-bit integer type and not having a 32-bit integer type
    would make it very hard to adapt portable code, such as TCP/IP protocol
    processing.

    I'd think this was obvious, but if the code depends on word sizes and doesn't declare its variables to use those word sizes, I don't think "portable" is the
    right term.

    My concern is how do you express your desire for having e.g. an int16?
    All the portable code I know defines int8, int16, int32 by means of a
    typedef that adds an appropriate alias for each of these back to a
    native type. If "short" is 64 bits, how do you define a 16-bit type?
    Or did the compiler have native types __int16 etc?

    - Lars

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 09:36:11 2025
    On Wed, 6 Aug 2025 12:11:08 -0700, Peter Flass wrote:

    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few bytes.

    But with 64-bit addressing, it only means sacrificing the bottom 3 bits.

    With normal load/store, you can insist that these 3 bits be zero, whereas
    in bit-aligned load/store, they can specify a nonzero bit offset.
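
    A minimal sketch of that encoding in C (the exact layout here, byte
    address in the upper 61 bits and bit offset in the low 3, is my
    illustrative assumption, not any particular machine's format):

        #include <stdint.h>

        /* Hypothetical bit pointer: byte address shifted up by 3, with
           the low 3 bits giving the bit offset within the byte. */
        typedef uint64_t bitptr;

        static bitptr bitptr_make(const uint8_t *byte, unsigned bit) {
            return ((uint64_t)(uintptr_t)byte << 3) | (bit & 7u);
        }

        /* Bit-aligned load of one bit; a normal load would instead
           insist that (p & 7) == 0. */
        static unsigned bitptr_load_bit(bitptr p) {
            const uint8_t *byte = (const uint8_t *)(uintptr_t)(p >> 3);
            return (*byte >> (p & 7u)) & 1u;
        }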

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 09:38:15 2025
    On Wed, 06 Aug 2025 10:32:39 GMT, Anton Ertl wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine ...

    But it was not byte-addressable. Its precursor CDC machines had 60-bit
    words, as I recall. DEC’s “large systems” family from around that era (PDP-6, PDP-10) had 36-bit words. And there were likely some other vendors offering 48-bit words, that kind of thing. Maybe some with word lengths
    even longer than 64 bits.

    I was thinking more specifically of machines from the byte-addressable
    era.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 09:40:48 2025
    On Wed, 06 Aug 2025 10:24:49 GMT, Anton Ertl wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    Of all the major OSes for Alpha, Windows NT was the only one that
    couldn’t take advantage of the 64-bit architecture.

    Actually, Windows took good advantage of the 64-bit architecture:
    "64-bit Windows was initially developed on the Alpha AXP." <https://learn.microsoft.com/en-us/previous-versions/technet-magazine/cc718978(v=msdn.10)>

    Remember the Alpha was first released in 1992. No shipping version of
    Windows NT ever ran on it in anything other than “TASO” (“Truncated Address-Space Option”, i.e. 32-bit-only addressing) mode.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 09:43:12 2025
    On Wed, 6 Aug 2025 23:09:25 -0000 (UTC), Lars Poulsen wrote:

    On Wed, 6 Aug 2025 00:53:32 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Tue, 5 Aug 2025 12:52:38 -0000 (UTC), Lars Poulsen wrote:

    And we had an 11/780 for the business side, running VMS, and a VMS
    11/750 for engineering, which was not as well liked as the BSD until
    we got the Wollongong overlay so we could network it to the BSD
    system.

    Did the users do all their work via SET HOST? ;)

    Of course not - how would you do that from a BSD system?

    You did say “running VMS” with the “Wollongong overlay”, did you not?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 09:45:44 2025
    On Wed, 6 Aug 2025 08:28:03 -0700, John Ames wrote:

    CP/M owes a lot to the DEC lineage, although it dispenses with some
    of the more tedious mainframe-isms - e.g. the RUN [program]
    [parameters] syntax vs. just treating executable files on disk as
    commands in themselves.)

    It added its own misfeatures, though. Like single-letter device names,
    but only for disks. Non-file-structured devices were accessed via “reserved” file names, which continue to bedevil Microsoft Windows to
    this day, aggravated by a totally perverse extension of the concept to
    paths with hierarchical directory names.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From EricP@3:633/280.2 to All on Thu Aug 7 10:21:31 2025
    Robert Swindells wrote:
    On Wed, 06 Aug 2025 14:00:56 GMT, Anton Ertl wrote:

    For comparison:

    SPARC: Berkeley RISC research project between 1980 and 1984;
    <https://en.wikipedia.org/wiki/Berkeley_RISC> does not mention the IBM
    801 as inspiration, but a 1978 paper by Tanenbaum. Samples for RISC-I
    in May 1982 (but could only run at 0.5MHz). No date for the completion
    of RISC-II, but given that the research project ended in 1984, it was
    probably at that time. Sun developed Berkeley RISC into SPARC, and the
    first SPARC machine, the Sun-4/260 appeared in July 1987 with a 16.67MHz
    processor.

    The Katevenis thesis on RISC-II contains a timeline on p6; it lists fabrication in spring 83, with testing during summer 83.

    There is also a bibliography entry of an informal discussion with John
    Cocke at Berkeley about the 801 in June 1983.

    There is a citation to Cocke as "private communication" in 1980 by
    Patterson in The Case for the Reduced Instruction Set Computer, 1980.

    "REASONS FOR INCREASED COMPLEXITY

    Why have computers become more complex? We can think of several reasons:
    Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about ten times as fast as the core main memory; this made any primitives that
    were implemented as subroutines much slower than primitives that were instructions. Thus the floating point subroutines became part of the 709 architecture with dramatic gains. Making the 709 more complex resulted
    in an advance that made it more cost-effective than the 701. Since then,
    many "higher-level" instructions have been added to machines in an attempt
    to improve performance. Note that this trend began because of the imbalance
    in speeds; it is not clear that architects have asked themselves whether
    this imbalance still holds for their designs."




    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Thu Aug 7 11:36:50 2025
    On 2025-08-06, Peter Flass <Peter@Iron-Spring.com> wrote:

    On 8/6/25 09:47, Anton Ertl wrote:

    Even if I am allowed to reveal that I am a time traveler, that may not
    help; how would I prove it?

    I'm a time-traveler from the 1960s!

    I'm starting to tell people that I'm a traveller
    from a distant land known as the past.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Thu Aug 7 11:49:18 2025
    On 2025-08-06, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Wed, 6 Aug 2025 08:28:03 -0700, John Ames wrote:

    CP/M owes a lot to the DEC lineage, although it dispenses with some
    of the more tedious mainframe-isms - e.g. the RUN [program]
    [parameters] syntax vs. just treating executable files on disk as
    commands in themselves.)

    It added its own misfeatures, though. Like single-letter device names,
    but only for disks. Non-file-structured devices were accessed via “reserved” file names, which continue to bedevil Microsoft Windows to this day, aggravated by a totally perverse extension of the concept to
    paths with hierarchical directory names.

    Funny how people ridicule COBOL's reserved words, while accepting MS-DOS/Windows' CON, LPT, etc. If only a trailing colon (which I
    always used) were mandatory; that would put device names cleanly
    into a different name space, eliminating the problem.

    But, you know, Microsoft...

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Aug 7 12:22:05 2025
    On Wed, 06 Aug 2025 20:21:31 -0400, EricP wrote:

    There is a citation to Cocke as "private communication" in 1980 by
    Patterson in The Case for the Reduced Instruction Set Computer,
    1980.

    "REASONS FOR INCREASED COMPLEXITY

    Why have computers become more complex? We can think of several
    reasons: Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began with the transition from the 701 to the 709
    [Cocke80]. The 701 CPU was about ten times as fast as the core main
    memory; this made any primitives that were implemented as
    subroutines much slower than primitives that were instructions. Thus
    the floating point subroutines became part of the 709 architecture
    with dramatic gains. Making the 709 more complex resulted in an
    advance that made it more cost-effective than the 701. Since then,
    many "higher-level" instructions have been added to machines in an
    attempt to improve performance. Note that this trend began because
    of the imbalance in speeds; it is not clear that architects have
    asked themselves whether this imbalance still holds for their
    designs."

    That disparity between CPU and RAM speeds is even greater today than
    it was back then. Yet we have moved away from adding ever-more-complex instructions, and are getting better performance with simpler ones.

    How come? Caching.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Thu Aug 7 12:56:08 2025
    According to Lars Poulsen <lars@cleo.beagle-ears.com>:
    ["Followup-To:" header set to comp.arch.]
    On 2025-08-06, John Levine <johnl@taugh.com> wrote:
    AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
    Cray-1 and successors implemented, as far as I can determine

    type bits
    char 8
    short int 64
    int 64
    long int 64
    pointer 64

    Not having a 16-bit integer type and not having a 32-bit integer type
    would make it very hard to adapt portable code, such as TCP/IP protocol
    processing.

    I'd think this was obvious, but if the code depends on word sizes and doesn't
    declare its variables to use those word sizes, I don't think "portable" is the
    right term.

    My concern is how do you express your desire for having e.g. an int16?
    All the portable code I know defines int8, int16, int32 by means of a
    typedef that adds an appropriate alias for each of these back to a
    native type. If "short" is 64 bits, how do you define a 16-bit type?

    In modern C you use the values in limits.h to pick the type, and define
    macros that mask values to the size you need. In older C you did the same thing in much uglier ways. Writing code that is portable across different
    word sizes has always been tedious.
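
    As a sketch of that limits.h style (the type and macro names here are
    illustrative, not taken from any historical compiler):

        #include <limits.h>

        /* Pick a type with at least 32 value bits: int if it is wide
           enough (it need not be), otherwise long. */
        #if UINT_MAX >= 0xFFFFFFFFUL
        typedef unsigned int  u32;
        #else
        typedef unsigned long u32;
        #endif

        /* The chosen type may be wider than 32 bits (64 on a Cray), so
           mask results back down to the width the code depends on. */
        #define MASK32(x) ((u32)((x) & 0xFFFFFFFFUL))

        static u32 add32(u32 a, u32 b) { return MASK32(a + b); }

    On a machine where the chosen type really is 32 bits the masking
    costs nothing; on a word machine it does the truncation the hardware
    won't do for you.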

    Or did the compiler have native types __int16 etc?

    Given how long ago it was, I doubt it.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Thomas Koenig@3:633/280.2 to All on Thu Aug 7 15:29:33 2025
    Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    De Castro had had a big success with a simple load-store
    architecture, the Nova. He did that to reduce CPU complexity
    and cost, to compete with DEC and its PDP-8. (Byte addressing
    was horrible on the Nova, though).

    The PDP-8, and its 16-bit followup, the Nova, may be load/store, but
    neither is a register machine nor byte-addressed, while the PDP-11 is
    both, and the RISC-VAX would be, too.

    Now, assume that, as a time traveler wanting to kick off an early
    RISC revolution, you are not allowed to reveal that you are a time
    traveler (which would have larger effects than just a different
    computer architecture). What do you do?

    a) You go to DEC

    b) You go to Data General

    c) You found your own company

    Even if I am allowed to reveal that I am a time traveler, that may not
    help; how would I prove it?

    Bring a mobile phone or tablet with you, install Stockfish,
    and beat everybody at chess.

    But making it known that you are a time traveller (and being able
    to prove it) would very probably invite all sorts of questions
    from all sorts of people about the future (or even about things
    in the then-present which were declassified in the future), and
    these people might not take "no" or "I don't know" for an answer.

    [...]

    Yes, convincing people in the mid-1970s to bet the company on RISC is
    a hard sell, that's why I asked for "a magic wand that would convince the
    DEC management and workforce that I know how to design their next architecture, and how to compile for it" in
    <2025Mar1.125817@mips.complang.tuwien.ac.at>.

    Some arguments that might help:

    Complexity in CISC and how it breeds complexity elsewhere; e.g., the interaction of having more than one data memory access per
    instruction, virtual memory, and precise exceptions.

    How the CDC 6600 achieved performance (pipelining) and how non-complex
    its instructions are.

    I guess I would read through RISC-vs-CISC literature before entering
    the time machine in order to have some additional arguments.


    Concerning your three options, I think it will be a problem in any
    case. Data General's first bet was on FHP, a microcoded machine with user-writeable microcode,

    That would have been the right time, I think - convince de Castro
    that, instead of writable microcode, RISC is the right direction.
    The Fountainhead project started in July 1975, more or less contemporary
    with the VAX, and an alternate-Fountainhead could probably have
    been introduced at the same time, in 1977.

    so maybe even more in the wrong direction
    than VAX; I can imagine a high-performance OoO VAX implementation, but
    for an architecture with exposed microcode like FHP an OoO
    implementation would probably be pretty challenging. The backup
    project that eventually came through was also a CISC.

    Sure.


    Concerning founding ones own company, one would have to convince
    venture capital, and then run the RISC of being bought by one of the
    big players, who buries the architecture. And even if you survive,
    you then have to build up the whole thing: production, marketing,
    sales, software support, ...

    That is one of the things I find astonishing - how a company like
    DG grew from a kitchen-table affair to the size they had.

    In any case, the original claim was about the VAX, so of course the
    question at hand is what DEC could have done instead.

    - anton


    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 7 20:27:40 2025
    EricP <ThatWouldBeTelling@thevillage.com> writes:
    There is a citation to Cocke as "private communication" in 1980 by
    Patterson in The Case for the Reduced Instruction Set Computer, 1980.

    "REASONS FOR INCREASED COMPLEXITY

    Why have computers become more complex? We can think of several reasons:
    Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began
    with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about
    ten times as fast as the core main memory; this made any primitives that
    were implemented as subroutines much slower than primitives that were
    instructions. Thus the floating point subroutines became part of the 709
    architecture with dramatic gains. Making the 709 more complex resulted
    in an advance that made it more cost-effective than the 701. Since then,
    many "higher-level" instructions have been added to machines in an attempt
    to improve performance. Note that this trend began because of the imbalance
    in speeds; it is not clear that architects have asked themselves whether
    this imbalance still holds for their designs."

    At the start of this thread
    <2025Jul29.104514@mips.complang.tuwien.ac.at>, I made exactly this
    argument about the relation between memory speed and clock rate. In
    that posting, I wrote:

    |my guess is that in the VAX 11/780 timeframe, 2-3MHz DRAM access
    |within a row would have been possible. Moreover, the VAX 11/780 has a
    |cache

    In the meantime, this discussion and some additional searching has
    unearthed that the VAX 11/780 memory subsystem has 600ns main memory
    cycle time (apparently without contiguous-access (row) optimization),
    with the cache lowering the average memory cycle time to 290ns.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Thu Aug 7 21:06:06 2025
    In comp.arch Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    EricP <ThatWouldBeTelling@thevillage.com> writes:
    There is a citation to Cocke as "private communication" in 1980 by >>Patterson in The Case for the Reduced Instruction Set Computer, 1980.

    "REASONS FOR INCREASED COMPLEXITY

    Why have computers become more complex? We can think of several reasons:
    Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began
    with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about
    ten times as fast as the core main memory; this made any primitives that
    were implemented as subroutines much slower than primitives that were
    instructions. Thus the floating point subroutines became part of the 709
    architecture with dramatic gains. Making the 709 more complex resulted
    in an advance that made it more cost-effective than the 701. Since then,
    many "higher-level" instructions have been added to machines in an attempt
    to improve performance. Note that this trend began because of the imbalance
    in speeds; it is not clear that architects have asked themselves whether
    this imbalance still holds for their designs."

    At the start of this thread
    <2025Jul29.104514@mips.complang.tuwien.ac.at>, I made exactly this
    argument about the relation between memory speed and clock rate. In
    that posting, I wrote:

    |my guess is that in the VAX 11/780 timeframe, 2-3MHz DRAM access
    |within a row would have been possible. Moreover, the VAX 11/780 has a
    |cache

    In the meantime, this discussion and some additional searching has
    unearthed that the VAX 11/780 memory subsystem has 600ns main memory
    cycle time (apparently without contiguous-access (row) optimization),

    The memory subsystem was able to operate at bus speed: during a memory
    cycle the memory delivered 64 bits. The bus was 32 bits wide and needed
    3 cycles (200 ns each) to transfer the 64 bits. Making memory faster
    would require redesigning the bus.

    with the cache lowering the average memory cycle time to 290ns.

    For the processor the miss penalty was 1800 ns (the documentation says
    that was due to bus protocol overhead). The cache hit rate was claimed
    to be 95%.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Terje Mathisen@3:633/280.2 to All on Thu Aug 7 23:44:55 2025
    Peter Flass wrote:
    On 8/6/25 10:25, John Levine wrote:
    According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Not aware of any platforms that do/did ILP64.

    AFAIK the Cray-1 (1976) was the first 64-bit machine, ...

    The IBM 7030 STRETCH was the first 64 bit machine, shipped in 1961,
    but I would be surprised if anyone had written a C compiler for it.

    It was bit addressable but memories in those days were so small that a
    full bit address was only 24 bits. So if I were writing a C compiler,
    pointers and ints would be 32 bits, char 8 bits, long 64 bits.

    (There is a thing called STRETCH C Compiler but it's completely
    unrelated.)

    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few
    bytes.

    Bit addressing, presumably combined with an easy way to mask the
    results/pick an arbitrary number of bits less than or equal to register
    width, makes it easier to implement compression/decompression/codecs.

    However, since the only thing needed to do the same on current CPUs is a
    single shift after an aligned load, this feature costs far too much in
    reduced address space compared to what you gain.
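
    As a sketch of that idiom (assuming a little-endian machine, fields of
    at most 57 bits, and at least 8 readable bytes at the load address; the
    memcpy stands in for the load):

        #include <stdint.h>
        #include <string.h>

        /* Fetch 'width' bits starting at absolute bit position 'pos' in
           a byte buffer: one 64-bit load, one shift, one mask. */
        static uint64_t get_bits(const uint8_t *buf, uint64_t pos,
                                 unsigned width) {
            uint64_t w;
            memcpy(&w, buf + (pos >> 3), sizeof w);   /* load 8 bytes */
            return (w >> (pos & 7)) & ((UINT64_C(1) << width) - 1);
        }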

    In the real world, all important codecs (like mp4 or aes crypto) end up
    as dedicated hardware, either AES opcodes or a standalone VLSI slice
    capable of CABAC decoding. The main reason is energy: A cell phone or
    laptop cannot stream video all day without having hardware support for
    the decoding task.

    One possibly relevant anecdote: Back in the late 1990s, when Intel
    was producing the first quad-core Pentium Pro-style CPUs, I showed them
    that it was in fact possible for one of those CPUs to decode a maximum
    h264 bitstream, with 40 Mbit/s of CABAC coded data, in pure software.
    (Their own sw engineers had claimed that every other frame of a 60 Hz HD
    video would have to be skipped.)

    What Intel did was to license h264 decoding IP since that would use far
    less power and leave 3 of the 4 cores totally idle.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Fri Aug 8 00:26:32 2025
    On 8/6/25 22:29, Thomas Koenig wrote:


    That is one of the things I find astonishing - how a company like
    DG grew from a kitchen-table affair to the size they had.


    Recent history is littered with companies like this. The microcomputer revolution spawned scores of companies that started in someone's garage, ballooned to major presence overnight, and then disappeared - bankrupt,
    bought out, split up, etc. Look at all the players in the S-100 CP/M
    space, or Digital Research. Only a few, like Apple and Microsoft, made
    it out alive.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Fri Aug 8 00:34:28 2025
    On 8/7/25 06:44, Terje Mathisen wrote:

    Bit addressing, presumably combined with an easy way to mask the
    results/pick an arbitrary number of bits less than or equal to register
    width, makes it easier to implement compression/decompression/codecs.

    However, since the only thing needed to do the same on current CPUs is a single shift after an aligned load, this feature costs far too much in reduced address space compared to what you gain.


    Bit addressing *as an option* (Bit Load, Bit store instructions, etc.)
    is a great idea, for example it greatly simplifies BitBlt logic. The
    432's use of bit addressing for everything, especially instructions,
    seems just too cute. I forget the details, it's been a while since I
    looked, but it forced extremely small code segments which, combined with
    the segmentation logic, etc. really impacted performance.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Fri Aug 8 01:28:52 2025
    On Wed, 6 Aug 2025 23:45:44 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    It added its own misfeatures, though.

    Unfortunately, yes. "User areas" in particular are just a completely
    useless bastard child of proper subdirectories and something like
    TOPS-10's project/programmer pairs; even making user area 0 a "common
    area" accessible from any of the others would've helped, but they
    didn't do that. It's a sign of how misconceived they were that MS-DOS
    (in re-implementing CP/M) dropped them entirely and nobody complained,
    then added real subdirectories later.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Fri Aug 8 00:57:59 2025
    Peter Flass <Peter@Iron-Spring.com> writes:
    [IBM STRETCH bit-addressable]
    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too

    One might come to think that it's the signature of overambitious
    projects that eventually fail.

    However, in the case of the IBM STRETCH, I think there's a good
    excuse: If you go from word addressing to subunit addressing (not sure
    why Stretch went there, however; does a supercomputer need that?), why
    stop at characters (especially given that character size at the time
    was still not settled)? Why not continue down to bits?

    The S/360 then found the compromise that conquered the world: Byte
    addressing with 8-bit bytes.

    Why iAPX432 went for bit addressing at a time when byte addressing and
    the 8-bit byte was firmly established, over ten years after the S/360
    and 5 years after the PDP-11 is a mystery, however.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Fri Aug 8 01:38:56 2025
    On Thu, 7 Aug 2025 02:22:05 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    That disparity between CPU and RAM speeds is even greater today than
    it was back then. Yet we have moved away from adding ever-more-complex instructions, and are getting better performance with simpler ones.

    How come? Caching.

    Yes, but complex instructions also make pipelining and out-of-order
    execution much more difficult - to the extent that, as far back as the
    Pentium Pro, Intel has had to implement the x86 instruction set as a
    microcoded program running on top of a simpler RISC architecture.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Terje Mathisen@3:633/280.2 to All on Fri Aug 8 01:52:05 2025
    John Ames wrote:
    On Thu, 7 Aug 2025 02:22:05 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    That disparity between CPU and RAM speeds is even greater today than
    it was back then. Yet we have moved away from adding ever-more-complex
    instructions, and are getting better performance with simpler ones.

    How come? Caching.

    Yes, but complex instructions also make pipelining and out-of-order
    execution much more difficult - to the extent that, as far back as the Pentium Pro, Intel has had to implement the x86 instruction set as a microcoded program running on top of a simpler RISC architecture.

    That's simply wrong:

    The PPro had close to zero microcode actually running in any user program.

    What it did have was decoders that would look at complex operations and
    spit out two or more basic operations, like load+execute.

    Later on we've seen the opposite where cmp+branch could be combined into
    a single internal op.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dennis Boone@3:633/280.2 to All on Fri Aug 8 01:54:16 2025
    However, in the case of the IBM STRETCH, I think there's a good
    excuse: If you go from word addressing to subunit addressing (not sure
    why Stretch went there, however; does a supercomputer need that?), why
    stop at characters (especially given that character size at the time
    was still not settled)? Why not continue down to bits?

    Remember who they built STRETCH for.

    De

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Stephen Fuld@3:633/280.2 to All on Fri Aug 8 06:01:07 2025
    On 8/7/2025 7:57 AM, Anton Ertl wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:
    [IBM STRETCH bit-addressable]
    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too

    One might come to think that it's the signature of overambitious
    projects that eventually fail.

    Interesting. While it seems to be sufficient to predict the failure of
    a project, it certainly isn't necessary. So I think calling it a
    signature is too extreme.


    However, in the case of the IBM STRETCH, I think there's a good
    excuse: If you go from word addressing to subunit addressing (not sure
    why Stretch went there, however; does a supercomputer need that?)

    While perhaps not absolutely necessary, it is very useful: for example,
    inputting the parameters for, and showing the results of, a simulation in
    human-readable format, and for a compiler. While you could do all of
    those things on another (different architecture) computer, and transfer
    the results via say magnetic tape, that is pretty inconvenient and
    increases the cost for that additional computer. And there is
    interaction with the console.


    , why
    stop at characters (especially given that character size at the time
    was still not settled)? Why not continue down to bits?

    According to Wikipedia

    https://en.wikipedia.org/wiki/IBM_7030_Stretch#Data_formats

    it supported both binary and decimal fixed point arithmetic (so it helps
    to have four-bit "characters"), the floating point representation had a
    four-bit sign, and alphanumeric characters could be anywhere from 1-8
    bits. And as you say, 6 bit characters were common, especially for
    scientific computers.


    The S/360 then found the compromise that conquered the world: Byte
    addressing with 8-bit bytes.

    Yes, but several years later.

    Another factor that may have contributed. According to the same
    Wikipedia article, the requirements for the system came from Edward
    Teller then at Lawrence Livermore Labs, so there may have been some
    classified requirement that led to bit addressability.


    Why iAPX432 went for bit addressing at a time when byte addressing and
    the 8-bit byte was firmly established, over ten years after the S/360
    and 5 years after the PDP-11 is a mystery, however.

    Agreed.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Fri Aug 8 06:34:09 2025
    The TI TMS34020 graphics processor may have been the last CPU to have bit addressing.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Fri Aug 8 06:54:01 2025
    According to Terje Mathisen <terje.mathisen@tmsw.no>:
    I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
    too, and it seems like all it does is drastically shrink your address
    space and complexify instruction and operand fetch to (maybe) save a few
    bytes.

    Bit addressing, presumably combined with an easy way to mask the
    results/pick an arbitrary number of bits less than or equal to register
    width, makes it easier to implement compression/decompression/codecs.

    STRETCH was designed in the late 1950s. Shannon-Fano coding was invented
    in the 1940s, and Huffman published his paper on optimal coding in 1952,
    but modern codes like LZ were only invented in the 1970s. I doubt anyone
    did compression or decompression on STRETCH other than packing and unpacking bit fields.

    IBM's commercial machines were digit or character addressed, with a variety
    of different representations. They didn't know what the natural byte size
    would be, so they let you use whatever you wanted. That made it easy to pack
    and unpack bit fields to store data compactly in fields of exactly the
    minimum size.

    The NSA was an important customer, for whom they built the 7950 HARVEST
    coprocessor, and it's quite plausible that they had applications for which
    bit addressing was useful.

    The paper on the design of S/360 said they looked at addressing of 6 bit characters, and 8 bit characters, with 4-bit BCD digits sometimes stored in them. It was evident at the time that 6 bit characters were too small, so
    8 bits it was. They don't mention bit addressing, so they'd presumably already decided that was a bad idea.



    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From George Neuner@3:633/280.2 to All on Fri Aug 8 11:53:11 2025
    On Thu, 7 Aug 2025 17:52:05 +0200, Terje Mathisen
    <terje.mathisen@tmsw.no> wrote:

    John Ames wrote:
    On Thu, 7 Aug 2025 02:22:05 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    That disparity between CPU and RAM speeds is even greater today than
    it was back then. Yet we have moved away from adding ever-more-complex
    instructions, and are getting better performance with simpler ones.

    How come? Caching.

    Yes, but complex instructions also make pipelining and out-of-order
    execution much more difficult - to the extent that, as far back as the
    Pentium Pro, Intel has had to implement the x86 instruction set as a
    microcoded program running on top of a simpler RISC architecture.

    That's simply wrong:

    The PPro had close to zero microcode actually running in any user program.

    What it did have was decoders that would look at complex operations and
    spit out two or more basic operations, like load+execute.

    Later on we've seen the opposite where cmp+branch could be combined into
    a single internal op.

    Terje

    You say "tomato". 8-)

    It's still "microcode" for some definition ... just not a classic
    "interpreter" implementation where a library of routines implements
    the high level instructions.

    The decoder converts x86 instructions into traces of equivalent wide
    micro instructions which are directly executable by the core. The
    traces then are cached separately [there is a $I0 "microcache" below
    $I1] and can be re-executed (e.g., for loops) as long as they remain
    in the microcache. If they age out, the decoder has to produce them
    again from the "source" x86 instructions.

    So the core is executing microinstructions - not x86 - and the program
    as executed reasonably can be said to be "microcoded" ... again for
    some definition.

    YMMV.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Cross@3:633/280.2 to All on Fri Aug 8 11:57:53 2025
    In article <107008b$3g8jl$1@dont-email.me>,
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Dan Cross <cross@spitfire.i.gajendra.net> schrieb:
    In article <106uqej$36gll$3@dont-email.me>,
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Peter Flass <Peter@Iron-Spring.com> schrieb:

    The support issues alone were killers. Think about the
    Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
    five-page flimsy you got with a micro. The customers were willing to
    accept cr*p from a small startup, but wouldn't put up with it from IBM
    or DEC.

    Using UNIX faced stiff competition from AT&T's internal IT people,
    who wanted to run DEC's operating systems on all PDP-11 within
    the company (basically, they wanted to kill UNIX). They pointed
    towards the large amount of documentation that DEC provided, compared
    to the low amount of UNIX, as proof of superiority. The UNIX people
    saw it differently...

    I've never heard this before, and I do not believe that it is
    true. Do you have a source?

    Hmm... I _think_ it was on a talk given by the UNIX people,
    but I may be misremembering.

    I have heard similar stories about DEC, but not AT&T. The Unix
    fortune file used to (in)famously have a quote from Ken Olsen
    about the relative volume of documentation between Unix and VMS
    (reproduced below).

    - Dan C.

    - --->BEGIN FORTUNE<---

    One of the questions that comes up all the time is: How
    enthusiastic is our support for UNIX?
    Unix was written on our machines and for our machines many
    years ago. Today, much of UNIX being done is done on our machines.
    Ten percent of our VAXs are going for UNIX use. UNIX is a simple
    language, easy to understand, easy to get started with. It's great for students, great for somewhat casual users, and it's great for
    interchanging programs between different machines. And so, because of
    its popularity in these markets, we support it. We have good UNIX on
    VAX and good UNIX on PDP-11s.
    It is our belief, however, that serious professional users will
    run out of things they can do with UNIX. They'll want a real system and
    will end up doing VMS when they get to be serious about programming.
    With UNIX, if you're looking for something, you can easily and
    quickly check that small manual and find out that it's not there. With
    VMS, no matter what you look for -- it's literally a five-foot shelf of documentation -- if you look long enough it's there. That's the
    difference -- the beauty of UNIX is it's simple; and the beauty of VMS
    is that it's all there.
    -- Ken Olsen, President of DEC, 1984

    - --->END FORTUNE<---

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Aug 8 13:51:08 2025
    On Thu, 7 Aug 2025 15:44:55 +0200, Terje Mathisen wrote:

    However, since the only thing needed to do the same on current CPUs is a single shift after an aligned load, this feature costs far too much in reduced address space compared to what you gain.

    Reserving the bottom 3 bits for a bit offset in a 64-bit address, even if
    it is unused in most instructions, doesn’t seem like such a big cost. And
    it unifies the pointer representation for all data types, which can make things more convenient in a higher-level language.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Aug 8 13:57:17 2025
    On Thu, 7 Aug 2025 07:26:32 -0700, Peter Flass wrote:

    On 8/6/25 22:29, Thomas Koenig wrote:

    That is one of the things I find astonishing - how a company like DG
    grew from a kitchen-table affair to the size they had.

    Recent history is littered with companies like this.

    DG were famously the setting for that Tracy Kidder book, “The Soul Of A
    New Machine”, chronicling their belated and high-pressure project to enter the 32-bit virtual-memory supermini market and compete with DEC’s VAX.

    Looking at things with the eyes of a software guy, I found some of their hardware decisions questionable. Like they thought they were very clever
    to avoid having separate privilege modes in the processor status register
    like the VAX did: instead, they encoded the access privilege mode in the address itself.

    I guess they thought that 32 address bits left plenty to spare for
    something like this. But I think it just shortened the life of their 32-
    bit architecture by that much more.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Fri Aug 8 16:16:51 2025
    George Neuner <gneuner2@comcast.net> writes:
    On Thu, 7 Aug 2025 17:52:05 +0200, Terje Mathisen
    <terje.mathisen@tmsw.no> wrote:

    John Ames wrote:
    The PPro had close to zero microcode actually running in any user program.

    What it did have was decoders that would look at complex operations and
    spit out two or more basic operations, like load+execute.

    Later on we've seen the opposite where cmp+branch could be combined into
    a single internal op.

    Terje

    You say "tomato". 8-)

    It's still "microcode" for some definition ... just not a classic >"interpreter" implementation where a library of routines implements
    the high level instructions.

    Exactly, for most instructions there is no microcode. There are
    microops, with 118 bits on the Pentium Pro (P6). They are not RISC instructions (no RISC has 118-bit instructions). At best one might
    argue that one P6 microinstruction typically does what a RISC
    instruction does in a RISC. But in the end the reorder buffer still
    has to deal with the CISC instructions.

    The decoder converts x86 instructions into traces of equivalent wide
    micro instructions which are directly executable by the core. The
    traces then are cached separately [there is a $I0 "microcache" below
    $I1] and can be re-executed (e.g., for loops) as long as they remain
    in the microcache.

    No such cache in the P6 or any of its descendants until Sandy
    Bridge (2011). The Pentium 4 has a microop cache, but it was eventually
    (with Core Duo, Core 2 Duo) replaced with P6 descendants that have
    no microop cache. Actually, the Core 2 Duo has a loop buffer which
    might be seen as a tiny microop cache. Microop caches and loop
    buffers still have to contain information about which microops belong
    to the same CISC instruction, because otherwise the reorder buffer
    could not commit/execute* CISC instructions.

    * OoO microarchitecture terminology calls what the reorder buffer does
    "retire" or "commit". But this is where the speculative execution
    becomes architecturally visible ("commit"), so from an architectural
    view it is execution.

    Followups set to comp.arch

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Fri Aug 8 18:43:00 2025
    On Fri, 8 Aug 2025 03:57:17 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:


    I guess they thought that 32 address bits left plenty to spare for
    something like this. But I think it just shortened the life of their
    32- bit architecture by that much more.


    The history proved them right. The Eagle series didn't last long enough
    to run out of its 512MB address space.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 13 01:28:27 2025
    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    MAP_32BIT is only used on x86-64 on Linux, and was originally
    a performance hack for allocating thread stacks: apparently, it
    was cheaper to do a thread switch with a stack below the 4GiB
    barrier (sign extension artifact maybe? Who knows...). But it's
    no longer required for that. But there's no indication that it
    was for supporting ILP32 on a 64-bit system.

    Reading up about x32, it requires quite a bit more than just
    allocating everything in the low 2GB.

    My memories (from reading about it, I never compiled a program for
    that usage myself) are that on Digital OSF/1, the corresponding usage
    did just that: Configure the compiler for ILP32, and allocate all
    memory in the low 2GB. I expect that types such as off_t would be
    defined appropriately, and any pointers in library-defined structures
    (e.g., FILE from <stdio.h>) consumed 8 bytes, even though the ILP32
    code only accessed the bottom 4. Or maybe they had compiled the
    library also for ILP32. In those days fewer shared libraries were in
    play, and the number of system calls and their interface complexity in
    OSF/1 was probably closer to Unix v6 or so than to Linux today (or in
    2012, when x32 was introduced), so all of that required a lot less
    work.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Aug 13 02:08:58 2025
    Reply-To: slp53@pacbell.net

    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    MAP_32BIT is only used on x86-64 on Linux, and was originally
    a performance hack for allocating thread stacks: apparently, it
    was cheaper to do a thread switch with a stack below the 4GiB
    barrier (sign extension artifact maybe? Who knows...). But it's
    no longer required for that. But there's no indication that it
    was for supporting ILP32 on a 64-bit system.

    Reading up about x32, it requires quite a bit more than just
    allocating everything in the low 2GB.

    The primary issue on x86 was with the API definitions. Several
    legacy API declarations used signed integers (int) for
    address parameters. This limited addresses to 2GB on
    a 32-bit system.

    https://en.wikipedia.org/wiki/Large-file_support

    The Large File Summit (I was one of the Unisys reps at the LFS)
    specified a standard way to support files larger than 2GB
    on 32-bit systems that used signed integers for file offsets
    and file size.

    Also, https://en.wikipedia.org/wiki/2_GB_limit
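
    That LFS convention is still how 32-bit code asks for 64-bit file
    offsets; a minimal usage sketch (glibc-style feature macro):

        /* Must be defined before any system header is included. */
        #define _FILE_OFFSET_BITS 64

        #include <fcntl.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("big.dat", O_RDONLY);
            if (fd < 0) return 1;
            off_t end = lseek(fd, 0, SEEK_END);   /* off_t is 64 bits */
            close(fd);
            return end < 0;
        }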


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From BGB@3:633/280.2 to All on Wed Aug 13 02:53:37 2025
    On 8/12/2025 11:08 AM, Scott Lurndal wrote:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    MAP_32BIT is only used on x86-64 on Linux, and was originally
    a performance hack for allocating thread stacks: apparently, it
    was cheaper to do a thread switch with a stack below the 4GiB
    barrier (sign extension artifact maybe? Who knows...). But it's
    no longer required for that. But there's no indication that it
    was for supporting ILP32 on a 64-bit system.

    Reading up about x32, it requires quite a bit more than just
    allocating everything in the low 2GB.

    The primary issue on x86 was with the API definitions. Several
    legacy API declarations used signed integers (int) for
    address parameters. This limited addresses to 2GB on
    a 32-bit system.

    https://en.wikipedia.org/wiki/Large-file_support

    The Large File Summit (I was one of the Unisys reps at the LFS)
    specified a standard way to support files larger than 2GB
    on 32-bit systems that used signed integers for file offsets
    and file size.

    Also, https://en.wikipedia.org/wiki/2_GB_limit


    Also, IIRC, the major point of X32 was that it would narrow pointers and similar back down to 32 bits, requiring special versions of any shared libraries or similar.

    But, it is unattractive to have both 32 and 64 bit versions of all the SO's.

    Though, admittedly, not messed with it much personally...


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From aph@littlepinkcloud.invalid@3:633/280.2 to All on Wed Aug 13 03:57:20 2025
    In comp.arch BGB <cr88192@gmail.com> wrote:

    Also, IIRC, the major point of X32 was that it would narrow pointers and similar back down to 32 bits, requiring special versions of any shared libraries or similar.

    But, it is unattractive to have both 32 and 64 bit versions of all the SO's.

    We have done something similar for years at Red Hat: not X32, but
    x86_32, and it was pretty easy. If you're building a 32-bit OS anyway
    (which we were) all you have to do is copy all 32-bit libraries from
    one repo to the other.

    I thought the AArch64 ILP32 design was pretty neat, but no one seems
    to have been interested. I guess there wasn't an advantage worth the
    effort.

    Andrew.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Wed Aug 13 05:09:27 2025
    According to <aph@littlepinkcloud.invalid>:
    In comp.arch BGB <cr88192@gmail.com> wrote:

    Also, IIRC, the major point of X32 was that it would narrow pointers and
    similar back down to 32 bits, requiring special versions of any shared
    libraries or similar.

    But, it is unattractive to have both 32 and 64 bit versions of all the SO's.

    We have done something similar for years at Red Hat: not X32, but
    x86_32, and it was pretty easy. If you're building a 32-bit OS anyway
    (which we were) all you have to do is copy all 32-bit libraries from
    one one repo to the other.

    FreeBSD does the same thing. The 32 bit libraries are installed by default
    on 64 bit systems because, by current standards, they're not very big.

    I've stopped installing them because I know I don't have any 32 bit apps
    left but on systems with old packages, who knows?

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 13 16:11:02 2025
    aph@littlepinkcloud.invalid writes:
    I thought the AArch64 ILP32 design was pretty neat, but no one seems
    to have been interested. I guess there wasn't an advantage worth the
    effort.

    Alpha: On Digital OSF/1 the advantage was to be able to run programs
    that work on ILP32, but not I32LP64.

    x32: I expect that maintained Unix programs ran on I32LP64 in 2012,
    and unmaintained ones did not get an x32 port anyway. And if there
    are cases where my expectations do not hold, there still is i386. The
    only advantage of x32 was a speed advantage on select programs.
    That's apparently not enough to gain a critical mass of x32 programs.

    Aarch64-ILP32: My guess is that the situation is very similar to the
    x32 situation. Admittedly, there are CPUs without ARM A32/T32
    support, but if there was any significant program for these CPUs that
    does not work with I32LP64, the manufacturer would have chosen to
    include the A32/T32 option. Given that the situation is the same as
    for x32, the result is the same: What I find about it are discussions
    about deprecation and removal <https://www.phoronix.com/news/GCC-Deprecates-ARM64-ILP32>.

    Concerning performance, <https://static.linaro.org/connect/bkk16/Presentations/Wednesday/BKK16-305B.pdf>
    shows SPECint 2006 benchmarks on two unnamed platforms. Out of 12
    benchmark programs, ILP32 shows a speedup by a factor ~1.55 on
    429.mcf, ~1.2 on 471.omnetpp, ~1.1 on 483.xalancbmk, ~1.05 on 403.gcc,
    and ~0.95 (i.e., slowdowns) on 401.bzip2, 456.hmmer, 458.sjeng.

    That slide deck concludes with:

    |Do We Care? Enough?
    |
    |A lot of code to maintain for little gain.

    Apparently the answer to these questions is no.

    Followups to comp.arch.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 13 18:22:17 2025
    Thomas Koenig <tkoenig@netcologne.de> writes:
    To be efficient, a RISC needs a full-width (presumably 32 bit)
    external data bus, plus a separate address bus, which should at
    least be 26 bits, better 32. A random ARM CPU I looked at on
    bitsavers had 84 pins, which sounds reasonable.

    Building an ARM-like instead of a 68000 would have been feasible,
    but the resulting systems would have been more expensive (the
    68000 had 64 pins).

    One could have done a RISC-VAX microprocessor with 16-bit data bus and
    24-bit address bus, like the 68000, or even an 8-bit data bus, and
    without FPU and MMU and without PDP-11 decoder. The performance would
    have been memory-bandwidth-limited and therefore similar to the 68000
    and 68008, respectively (unless extra love was spent on the memory
    interface, e.g., with row optimization), with a few memory accesses
    saved by having more registers. This would still have made sense in a
    world where the same architecture was available (with better
    performance) on the supermini of the day, the RISC-VAX: Write your
    code on the cheap micro RISC-VAX and this will give you the
    performance advantages in a few years when proper 32-bit computing
    arrives (or on more expensive systems today).
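
    To put a rough number on "memory-bandwidth-limited": with fixed
    32-bit instructions, the bus width directly caps the fetch rate.
    A back-of-the-envelope sketch in C (the 8 MHz clock and the
    68000-style 4-clock bus cycle are my assumptions, not measured
    figures):

    #include <stdio.h>

    int main(void)
    {
        const double clock_hz       = 8e6; /* assumed CPU clock      */
        const double clocks_per_bus = 4.0; /* 68000-style bus cycle  */
        const int    insn_bytes     = 4;   /* fixed 32-bit RISC insn */

        for (int bus_bytes = 1; bus_bytes <= 4; bus_bytes *= 2) {
            double clocks = (double)insn_bytes / bus_bytes * clocks_per_bus;
            printf("%2d-bit bus: %4.0f clocks/fetch -> %.1f MIPS peak\n",
                   8 * bus_bytes, clocks, clock_hz / clocks / 1e6);
        }
        return 0;
    }

    That gives 0.5, 1, and 2 MIPS peak for 8-, 16-, and 32-bit buses
    before any data traffic, which is why the narrow-bus RISC-VAX
    would land in 68008/68000 territory no matter how lean its core.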

    So... a strategy could have been to establish the concept with
    minicomputers, to make money (the VAX sold big) and then move
    aggressively towards microprocessors, trying the disruptive move
    towards workstations within the same company (which would be HARD).

    For workstations one would need the MMU and the FPU as extra chips.

    Getting a company to avoid trying to milk the cash cow for longer
    (short-term profits) by burying in-company progress (that other
    companies then make, i.e., long-term loss) may be hard, but given that
    some companies have survived, it's obviously possible.

    HP seems to have avoided the problem at various stages: They had their
    own HP3000 and HP9000/500 architectures, but found ways to drop that
    for HPPA without losing too many customers, then they dropped HPPA for
    IA-64, and IA-64 for AMD64, and they still survive. They also managed
    to become one of the biggest PC makers, but found it necessary to
    split the PC and big-machine businesses into two companies.

    As for the PC - a scaled-down, cheap, compatible, multi-cycle per
    instruction microprocessor could have worked for that market,
    but it is entirely unclear to me what this would / could
    have done to the PC market, if IBM could have been prevented
    from gaining such market dominance.

    The IBM PC success was based on the open architecture, on being more
    advanced than the Apple II and not too expensive, and the IBM name
    certainly helped at the start. In the long run it was an Intel and
    Microsoft success, not an IBM success. And Intel's 8086 success was
    initially helped by being able to port 8080 programs (with 8080->8086 assemblers).

    So how could one capture the PC market? The RISC-VAX would probably
    have been too expensive for a PC, even with an 8-bit data bus and a
    reduced instruction set, along the lines of RV32E. Or maybe that
    would have been feasible, in which case one would provide 8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
    porting easier. And then try to sell it to IBM Boca Raton.

    An alternative would be to sell it as a faster and better upgrade path
    for the 8088 later, as competition to the 80286. Have a RISC-VAX
    (without MMU and FPU) with an additional 8086 decoder for running
    legacy programs (should be possible in the 134,000 transistors that the
    80286 has): Users could run their existing code, as well as
    future-oriented (actually present-oriented) 32-bit code. The next
    step would be adding the TLB for paging.

    Concerning on how to do it from the business side: The microprocessor
    business (at least, maybe more) should probably be spun off as an
    independent company, such that customers would not need to worry about
    being at a disadvantage compared to DEC in-house demands.

    One can also imagine other ways: Instead of the reduced-RISC-VAX, try
    to get a PDP-11 variant with 8-bit data bus into the actual IBM PC
    (instead of the 8088), or set up your own PC business based on such a processor; and then the logical upgrade path would be to the successor
    of the PDP-11, the RISC-VAX (with PDP-11 decoder).

    What about the fears of the majority in the company working on big
    computers? They would continue to make big computers, with initially
    faster and later more CPUs than PCs. That's what we are seeing today.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Wed Aug 13 19:37:27 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 5 Aug 2025 21:01:20 -0000 (UTC), Thomas Koenig wrote:

    So... a strategy could have been to establish the concept with
    minicomputers, to make money (the VAX sold big) and then move
    aggressively towards microprocessors, trying the disruptive move towards
    workstations within the same company (which would be HARD).

    None of the companies which tried to move in that direction were
    successful. The mass micro market had much higher volumes and lower
    margins, and those accustomed to lower-volume, higher-margin operation simply couldn’t adapt.

    At least some of the Nova-based microprocessors were relatively cheap,
    and still did not succeed. I think that the essential parts of the
    success of the 8088 were:

    * Offered 1MB of address space. In a cumbersome way (sketched in the
    example after this list), but still; and AFAIK less cumbersome than
    what you would do on a mini or Apple III.
    Intel's architects did not understand that themselves, as shown by
    the 80286, which offered decent support for multiple processes, each
    with 64KB address space. Users actually preferred single-tasking of
    programs that can access more than 64KB easily to multitasking of
    64KB (or 64KB+64KB) processes.

    * Cheap to design computers for, in particular the 8-bit bus and small
    package.

    * Support for porting 8080 assembly code to the 8086 architecture.
    That was not needed for long, but it provided a boost in available
    software at a critical time.
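
    For anyone who has not suffered real mode: the cumbersome part is
    the 8086's segment:offset scheme, where two 16-bit quantities
    combine into a 20-bit physical address. The scheme itself is
    documented 8086 behavior; only the sample values below are mine:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* 8086 real mode: physical address = segment * 16 + offset. */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Many segment:offset pairs alias the same physical byte: */
        printf("1234:0005 -> %05" PRIX32 "\n", phys(0x1234, 0x0005));
        printf("1000:2345 -> %05" PRIX32 "\n", phys(0x1000, 0x2345));
        return 0; /* both print 12345 */
    }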

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Aug 14 00:26:18 2025
    Reply-To: slp53@pacbell.net

    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    To be efficient, a RISC needs a full-width (presumably 32 bit)
    external data bus, plus a separate address bus, which should at
    least be 26 bits, better 32. A random ARM CPU I looked at on
    bitsavers had 84 pins, which sounds reasonable.

    Building an ARM-like instead of a 68000 would have been feasible,
    but the resulting systems would have been more expensive (the
    68000 had 64 pins).

    One could have done a RISC-VAX microprocessor with 16-bit data bus and
    24-bit address bus.

    LSI11?

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Aug 14 00:44:29 2025
    Reply-To: slp53@pacbell.net

    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    <snip>
    So how could one capture the PC market? The RISC-VAX would probably
    have been too expensive for a PC, even with an 8-bit data bus and a
    reduced instruction set, along the lines of RV32E. Or maybe that
    would have been feasible, in which case one would provide
    8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
    porting easier. And then try to sell it to IBM Boca Raton.

    https://en.wikipedia.org/wiki/Rainbow_100

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 14 03:46:59 2025
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    <snip>
    So how could one capture the PC market? The RISC-VAX would probably
    have been too expensive for a PC, even with an 8-bit data bus and a
    reduced instruction set, along the lines of RV32E. Or maybe that
    would have been feasible, in which case one would provide
    8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
    porting easier. And then try to sell it to IBM Boca Raton.

    https://en.wikipedia.org/wiki/Rainbow_100

    That's completely different from what I suggest above, and DEC
    obviously did not capture the PC market with that.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Anton Ertl@3:633/280.2 to All on Thu Aug 14 03:50:35 2025
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Building an ARM-like instead of a 68000 would have been feasible,
    but the resulting systems would have been more expensive (the
    68000 had 64 pins).

    One could have done a RISC-VAX microprocessor with 16-bit data bus and
    24-bit address bus.

    LSI11?

    The LSI11 uses four 40-pin chips from the MCP-1600 chipset (which is fascinating in itself <https://en.wikipedia.org/wiki/MCP-1600>) for a
    total of 160 pins; and it supported only 16 address bits without extra
    chips. That was certainly even more expensive (and also slower and
    less capable) than what I suggest above, but it was several years
    earlier, and what I envision was not possible in one chip then.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: Institut fuer Computersprachen, Technische Uni (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Thu Aug 14 05:09:35 2025
    On 8/13/25 11:26, Ted Nolan <tednolan> wrote:
    In article <2025Aug13.194659@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    <snip>
    So how could one capture the PC market? The RISC-VAX would probably
    have been too expensive for a PC, even with an 8-bit data bus and a
    reduced instruction set, along the lines of RV32E. Or maybe that
    would have been feasible, in which case one would provide
    8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
    porting easier. And then try to sell it to IBM Boca Raton.

    https://en.wikipedia.org/wiki/Rainbow_100

    That's completely different from what I suggest above, and DEC
    obviously did not capture the PC market with that.


    They did manage to crack the college market somewhat, where CS departments
    had DEC hardware anyway. I know USC (original) had a Rainbow computer
    lab circa 1985. That "in" didn't translate to anything else, though.

    Skidmore College was a DEC shop back in the day.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Thu Aug 14 05:35:09 2025
    In comp.arch Scott Lurndal <scott@slp53.sl.home> wrote:
    Terje Mathisen <terje.mathisen@tmsw.no> writes:
    Stephen Fuld wrote:
    On 8/4/2025 8:32 AM, John Ames wrote:

    snip

    This notion that the only advantage of a 64-bit architecture is a large
    address space is very curious to me. Obviously that's *one* advantage,
    but while I don't know the in-the-field history of heavy-duty business/
    scientific computing the way some folks here do, I have not gotten the
    impression that a lot of customers were commonly running up against the
    4 GB limit in the early '90s;

    Not exactly the same, but I recall an issue with Windows NT where it
    initially divided the 4GB address space in 2 GB for the OS, and 2GB for
    users. Some users were "running out of address space", so Microsoft
    came up with an option to reduce the OS space to 1 GB, thus allowing up
    to 3 GB for users. I am sure others here will know more details.

    Any program written to Microsoft/Windows spec would work transparently
    with a 3:1 split, the problem was all the programs ported from unix
    which assumed that any negative return value was a failure code.

    The only interfaces that I recall this being an issue for were
    mmap(2) and lseek(2). The latter was really related to maximum
    file size (although it applied to /dev/[k]mem and /proc/<pid>/mem
    as well). The former was handled by the standard specifying
    MAP_FAILED as the return value.

    That said, Unix generally defined -1 as the return value for all
    other system calls, and code that checked for "< 0" instead of
    -1 when calling a standard library function or system call was fundamentally broken.

    I remember RIM. When I compiled it on Linux and tried it, I got an
    error due to a check for "< 0". Changing it to "== -1" fixed it.
    Possibly there were similar troubles in other programs that I do not remember.
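
    To make the failure mode concrete, here is the pattern in
    miniature; a sketch assuming a Unix-like system with MAP_ANONYMOUS
    (spelled MAP_ANON on some BSDs). Only the comparison against
    MAP_FAILED is the documented check; the signed-compare shortcut
    silently depends on the kernel never returning an address with the
    top bit set:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if ((long)p < 0)       /* WRONG: a mapping at or above the   */
                               /* sign bit looks like a failure code */
            fprintf(stderr, "sloppy test claims failure\n");

        if (p == MAP_FAILED) { /* RIGHT: the standard check, (void *)-1 */
            perror("mmap");
            return 1;
        }
        return munmap(p, 4096);
    }

    With a 2 GB user space the two tests happen to agree, which is how
    such code survives until a 3 GB split (or a port that hands out
    high mappings) exposes it.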

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Dennis Boone@3:633/280.2 to All on Fri Aug 15 03:12:40 2025
    The LSI11 uses four 40-pin chips from the MCP-1600 chipset (which is fascinating in itself <https://en.wikipedia.org/wiki/MCP-1600>) for a
    total of 160 pins; and it supported only 16 address bits without extra chips. That was certainly even more expensive (and also slower and
    less capable) than what I suggest above, but it was several years
    earlier, and what I envision was not possible in one chip then.

    Maybe compare 808x to something more in its weight class? The 8-bit
    8080 was 1974, 16-bit 8086 1978, 16/8-bit 8088 1979.

    The DEC F-11 (~1979) and J-11 (~1982) microprocessor designs were
    capable of 22 bit addressing on a single 40-pin carrier.

    De

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From EricP@3:633/280.2 to All on Fri Aug 15 05:22:46 2025
    Dennis Boone wrote:
    The LSI11 uses four 40-pin chips from the MCP-1600 chipset (which is fascinating in itself <https://en.wikipedia.org/wiki/MCP-1600>) for a total of 160 pins; and it supported only 16 address bits without extra chips. That was certainly even more expensive (and also slower and
    less capable) than what I suggest above, but it was several years
    earlier, and what I envision was not possible in one chip then.

    Maybe compare 808x to something more in its weight class? The 8-bit
    8080 was 1974, 16-bit 8086 1978, 16/8-bit 8088 1979.

    The DEC F-11 (~1979) and J-11 (~1982) microprocessor designs were
    capable of 22 bit addressing on a single 40-pin carrier.

    De

    For those interested in a blast from the past, on the Wikipedia WD16 page https://en.wikipedia.org/wiki/Western_Digital_WD16

    is a link to a copy of Electronic Design magazine from 1977 which
    has a set of articles on microprocessors starting on page 60.

    It's a nice summary of the state of the microprocessor world circa 1977.

    https://www.worldradiohistory.com/Archive-Electronic-Design/1977/Electronic-Design-V25-N21-1977-1011.pdf

    Table 1 General Purpose Microprocessors on pg 62 shows 8 different
    16-bit microprocessor chip sets including the WD16.

    Table 3 on pg 66 shows ~11 bit-slice families that can be used to build
    larger microcoded processors, such as the AMD 2900 4-bit slice series.

    It also has many data sheets on various micros starting on pg 88
    and 16-bit ones starting on pg 170, mostly chips you never heard
    of, like the Ferranti F100L, but also some you'll know, like the
    Data General MicroNova mN601 on page 178.
    The Western Digital WD-16 is on pg 190.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Fri Aug 15 05:59:00 2025
    On 8/14/25 10:12 AM, Dennis Boone wrote:
    The DEC F-11 (~1979) and J-11 (~1982) microprocessor designs were
    capable of 22 bit addressing on a single 40-pin carrier.

    The only single-die PDP-11 DEC produced was the T-11, and it didn't
    have an MMU.

    The J-11 is a Harris two-chip hybrid, and is in a >40-pin chip carrier.
    http://simh.trailing-edge.com/semi/j11.html

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From OrangeFish@3:633/280.2 to All on Sat Aug 16 01:42:09 2025
    On 2025-08-12 15:09, John Levine wrote:
    According to <aph@littlepinkcloud.invalid>:
    In comp.arch BGB <cr88192@gmail.com> wrote:

    Also, IIRC, the major point of X32 was that it would narrow pointers and
    similar back down to 32 bits, requiring special versions of any shared
    libraries or similar.

    But, it is unattractive to have both 32 and 64 bit versions of all the SO's.

    We have done something similar for years at Red Hat: not X32, but
    x86_32, and it was pretty easy. If you're building a 32-bit OS anyway
    (which we were) all you have to do is copy all 32-bit libraries from
    one one repo to the other.

    FreeBSD does the same thing. The 32 bit libraries are installed by default on 64 bit systems because, by current standards, they're not very big.

    Same is true for Solaris Sparc.

    OF.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)