So going for microcode no longer was the best choice for the VAX, but
neither the VAX designers nor their competition realized this, and
commercial RISCs only appeared in 1986.
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the overcomplex
instruction and address modes and the tiny 512 byte page size.
Concerning code density, while VAX code is compact, RISC-V code with the
C extension is more compact
<2025Mar4.093916@mips.complang.tuwien.ac.at>, so in our time-traveling
scenario that would not be a reason for going for the VAX ISA.
Another aspect from those measurements is that the 68k instruction set (with only one memory operand for any compute instructions, and 16-bit granularity) has a code density similar to the VAX.
Another, which is not entirely their fault, is that they did not expect
compilers to improve as fast as they did, leading to a machine which was fun to
program in assembler but full of stuff that was useless to compilers and
instructions like POLY that should have been subroutines. The 801 project and
PL.8 compiler were well underway at IBM by the time the VAX shipped, but DEC
presumably didn't know about it.
DEC was probably aware, from the work of William Wulf and his students,
of what optimizing compilers can do and how to write them. After all,
they used his language BLISS and its compiler themselves.
POLY would have made sense in a world where microcode makes sense: if
microcode can be executed faster than subroutines, put a building
block for transcendental library functions into microcode. Of course,
given that microcode no longer made sense for the VAX, POLY did not make
sense for it, either.
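For concreteness, here is a minimal C sketch of the computation POLY
performed: polynomial evaluation by Horner's rule over a coefficient
table. This is an illustration, not DEC's microcode; the real
instruction's operand details and coefficient ordering are glossed over.

  /* Horner's rule, the core of POLY; as a subroutine it is a short loop. */
  static double poly_eval(double x, const double *coeff, unsigned degree)
  {
      double r = coeff[0];              /* highest-order coefficient first */
      for (unsigned i = 1; i <= degree; i++)
          r = r * x + coeff[i];         /* one multiply-add per coefficient */
      return r;
  }
  /* e.g. 3*x*x + 2*x + 1:  double c[] = {3, 2, 1};  poly_eval(x, c, 2) */

A microcoded POLY only wins if the microengine can beat this loop; once
it cannot, the subroutine is the better building block.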
Related to the microcode issue they also don't seem to have anticipated how
important pipelining would be. Some minor changes to the VAX, like not letting
one address modify another in the same instruction, would have made it a lot
easier to pipeline.
My RISC alternative to the VAX 11/780 (RISC-VAX) would probably have
to use pipelining (maybe a three-stage pipeline like the first ARM) to achieve its clock rate goals; that would eat up some of the savings in implementation complexity that avoiding the actual VAX would have
given us.
Another issue is how to implement the PDP-11 emulation mode.
I would add a PDP-11 decoder (as the actual VAX 11/780 probably has)
that would decode PDP-11 code into RISC-VAX instructions, or into what RISC-VAX instructions are decoded into. The cost of that is probably similar to that in the actual VAX 11/780. If the RISC-VAX ISA has a MIPS/Alpha/RISC-V-like handling of conditions, the common microcode
would have to support both the PDP-11 and the RISC-VAX handling of conditions; probably not that expensive, but maybe one still would
prefer an ARM/SPARC/HPPA-like handling of conditions.
As for a RISC-VAX: To little old naive me, it seems that it would have
been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a well-connected group of graduate students. How good a RISC might have
been feasible?
On 7/30/25 10:17 AM, Lars Poulsen wrote:
Early RISC-like instruction sets existed on microcoded machines.
The Ridge-32 for example, whose designers came out of the HP 3000 world,
was claimed at the time to be the first commercial RISC system.
Pyramid may have been another example, but very little (at least by me)
is known of their ISA.
John Levine <johnl@taugh.com> writes:
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the overcomplex
instruction and address modes and the tiny 512 byte page size.
Another, which is not entirely their fault, is that they did not expect
compilers to improve as fast as they did, leading to a machine which was fun to
program in assembler but full of stuff that was useless to compilers and
instructions like POLY that should have been subroutines. The 801 project and
PL.8 compiler were well underway at IBM by the time the VAX shipped, but DEC
presumably didn't know about it.
On 7/30/25 10:17, Lars Poulsen wrote:
John Levine <johnl@taugh.com> writes:
That is certainly true but there were other mistakes too. One is that
they underestimated how cheap memory would get, leading to the overcomplex
instruction and address modes and the tiny 512 byte page size.
That's a simple mistake to fix in software, though - always work with
multiples of pages, like 16 or more.
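A hypothetical C sketch of that software fix: treat a cluster of sixteen
consecutive 512-byte hardware pages as one 8 KB logical page, so that
allocators and paging I/O never deal in single hardware pages. All the
names here are invented for illustration.

  #define HW_PAGE_BYTES   512UL
  #define CLUSTER         16UL
  #define LOG_PAGE_BYTES  (HW_PAGE_BYTES * CLUSTER)   /* 8192 */

  /* Round a byte count up to a whole number of logical pages
     (LOG_PAGE_BYTES is a power of two, so masking works). */
  static unsigned long round_up_pages(unsigned long bytes)
  {
      return (bytes + LOG_PAGE_BYTES - 1) & ~(LOG_PAGE_BYTES - 1);
  }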
I only did a little VAX assembler. Maybe if I'd done more I'd have
coding patterns as a reflex, but the number of possible variant
instructions always had me stuck in a mental loop: "Do I want a one- or
two- (or three-) address instruction here?"
In the days of VAX-11/780, it was "obvious" that operating systems would
be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native"
code.
As for a RISC-VAX: To little old naive me, it seems that it would have
been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a
well-connected group of graduate students. How good a RISC might have
been feasible?
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
In the days of VAX-11/780, it was "obvious" that operating systems would
be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native"
code.
Yes. I don't think that the productivity would have suffered from a
load/store architecture, though.
As for a RISC-VAX: To little old naive me, it seems that it would have
been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a
well-connected group of graduate students. How good a RISC might have
been feasible?
Did the VAX 11/780 have writable microcode?
Given that the VAX 11/780 was not (much) pipelined, I don't expect
that using an alternative microcode that implements a RISC ISA would
have performed well.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
In the days of VAX-11/780, it was "obvious" that operating systems would
be written in assembler in order to be efficient, and the instruction
set allowed high productivity for writing systems programs in "native"
code.
Yes. I don't think that the productivity would have suffered from a
load/store architecture, though.
As for a RISC-VAX: To little old naive me, it seems that it would have
been possible to create an alternative microcode load that would be able
to support a RISC ISA on the same hardware, if the idea had occurred to a
well-connected group of graduate students. How good a RISC might have
been feasible?
Did the VAX 11/780 have writable microcode?
Yes.
Given that the VAX 11/780 was not (much) pipelined, I don't expect
that using an alternative microcode that implements a RISC ISA would
have performed well.
A new ISA also requires development of the complete software
infrastructure for building applications (compilers, linkers,
assemblers); updating the OS, rebuilding existing applications
for the new ISA, field and customer training, etc.
Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most alpha customers were existing VAX
customers - it's not clear that DEC actually grew the customer
base by switching to Alpha.
Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
with something else?
In article <106k15u$qgip$6@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Fri, 1 Aug 2025 20:06:43 -0700, Peter Flass wrote:
Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
with something else?
PRISM was going to be a new hardware architecture, and MICA the OS to run
on it. Yes, they were supposed to solve the problem of where DEC was going
to go since the VAX architecture was clearly being left in the dust by
RISC.
I think the MICA kernel was going to support the concept of
“personalities”, so that a VMS-compatible environment could be implemented
by one set of upper layers, while another set could provide Unix
functionality.
I think the project was taking too long, and not making enough progress.
So DEC management cancelled the whole thing, and brought out a MIPS-based
machine instead.
The guy in charge got annoyed at the killing of his pet project and left
in a huff. He took some of those ideas with him to his new employer, to
create a new OS for them.
The new employer was Microsoft. The guy in question was Dave Cutler. The
OS they brought out was called “Windows NT”.
And it's *still* not finished!
On 8/1/25 11:11, Scott Lurndal wrote:
A new ISA also requires development of the complete software
infrastructure for building applications (compilers, linkers,
assemblers); updating the OS, rebuilding existing applications
for the new ISA, field and customer training, etc.
Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most alpha customers were existing VAX
customers - it's not clear that DEC actually grew the customer
base by switching to Alpha.
Wasn't PRISM/MICA supposed to solve this problem, or am I confusing it
with something else?
IIUC PRISM eventually became Alpha.
And Windows on Alpha had a brief shining moment in the sun (no
pun intended).
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Given that the VAX 11/780 was not (much) pipelined, I don't expect
that using an alternative microcode that implements a RISC ISA would
have performed well.
A new ISA also requires development of the complete software
infrastructure for building applications (compilers, linkers,
assemblers); updating the OS, rebuilding existing applications
for the new ISA, field and customer training, etc.
Digital eventually did move VMS to Alpha, but it was neither
cheap, nor easy. Most alpha customers were existing VAX
customers - it's not clear that DEC actually grew the customer
base by switching to Alpha.
1) Performance, and that cost DEC customers since RISCs were
introduced in the mid-1980s. DecStations were introduced to reduce
this bleeding, but of course this meant that these customers were
not VAX customers.
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
1) Performance, and that cost DEC customers since RISCs were
introduced in the mid-1980s. DecStations were introduced to reduce
this bleeding, but of course this meant that these customers were
not VAX customers.
Or, even more importantly, VMS customers.
One big selling point of Alpha was 64-bit architecture, but IIUC
VMS was never fully ported to 64 bits; that is, a lot of VMS
software used 32-bit addresses and some system interfaces were
32-bit only. OTOH Unix for Alpha was claimed to be pure 64-bit.
I guess I'm getting DecStations and VaxStations mixed up. Maybe one of
their problems was brand confusion.
In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
instead of the actual (CISC) VAX, so there would be no additional
ISA.
Vobis (a German discount computer reseller) offered Alpha-based Windows
boxes in 1993 and another model in 1997. Far too expensive for private
users ...
On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:
Vobis (a German discount computer reseller) offered Alpha-based Windows
boxes in 1993 and another model in 1997. Far too expensive for private
users ...
And what a waste of a 64-bit architecture, to run it in 32-bit-only
mode ...
Lawrence D'Oliveiro [2025-08-02 23:21:18] wrote:
On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:
Vobis (a German discount computer reseller) offered Alpha-based
Windows boxes in 1993 and another model in 1997. Far too expensive
for private users ...
And what a waste of a 64-bit architecture, to run it in 32-bit-only
mode ...
What do you mean by that?
On Sat, 02 Aug 2025 23:10:56 -0400, Stefan Monnier wrote:
Lawrence D'Oliveiro [2025-08-02 23:21:18] wrote:
On Sat, 2 Aug 2025 09:07:14 -0000 (UTC), Thomas Koenig wrote:
Vobis (a German discount computer reseller) offered Alpha-based
Windows boxes in 1993 and another model in 1997. Far too expensive
for private users ...
And what a waste of a 64-bit architecture, to run it in 32-bit-only
mode ...
What do you mean by that?
Of all the major OSes for Alpha, Windows NT was the only one
that couldn’t take advantage of the 64-bit architecture.
In comp.arch Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
Did the VAX 11/780 have writable microcode?
Yes, 12 kB (2K words 96-bit each).
One piece of supporting software
was a VAX emulator IIRC called FX11: it allowed running unmodified
VAX binaries.
OTOH Unix for Alpha was claimed to be pure 64-bit.
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.
Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any platforms that do/did ILP64.
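The difference is easy to make concrete; a small, portable C program
(nothing platform-specific assumed) prints the sizes that distinguish
the three models:

  #include <stdio.h>

  /* I32LP64 (64-bit *nix):    int 4, long 8, long long 8, pointer 8
     LLP64   (64-bit Windows): int 4, long 4, long long 8, pointer 8
     ILP64   (rare):           int 8, long 8, long long 8, pointer 8 */
  int main(void)
  {
      printf("int=%zu long=%zu long long=%zu ptr=%zu\n",
             sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
      return 0;
  }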
On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.
Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
platforms that do/did ILP64.
Yeah, pretty much nothing does ILP64, and doing so would actually be a problem.
Also, C type names:
char : 8 bit
short : 16 bit
int : 32 bit
long : 64 bit
long long: 64 bit
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
...
Current system seems preferable.
Well, at least in absence of maybe having the compiler specify actual fixed-size types.
Or, say, what if there was a world where the actual types were, say:
_Int8, _Int16, _Int32, _Int64, _Int128
And, then, say:
char, short, int, long, ...
Were seen as aliases.
Well, maybe along with __int64 and friends, but __int64 and _Int64 could
be seen as equivalent.
Then of course, the "stdint.h" types.
Traditionally, these are a bunch of typedef's to the 'int' and friends.
But, one can imagine a hypothetical world where stdint.h contained
things like, say:
typedef _Int32 int32_t;
antispam@fricas.org (Waldek Hebisch) writes:
One piece of supporting sofware
was a VAX emulator IIRC called FX11: it allowed running unmodified
VAX binaries.
There was also a static binary translator for DecStation binaries. I
never used it, but a colleague tried to. He found that on the Prolog
systems that he tried it with (I think it was Quintus or SICStus), it
did not work, because that system did unusual things with the binary,
and that did not work on the result of the binary translation. Moral
of the story: Better use dynamic binary translation (which Apple did
for their 68K->PowerPC transition at around the same time).
OTOH Unix for Alpha was claimed to be pure 64-bit.
It depends on the kind of purity you are aspiring to. After a bunch
of renamings it was finally called Tru64 UNIX. Not Pur64, but
Tru64:-) Before that, it was called Digital UNIX (but once DEC had
been bought by Compaq, that was no longer appropriate), and before
that, DEC OSF/1 AXP.
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
In addition there were some OS features for running ILP32 programs,
similar to Linux' MAP_32BIT flag for mmap(). IIRC Netscape Navigator
was compiled as ILP32 program (the C compiler had a flag for that),
and needed these OS features.
- anton
Maybe MIPS-to-Alpha was static simply because it had much lower
priority within DEC?
Michael S <already5chosen@yahoo.com> writes:
On Sun, 3 Aug 2025 21:07:02 -0500
BGB <cr88192@gmail.com> wrote:
Except for majority of the world where long is 32 bit
What majority? Linux owns the server market, the
appliance market and much of the handset market (which apple
dominates with their OS). And all Unix/Linux systems have
64-bit longs on 64-bit CPUs.
Scott Lurndal wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 3 Aug 2025 21:07:02 -0500
BGB <cr88192@gmail.com> wrote:
Except for majority of the world where long is 32 bit
What majority? Linux owns the server market, the
appliance market and much of the handset market (which apple
dominates with their OS). And all Unix/Linux systems have
64-bit longs on 64-bit CPUs.
Apple/iPhone might dominate in the US market (does it?), but in the rest
of the world Android (with linux) is far larger. World total is 72%
Android, 28% iOS.
And what a waste of a 64-bit architecture, to run it in 32-bit-only
mode ...
What do you mean by that? IIUC, the difference between 32bit and
64bit (in terms of cost of designing and producing the CPU) was very
small. MIPS happily designed their R4000 as 64bit while knowing that
most of them would never get a chance to execute an instruction that
makes use of the upper 32bits.
On Sat, 02 Aug 2025 23:10:56 -0400
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
And what a waste of a 64-bit architecture, to run it in 32-bit-only
mode ...
What do you mean by that? IIUC, the difference between 32bit and
64bit (in terms of cost of designing and producing the CPU) was very
small. MIPS happily designed their R4000 as 64bit while knowing that
most of them would never get a chance to execute an instruction that
makes use of the upper 32bits.
This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me. Obviously that's *one* advantage,
but while I don't know the in-the-field history of heavy-duty business/ scientific computing the way some folks here do, I have not gotten the impression that a lot of customers were commonly running up against the
4 GB limit in the early '90s; meanwhile, the *other* advantage - higher performance for the same MIPS on a variety of compute-bound tasks - is
being overlooked entirely, it seems.
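One simplified, hypothetical illustration of that second advantage:
anything that moves or folds memory in register-sized chunks gets twice
the work per instruction on a 64-bit machine, with no change to the
source.

  #include <stdint.h>
  #include <string.h>

  /* XOR-fold a buffer in register-sized chunks: with 8-byte words the
     loop runs half as many iterations as with 4-byte words.
     (Tail bytes are ignored; illustration only.) */
  static uintptr_t fold(const unsigned char *p, size_t len)
  {
      uintptr_t acc = 0, w;
      for (size_t i = 0; i + sizeof w <= len; i += sizeof w) {
          memcpy(&w, p + i, sizeof w);   /* one register-sized load */
          acc ^= w;
      }
      return acc;
  }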
On 8/3/25 19:07, BGB wrote:
On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
As far as I’m aware, I32LP64 is the standard across 64-bit *nix systems.
Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
platforms that do/did ILP64.
Yeah, pretty much nothing does ILP64, and doing so would actually be a
problem.
Also, C type names:
char : 8 bit
short : 16 bit
int : 32 bit
long : 64 bit
long long: 64 bit
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
...
Current system seems preferable.
Well, at least in absence of maybe having the compiler specify actual
fixed-size types.
Or, say, what if there was a world where the actual types were, say:
_Int8, _Int16, _Int32, _Int64, _Int128
And, then, say:
char, short, int, long, ...
Were seen as aliases.
Well, maybe along with __int64 and friends, but __int64 and _Int64
could be seen as equivalent.
Then of course, the "stdint.h" types.
Traditionally, these are a bunch of typedef's to the 'int' and friends.
But, one can imagine a hypothetical world where stdint.h contained
things like, say:
typedef _Int32 int32_t;
Like PL/I which lets you specify any precision: FIXED BINARY(31), FIXED BINARY(63) etc.
C keeps borrowing more and more PL/I features.
On Sun, 3 Aug 2025 21:07:02 -0500
BGB <cr88192@gmail.com> wrote:
On 8/3/2025 7:04 PM, Lawrence D'Oliveiro wrote:
On Sun, 03 Aug 2025 16:51:10 GMT, Anton Ertl wrote:
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
As far as I’m aware, I32LP64 is the standard across 64-bit *nix
systems.
Microsoft’s compilers for 64-bit Windows do LLP64. Not aware of any
platforms that do/did ILP64.
Yeah, pretty much nothing does ILP64, and doing so would actually be
a problem.
Also, C type names:
char : 8 bit
short : 16 bit
int : 32 bit
Except in embedded, where 16-bit ints are not rare
long : 64 bit
Except for the majority of the world, where long is 32 bit
long long: 64 bit
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
...
Current system seems preferable.
Well, at least in absence of maybe having the compiler specify actual
fixed-size types.
Or, say, what if there was a world where the actual types were, say:
_Int8, _Int16, _Int32, _Int64, _Int128
And, then, say:
char, short, int, long, ...
Were seen as aliases.
Actually, in our world the latest C standard (C23) has them, but the
spelling is different: _BitInt(32) and unsigned _BitInt(32).
I'm not sure if any major compiler already has them implemented. Bing
copilot says that clang does, but I don't tend to believe everything Bing
copilot says.
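For what it's worth, clang has shipped _BitInt for several releases now,
with GCC adding it more recently. A minimal sketch, assuming a compiler
with C23 _BitInt support:

  #include <stdio.h>

  int main(void)
  {
      _BitInt(32) a = 100000;          /* exactly 32 bits wide */
      unsigned _BitInt(12) b = 4000;   /* 12-bit unsigned, wraps mod 4096 */
      b += 200;                        /* 4200 mod 4096 == 104 */
      printf("%d %u\n", (int)a, (unsigned)b);
      return 0;
  }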
Well, maybe along with __int64 and friends, but __int64 and _Int64
could be seen as equivalent.
Then of course, the "stdint.h" types.
Traditionally, these are a bunch of typedef's to the 'int' and
friends. But, one can imagine a hypothetical world where stdint.h
contained things like, say:
typedef _Int32 int32_t;
...
On Sat, 02 Aug 2025 09:28:17 GMT, Anton Ertl wrote:
In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
instead of the actual (CISC) VAX, so there would be no additional
ISA.
In order to be RISC, it would have had to add registers and remove
addressing modes from the non-load/store instructions (and replace "move"
with separate "load" and "store" instructions).
"No additional ISA" or
not, it would still have broken existing code.
Remember that VAX development started in the early-to-mid-1970s.
RISC was
still nothing more than a research idea at that point, which had yet to
prove itself.
The claim by John Savard was that the VAX "was a good match to the
technology *of its time*". It was not. It may have been a good match
for the beliefs of the time, but that's a different thing.
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
The claim by John Savard was that the VAX "was a good match to the
technology *of its time*". It was not. It may have been a good match
for the beliefs of the time, but that's a different thing.
I concur; also, the evidence of the 801 supports that (and that
was designed around the same time as the VAX).
Michael S <already5chosen@yahoo.com> writes:
scott@slp53.sl.home (Scott Lurndal) wrote:
What majority? Linux owns the server market, the
appliance market and much of the handset market (which apple
dominates with their OS). And all Unix/Linux systems have
64-bit longs on 64-bit CPUs.
Majority of the world is embedded. Overwhelming majority of embedded is
32-bit or narrower.
In terms of shipped units, perhaps (although many are narrower, as you
point out). In terms of programmers, it's a fairly small fraction that
do embedded programming.
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
The claim by John Savard was that the VAX "was a good match to the technology *of its time*". It was not. It may have been a good
match for the beliefs of the time, but that's a different thing.
I concur; also, the evidence of the 801 supports that (and that
was designed around the same time as the VAX).
Although, personally, I think Data General might have been the
better target. Going to Edson de Castro and telling him that he
was on the right track with the Nova from the start, and his ideas
should be extended, might have been politically easier than going
to DEC.
Scott Lurndal [2025-08-04 15:32:55] wrote:
In terms of shipped units, perhaps (although many are narrower, as
you point out). In terms of programmers, it's a fairly small
fraction that do embedded programming.
Yeah, the unit of measurement is a problem.
I wonder how it compares if you look at number of programmers paid to
write C code (after all, we're talking about C).
In the desktop/server/laptop/handheld world, AFAICT the market share
of C has shrunk significantly over the years whereas I get the
impression that it's still quite strong in the embedded space. But I
don't have any hard data.
Stefan
On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Although, personally, I think Data General might have been the
better target. Going to Edson de Castro and telling him that he
was on the right track with the Nova from the start, and his ideas
should be extended, might have been politically easier than going
to DEC.
I don't quite understand the context of this comment. Can you elaborate?
On Mon, 04 Aug 2025 15:09:55 -0400
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
Yeah, the unit of measurement is a problem.
I wonder how it compares if you look at number of programmers paid to
write C code (after all, we're talking about C).
In the desktop/server/laptop/handheld world, AFAICT the market share
of C has shrunk significantly over the years whereas I get the
impression that it's still quite strong in the embedded space. But I
don't have any hard data.
Stefan
Personally, [outside of Usenet and rwt forum] I know no one except
myself who writes C targeting user mode on "big" computers (big, in my
definition, starts at smartphone). Myself, I am doing it more as a
hobby and to make a point rather than out of professional needs.
Professionally, in this range I tend to use C++. Not a small part of it
is that C++ is more familiar than C for my younger co-workers.
Michael S <already5chosen@yahoo.com> schrieb:
On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Although, personally, I think Data General might have been the
better target. Going to Edson de Castro and telling him that he
was on the right track with the Nova from the start, and his ideas
should be extended, might have been politically easier than going
to DEC.
I don't quite understand the context of this comment. Can you
elaborate?
De Castro had had a big success with a simple load-store
architecture, the Nova. He did that to reduce CPU complexity
and cost, to compete with DEC and its PDP-8. (Byte addressing
was horrible on the Nova, though).
Now, assume that, as a time traveler wanting to kick off an early
RISC revolution, you are not allowed to reveal that you are a time
traveler (which would have larger effects than just a different
computer architecture). What do you do?
a) You go to DEC
b) You go to Data General
c) You found your own company
My guess would be that, with DEC, you would have the least chance of convincing corporate brass of your ideas. With Data General, you
could try appealing to the CEO's personal history of creating the
Nova, and thus his vanity. That could work. But your own company
might actually be the best choice, if you can get the venture
capital funding.
Michael S <already5chosen@yahoo.com> writes:
Personally, [outside of Usenet and rwt forum] I know no one except
myself who writes C targeting user mode on "big" computers (big, in
my definitions, starts at smartphone).
Linux developers would be a significant, if not large, pool
of C programmers.
Myself, I am doing it more as a hobby and to make a point rather than
out of professional needs. Professionally, in this range I tend to use
C++. Not a small part of it is that C++ is more familiar than C for my
younger co-workers.
Likewise, I've been using C++ rather than C since 1989, including for large-scale operating systems and hypervisors (both running on bare
metal).
On 8/4/2025 8:32 AM, John Ames wrote:
snip
This notion that the only advantage of a 64-bit architecture is a
large address space is very curious to me. Obviously that's *one* advantage, but while I don't know the in-the-field history of
heavy-duty business/ scientific computing the way some folks here
do, I have not gotten the impression that a lot of customers were
commonly running up against the 4 GB limit in the early '90s;
Not exactly the same, but I recall an issue with Windows NT where it initially divided the 4GB address space in 2 GB for the OS, and 2GB
for users. Some users were "running out of address space", so
Microsoft came up with an option to reduce the OS space to 1 GB, thus allowing up to 3 GB for users. I am sure others here will know more
details.
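The split is visible from user code. A small sketch using the Win32
GetSystemInfo call (a real API; the behavior notes below reflect the
/3GB mechanism as described above):

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      SYSTEM_INFO si;
      GetSystemInfo(&si);
      /* On 32-bit NT this is just under 2 GB by default, and just under
         3 GB when the system is booted with the /3GB option and the
         executable is linked with /LARGEADDRESSAWARE. */
      printf("max user-mode address: %p\n", si.lpMaximumApplicationAddress);
      return 0;
  }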
On Mon, 04 Aug 2025 20:29:35 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Personally, [outside of Usenet and rwt forum] I know no one except
myself who writes C targeting user mode on "big" computers (big, in
my definitions, starts at smartphone).
Linux developers would be a significant, if not large, pool
of C programmers.
According to my understanding, Linux developers *maintain* user-mode C
programs. They very rarely start new user-mode C programs from scratch.
The last big one I can think about was git almost 2 decades ago. And
even that happened more due to personal idiosyncrasies of its
originator than for solid technical reasons.
I could be wrong about it, of course.
For a few of your previous projects I am convinced that it was the
wrong tool.
On Mon, 4 Aug 2025 20:13:54 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Why not go to somebody who has money and interest to build
microprocessors, but no existing mini/mainframe/SuperC business?
If we limit ourselves to the USA then Moto, Intel, AMD, NatSemi...
Maybe even AT&T? Or was AT&T still banned from making computers in
the mid 70s?
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 8/4/2025 8:32 AM, John Ames wrote:
snip
This notion that the only advantage of a 64-bit architecture is a large
address space is very curious to me. Obviously that's *one* advantage,
but while I don't know the in-the-field history of heavy-duty business/
scientific computing the way some folks here do, I have not gotten the
impression that a lot of customers were commonly running up against the
4 GB limit in the early '90s;
Not exactly the same, but I recall an issue with Windows NT where it
initially divided the 4GB address space in 2 GB for the OS, and 2GB for
users. Some users were "running out of address space", so Microsoft
came up with an option to reduce the OS space to 1 GB, thus allowing up
to 3 GB for users. I am sure others here will know more details.
AT&T SVR[34] Unix systems had the same issue on x86, as did Linux. They
mainly used the same solution as well (give the user 3GB of virtual
address space).
I believe SVR4 was also able to leverage 36-bit physical addressing to
use more than 4GB of DRAM, while still limiting a single process to 2 or
3GB of user virtual address space.
On 8/2/25 1:07 AM, Waldek Hebisch wrote:
IIUC PRISM eventually became Alpha.
Not really. Documents for both, including
the rare PRISM docs are on bitsavers.
PRISM came out of Cutler's DEC West group,
Alpha from the East Coast. I'm not aware
of any team member overlap.
This notion that the only advantage of a 64-bit architecture is a large address space is very curious to me.
Obviously that's *one* advantage, but while I don't know the
in-the-field history of heavy-duty business/ scientific computing
the way some folks here do, I have not gotten the impression that a
lot of customers were commonly running up against the 4 GB limit in
the early '90s ...
... meanwhile, the *other* advantage - higher performance for the
same MIPS on a variety of compute-bound tasks - is being overlooked
entirely, it seems.
... I recall an issue with Windows NT where it initially divided the
4GB address space in 2 GB for the OS, and 2GB for users. Some users
were "running out of address space", so Microsoft came up with an
option to reduce the OS space to 1 GB, thus allowing up to 3 GB for
users. I am sure others here will know more details.
BTW: AMD-64 was a special case: since 64-bit mode was bundled with an
increased number of GPRs, with PC-relative addressing and with a
register-based calling convention, on average 64-bit code was faster than
32-bit code. And since AMD-64 was relatively late to the 64-bit game there
was limited motivation to develop a mode using 32-bit addressing and
64-bit instructions. It works in compilers and in Linux, but support is
much worse than for using 64-bit addressing.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Sat, 02 Aug 2025 09:28:17 GMT, Anton Ertl wrote:
In my RISC-VAX scenario, the RISC-VAX would be the PDP-11 followon
instead of the actual (CISC) VAX, so there would be no additional
ISA.
In order to be RISC, it would have had to add registers and remove
addressing modes from the non-load/store instructions (and replace
"move" with separate "load" and "store" instructions).
Add registers: No, ARM A32 is RISC and has as many registers as VAX ...
The essence of RISC really is just exposing what existed in the
microcode engines to user-level programming and didn't really make
sense until main memory systems got a lot faster.
a) You go to DEC
b) You go to Data General
c) You found your own company
The ban on AT&T was the whole reason they released Unix freely.
Then when things lifted (after the AT&T break-up), they tried to
re-assert their control over Unix, which backfired.
And, they tried to make and release a workstation, but by then they
were competing against the IBM PC Clone market (and also everyone
else trying to sell Unix workstations at the time), ...
And as others noticed, I32LP64 was very common.
MIPS products came out of DECWRL (the research group that had been
started to build Titan) and were stopgaps until the "real" architecture
came out (Cutler's, out of DECWest).
I don't think they ever got much love out of DEC corporate; they were
just done so DEC didn't completely get its lunch eaten in the Unix
workstation market.
On Mon, 4 Aug 2025 23:24:15 -0000 (UTC), Waldek Hebisch wrote:
BTW: AMD-64 was a special case: since 64-bit mode was bundled with
increasing number of GPR-s, with PC-relative addressing and with
register-based call convention on average 64-bit code was faster than
32-bit code. And since AMD-64 was relatively late in 64-bit game there
was limited motivation to develop mode using 32-bit addressing and
64-bit instructions. It works in compilers and in Linux, but support is
much worse than for using 64-bit addressing.
Intel was trying to promote this in the form of the “X32” ABI. The Linux kernel and some distros did include support for this. I don’t think it was very popular, and it may be extinct now.
On Mon, 4 Aug 2025 18:07:48 +0300, Michael S wrote:
Majority of the world is embedded. Overwhelming majority of embedded is
32-bit or narrower.
Embedded CPUs are mostly ARM, MIPS, RISC-V ... all of which are available
in 64-bit variants.
On Mon, 4 Aug 2025 14:06:17 -0700, Stephen Fuld wrote:
... I recall an issue with Windows NT where it initially divided the
4GB address space in 2 GB for the OS, and 2GB for users. Some users
were "running out of address space", so Microsoft came up with an
option to reduce the OS space to 1 GB, thus allowing up to 3 GB for
users. I am sure others here will know more details.
That would have been prone to breakage in poorly-written programs that
were using signed instead of unsigned comparisons on memory block sizes.
I hit an earlier version of this problem in about the mid-1980s, trying to help a user install WordStar on his IBM PC, which was one of the earliest machines to have 640K of RAM. The WordStar installer balked, saying he didn’t have enough free RAM!
The solution: create a dummy RAM disk to bring the free memory size down below 512K. Then after the installation succeeded, the RAM disk could be removed.
AFAIK (from what I heard about all of this):
The ban on AT&T was the whole reason they released Unix freely.
Then when things lifted (after the AT&T break-up), they tried to
re-assert their control over Unix, which backfired. And, they tried to
make and release a workstation, but by then they were competing against
the IBM PC Clone market (and also everyone else trying to sell Unix workstations at the time), ...
Then, while they were trying to re-consolidate Unix under their control
and fighting with the BSD people over copyright, etc., Linux and
Microsoft came in and mostly ate what market they might have had.
On Mon, 4 Aug 2025 20:13:54 -0000 (UTC), Thomas Koenig wrote:
a) You go to DEC
b) You go to Data General
c) You found your own company
How about d) Go talk to the man responsible for the fastest machines in
the world around that time, i.e. Seymour Cray?
Stephen Fuld wrote:
Not exactly the same, but I recall an issue with Windows NT where it
initially divided the 4GB address space in 2 GB for the OS, and 2GB for
users. Some users were "running out of address space", so Microsoft
came up with an option to reduce the OS space to 1 GB, thus allowing up
to 3 GB for users. I am sure others here will know more details.
Any program written to Microsoft/Windows spec would work transparently
with a 3:1 split, the problem was all the programs ported from unix
which assumed that any negative return value was a failure code.
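A sketch of that failure mode (hypothetical code, not from any
particular program): on a 32-bit system, addresses above 2 GB have the
top bit set, so a value squeezed through a signed type looks negative.

  #include <stdio.h>

  int main(void)
  {
      unsigned long addr = 0x90000000UL;  /* a valid address above 2 GB */
      long as_signed = (long)addr;        /* with 32-bit long: negative */

      if (as_signed < 0)
          puts("mistaken for an error return");
      return 0;
  }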
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <2025Aug3.185110@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
[snip]
The C environment for DEC OSF/1 was an I32LP64 setup, not an ILP64
setup, so can you really call it pure?
In the OS kernel, often times you want to allocate physical
address space below 4GiB for e.g. device BARs; many devices are
either 32-bit (but have to work on 64-bit systems) or work
better with 32-bit BARs.
Indeed. Modern PCI controllers tend to support remapping
a 64-bit physical address in the hardware to support devices
that only advertise 32-bit bars[*]. The firmware (e.g. UEFI
or BIOS) will setup the remapping registers and provide the
address of the 64-bit aperture to the kernel via device tree
or ACPI tables.
[*] AHCI is the typical example, which uses BAR5.
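For reference, the BAR width is encoded in the register itself; a short
sketch of the decoding as the PCI spec defines it:

  #include <stdint.h>
  #include <stdio.h>

  /* Bit 0 selects I/O vs memory space. Bits [2:1] of a memory BAR give
     its type: 0 = 32-bit (mappable below 4 GiB only), 2 = 64-bit
     (consumes two consecutive BAR slots). */
  static void classify_bar(uint32_t bar)
  {
      if (bar & 1u)
          puts("I/O space BAR");
      else if (((bar >> 1) & 3u) == 0u)
          puts("32-bit memory BAR: must live below 4 GiB");
      else if (((bar >> 1) & 3u) == 2u)
          puts("64-bit memory BAR: pairs with the next BAR");
      else
          puts("reserved BAR type");
  }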
On Mon, 4 Aug 2025 17:18:24 -0500, BGB wrote:
The ban on AT&T was the whole reason they released Unix freely.
It was never really “freely” available.
I'll say. We had to pay $20,000 for it in 1975. That was a lot.
Then when things lifted (after the AT&T break-up), they tried to
re-assert their control over Unix, which backfired.
They were already tightening things up from the Seventh Edition onwards -- remember, this version rescinded the permission to use the source code for classroom teaching purposes, neatly strangling the entire market for the legendary Lions Book. Which continued to spread afterwards via samizdat, nonetheless.
And, they tried to make and release a workstation, but by then they
were competing against the IBM PC Clone market (and also everyone
else trying to sell Unix workstations at the time), ...
That was a very successful market, from about the mid-1980s until the
mid-to-late 1990s. In spite of all the vendor lock-in and fragmentation,
it managed to survive, I think, because of the sheer performance
available in the RISC processors, which Microsoft tried to support with
its new “Windows NT” OS, but was never able to get quite right.
On Mon, 4 Aug 2025 18:07:48 +0300, Michael S wrote:
Majority of the world is embedded. Overwhelming majority of embedded is
32-bit or narrower.
Embedded CPUs are mostly ARM, MIPS, RISC-V ... all of which are available
in 64-bit variants.
On Mon, 4 Aug 2025 20:13:54 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
My guess would be that, with DEC, you would have the least chance of
convincing corporate brass of your ideas. With Data General, you
could try appealing to the CEO's personal history of creating the
Nova, and thus his vanity. That could work. But your own company
might actually be the best choice, if you can get the venture
capital funding.
Why not go to somebody who has money and interest to build
microprocessor, but no existing mini/mainframe/SuperC business?
If we limit ourselves to USA then Moto, Intel, AMD, NatSemi...
Maybe even AT&T? Or was AT&T still banned from making computers in
the mid 70s?
... the problem was all the programs ported from unix which assumed
that any negative return value was a failure code.
The 3B was an absolute dog. We had a couple at ACC, because we were
providing device drivers or something to an ATT project for a Federal
agency.
We also had first an 11/70 and later an 11/780 running 4BSD. The
BSD systems were pretty snappy.
And we had an 11/780 for the business side, running VMS, And a VMS
11/750 for engineering, which was not as well liked as the BSD until
we got the Wollongong overlay so we could network it to the BSD
system.
So... a strategy could have been to establish the concept with
minicomputers, to make money (the VAX sold big) and then move
aggressively towards microprocessors, trying the disruptive move towards workstations within the same company (which would be HARD).
As for the PC - a scaled-down, cheap, compatible, multi-cycle per
instruction microprocessor could have worked for that market,
but it is entirely unclear to me what this would / could have done to
the PC market, if IBM could have been prevented from gaining such market dominance.
A bit like the /360 strategy, offering a wide range of machines (or CPUs
and systems) with different performance.
On Tue, 5 Aug 2025 21:01:20 -0000 (UTC), Thomas Koenig wrote:
So... a strategy could have been to establish the concept with
minicomputers, to make money (the VAX sold big) and then move
aggressively towards microprocessors, trying the disruptive move towards
workstations within the same company (which would be HARD).
None of the companies which tried to move in that direction were
successful. The mass micro market had much higher volumes and lower
margins, and those accustomed to lower-volume, higher-margin operation
simply couldn’t adapt.
As for the PC - a scaled-down, cheap, compatible, multi-cycle per
instruction microprocessor could have worked for that market,
but it is entirely unclear to me what this would / could have done to
the PC market, if IBM could have been prevented from gaining such market
dominance.
IBM had massive marketing clout in the mainframe market. I think that was
the basis on which customers gravitated to their products. And remember,
the IBM PC was essentially a skunkworks project that totally went against
the entire IBM ethos. Internally, it was seen as a one-off mistake that
they determined never to repeat. Hence the PS/2 range.
DEC was bigger in the minicomputer market. If DEC could have offered an open-standard machine, that could have offered serious competition to IBM. But what OS would they have used? They were still dominated by Unix-haters then.
A bit like the /360 strategy, offering a wide range of machines (or CPUs
and systems) with different performance.
That strategy was radical in 1964, less so by the 1970s and 1980s. DEC,
for example, offered entire ranges of machines in each of its various minicomputer families.
The plurality of embedded systems are 8 bit processors - about 40
percent of the total. They are largely used for things like industrial automation, Internet of Things, SCADA, kitchen appliances, etc.
16-bit processors
account for a small, and shrinking, percentage. 32-bit is next (IIRC
~30-35%), but 64-bit is the fastest growing. Perhaps surprisingly, there
is still a small market for 4-bit processors for things like TV remote
controls, where battery life is more important than the highest
performance.
There is far more to the embedded market than phones and servers.
The support issues alone were killers. Think about the
Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the five-page flimsy you got with a micro. The customers were willing to
accept cr*p from a small startup, but wouldn't put up with it from IBM
or DEC.
Does anybody have an estimate of how many CPUs humanity has made so far?
Using UNIX faced stiff competition from AT&T's internal IT people, who
wanted to run DEC's operating systems on all PDP-11 within the company (basically, they wanted to kill UNIX).
But the _real_ killer application for UNIX wasn't writing patents, it
was phototypesetting speeches for the CEO of AT&T, who, for reasons of vanity, did not want to wear glasses, and it was possible to scale the
output of the phototypesetter so he would be able to read them.
Of all the major OSes for Alpha, Windows NT was the only one
that couldn’t take advantage of the 64-bit architecture.
Peter Flass <Peter@Iron-Spring.com> schrieb:
The support issues alone were killers. Think about the
Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
five-page flimsy you got with a micro. The customers were willing to
accept cr*p from a small startup, but wouldn't put up with it from IBM
or DEC.
Using UNIX faced stiff competition from AT&T's internal IT people,
who wanted to run DEC's operating systems on all PDP-11 within
the company (basically, they wanted to kill UNIX). They pointed
towards the large amount of documentation that DEC provided, compared
to the low amount of UNIX, as proof of superiority. The UNIX people
saw it differently...
But the _real_ killer application for UNIX wasn't writing patents,
it was phototypesetting speeches for the CEO of AT&T, who, for
reasons of vanity, did not want to wear glasses, and it was possible
to scale the output of the phototypesetter so he would be able
to read them.
After somebody pointed out that having confidential speeches on
one of the most well-known machines in the world, where loads of
people had dial-up access, was not a good idea, his secretary got
her own PDP-11 for that.
And with support from that high up, the project flourished.
Not aware of any platforms that do/did ILP64.
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
On Tue, 5 Aug 2025 17:24:34 +0200, Terje Mathisen wrote:
... the problem was all the programs ported from unix which assumed
that any negative return value was a failure code.
If the POSIX API spec says a negative return for a particular call is an
error, then a negative return for that particular call is an error.
DEC was bigger in the minicomputer market. If DEC could have offered
an open-standard machine, that could have offered serious competition
to IBM. But what OS would they have used? They were still dominated
by Unix-haters then.
On Mon, 4 Aug 2025 18:16:45 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
The claim by John Savard was that the VAX "was a good match to the
technology *of its time*". It was not. It may have been a good
match for the beliefs of the time, but that's a different thing.
The evidence of the 801 is that it did not deliver until more than a
decade later. And the variant that delivered was quite different from
the original 801.
Actually, it can be argued that the 801 didn't deliver until more than
15 years later.
[RISC] didn't really make sense until main
memory systems got a lot faster.
In article <106uqej$36gll$3@dont-email.me>,
Thomas Koenig <tkoenig@netcologne.de> wrote:
Peter Flass <Peter@Iron-Spring.com> schrieb:
The support issues alone were killers. Think about the
Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
five-page flimsy you got with a micro. The customers were willing to
accept cr*p from a small startup, but wouldn't put up with it from IBM
or DEC.
Using UNIX faced stiff competition from AT&T's internal IT people,
who wanted to run DEC's operating systems on all PDP-11 within
the company (basically, they wanted to kill UNIX). They pointed
towards the large amount of documentation that DEC provided, compared
to the low amount of UNIX, as proof of superiority. The UNIX people
saw it differently...
I've never heard this before, and I do not believe that it is
true. Do you have a source?
The same happened to some extent with the early amd64 machines, which
ended up running 32bit Windows and applications compiled for the i386
ISA. Those processors were successful mostly because they were fast at
running i386 code (with the added marketing benefit of being "64bit
ready"): it took 2 years for MS to release a matching OS.
BGB <cr88192@gmail.com> writes:
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
Of course int16_t uint16_t int32_t uint32_t
On what keywords should these types be based? That's up to the
implementor. In C23 one could
typedef signed _BitInt(16) int16_t;
etc. Around 1990, one would have just followed the example of "long
long" of accumulating several modifiers. I would go for 16-bit
"short" and 32-bit "long short".
- anton
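To illustrate, a sketch of the fixed-width aliases such an ILP64
compiler vendor might have shipped circa 1990, following the keyword
choices above; the "long short" lines are the hypothetical extension
and are not valid C on any real compiler:

/* Hypothetical ILP64 vendor header: int is 64 bits, short is 16. */
typedef signed char         int8_t;
typedef unsigned char       uint8_t;
typedef short               int16_t;    /* assumed 16-bit short */
typedef unsigned short      uint16_t;
/* typedef long short          int32_t;     hypothetical "long short" */
/* typedef unsigned long short uint32_t;    hypothetical "long short" */
typedef int                 int64_t;    /* ILP64: 64-bit int */
typedef unsigned int        uint64_t;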
In any case, RISCs delivered, starting in 1986.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Not aware of any platforms that do/did ILP64.
AFAIK the Cray-1 (1976) was the first 64-bit machine, ...
De Castro had had a big success with a simple load-store
architecture, the Nova. He did that to reduce CPU complexity
and cost, to compete with DEC and its PDP-8. (Byte addressing
was horrible on the Nova, though).
Now, assume that, as a time traveler wanting to kick off an early
RISC revolution, you are not allowed to reveal that you are a time
traveler (which would have larger effects than just a different
computer architecture). What do you do?
a) You go to DEC
b) You go to Data General
c) You found your own company
On 8/6/2025 6:05 AM, Anton Ertl wrote:
BGB <cr88192@gmail.com> writes:
If 'int' were 64-bits, then what about 16 and/or 32 bit types.
short short?
long short?
Of course int16_t uint16_t int32_t uint32_t
Well, assuming a post-C99 world.
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Not aware of any platforms that do/did ILP64.
AFAIK the Cray-1 (1976) was the first 64-bit machine, ...
The IBM 7030 STRETCH was the first 64 bit machine, shipped in 1961,
but I would be surprised if anyone had written a C compiler for it.
It was bit addressable but memories in those days were so small that a full bit
address was only 24 bits. So if I were writing a C compiler, pointers and ints
would be 32 bits, char 8 bits, long 64 bits.
(There is a thing called STRETCH C Compiler but it's completely unrelated.)
Even if I am allowed to reveal that I am a time traveler, that may not
help; how would I prove it?
It was bit addressable but memories in those days were so small that a full bit
address was only 24 bits. So if I were writing a C compiler, pointers and ints
would be 32 bits, char 8 bits, long 64 bits.
(There is a thing called STRETCH C Compiler but it's completely unrelated.)
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too, and it seems like all it does is drastically shrink your address
space and complexify instruction and operand fetch to (maybe) save a few
bytes.
According to Peter Flass <Peter@Iron-Spring.com>:
It was bit addressable but memories in those days were so small that a full bit
address was only 24 bits. So if I were writing a C compiler, pointers and ints
would be 32 bits, char 8 bits, long 64 bits.
(There is a thing called STRETCH C Compiler but it's completely unrelated.)
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too, and it seems like all it does is drastically shrink your address
space and complexify instruction and operand fetch to (maybe) save a few
bytes.
STRETCH had a severe case of second system syndrome, and was full of
complex features that weren't worth the effort and it was impressive
that IBM got it to work and to run as fast as it did.
In that era memory was expensive, and usually measured in K, not M.
The idea was presumably to pack data as tightly as possible.
In the 1970s I briefly used a B1700 which was bit addressable and had reloadable
microcode, so COBOL programs used the COBOL instruction set, FORTRAN programs
used the FORTRAN instruction set, and so forth, with each one having whatever
word or byte sizes they wanted. In retrospect it seems like a lot of
premature optimization.
For comparison:
SPARC: Berkeley RISC research project between 1980 and 1984; <https://en.wikipedia.org/wiki/Berkeley_RISC> does not mention the IBM
801 as inspiration, but a 1978 paper by Tanenbaum. Samples for RISC-I
in May 1982 (but could only run at 0.5MHz). No date for the completion
of RISC-II, but given that the research project ended in 1984, it was probably at that time. Sun developed Berkeley RISC into SPARC, and the
first SPARC machine, the Sun-4/260 appeared in July 1987 with a 16.67MHz processor.
The 3B was an absolute dog. We had a couple at ACC, because we were
providing device drivers or something to an ATT project for a Federal
agency.
Weren’t they designed specifically for Telco use? I remember a lecturer telling us they were capable of five-9s uptime or something of that order.
We also had first an 11/70 and later an 11/780 running 4BSD. The
BSD systems were pretty snappy.
And we had an 11/780 for the business side, running VMS, And a VMS
11/750 for engineering, which was not as well liked as the BSD until
we got the Wollongong overlay so we could network it to the BSD
system.
Did the users do all their work via SET HOST? ;)
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Not aware of any platforms that do/did ILP64.
AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
Cray-1 and successors implemented, as far as I can determine
type bits
char 8
short int 64
int 64
long int 64
pointer 64
AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
Cray-1 and successors implemented, as far as I can determine
type bits
char 8
short int 64
int 64
long int 64
pointer 64
Not having a 16-bit integer type and not having a 32-bit integer type
would make it very hard to adapt portable code, such as TCP/IP protocol
processing.
AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
Cray-1 and successors implemented, as far as I can determine
type bits
char 8
short int 64
int 64
long int 64
pointer 64
Not having a 16-bit integer type and not having a 32-bit integer type
would make it very hard to adapt portable code, such as TCP/IP protocol
processing.
I'd think this was obvious, but if the code depends on word sizes and doesn't declare its variables to use those word sizes, I don't think "portable" is the
right term.
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too, and it seems like all it does is drastically shrink your address
space and complexify instruction and operand fetch to (maybe) save a few bytes.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Not aware of any platforms that do/did ILP64.
AFAIK the Cray-1 (1976) was the first 64-bit machine ...
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Of all the major OSes for Alpha, Windows NT was the only one that
couldn’t take advantage of the 64-bit architecture.
Actually, Windows took good advantage of the 64-bit architecture:
"64-bit Windows was initially developed on the Alpha AXP." <https://learn.microsoft.com/en-us/previous-versions/technet-magazine/cc718978(v=msdn.10)>
On Wed, 6 Aug 2025 00:53:32 -0000 (UTC), Lawrence D'Oliveiro wrote:
On Tue, 5 Aug 2025 12:52:38 -0000 (UTC), Lars Poulsen wrote:
And we had an 11/780 for the business side, running VMS, And a VMS
11/750 for engineering, which was not as well liked as the BSD until
we got the Wollongong overlay so we could network it to the BSD
system.
Did the users do all their work via SET HOST? ;)
Of course not - how would you do that from a BSD system?
CP/M owes a lot to the DEC lineage, although it dispenses with some
of the more tedious mainframe-isms - e.g. the RUN [program]
[parameters] syntax vs. just treating executable files on disk as
commands in themselves.)
On Wed, 06 Aug 2025 14:00:56 GMT, Anton Ertl wrote:
For comparison:
SPARC: Berkeley RISC research project between 1980 and 1984;
<https://en.wikipedia.org/wiki/Berkeley_RISC> does not mention the IBM
801 as inspiration, but a 1978 paper by Tanenbaum. Samples for RISC-I
in May 1982 (but could only run at 0.5MHz). No date for the completion
of RISC-II, but given that the research project ended in 1984, it was
probably at that time. Sun developed Berkeley RISC into SPARC, and the
first SPARC machine, the Sun-4/260 appeared in July 1987 with a 16.67MHz
processor.
The Katevenis thesis on RISC-II contains a timeline on p. 6; it lists fabrication in spring '83, with testing during summer '83.
There is also a bibliography entry of an informal discussion with John
Cocke at Berkeley about the 801 in June 1983.
On 8/6/25 09:47, Anton Ertl wrote:
Even if I am allowed to reveal that I am a time traveler, that may not
help; how would I prove it?
I'm a time-traveler from the 1960s!
On Wed, 6 Aug 2025 08:28:03 -0700, John Ames wrote:
CP/M owes a lot to the DEC lineage, although it dispenses with some
of the more tedious mainframe-isms - e.g. the RUN [program]
[parameters] syntax vs. just treating executable files on disk as
commands in themselves.)
It added its own misfeatures, though. Like single-letter device names,
but only for disks. Non-file-structured devices were accessed via “reserved” file names, which continue to bedevil Microsoft Windows to this day, aggravated by a totally perverse extension of the concept to
paths with hierarchical directory names.
There is a citation to Cocke as "private communication" in 1980 by
Patterson in The Case for the Reduced Instruction Set Computer,
1980.
"REASONS FOR INCREASED COMPLEXITY
Why have computers become more complex? We can think of several
reasons: Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began with the transition from the 701 to the 709
[Cocke80]. The 701 CPU was about ten times as fast as the core main
memory; this made any primitives that were implemented as
subroutines much slower than primitives that were instructions. Thus
the floating point subroutines became part of the 709 architecture
with dramatic gains. Making the 709 more complex resulted in an
advance that made it more cost-effective than the 701. Since then,
many "higher-level" instructions have been added to machines in an
attempt to improve performance. Note that this trend began because
of the imbalance in speeds; it is not clear that architects have
asked themselves whether this imbalance still holds for their
designs."
["Followup-To:" header set to comp.arch.]
On 2025-08-06, John Levine <johnl@taugh.com> wrote:
AFAIK the Cray-1 (1976) was the first 64-bit machine, and C for the
Cray-1 and successors implemented, as far as I can determine
type bits
char 8
short int 64
int 64
long int 64
pointer 64
Not having a 16-bit integer type and not having a 32-bit integer type
would make it very hard to adapt portable code, such as TCP/IP protocol
processing.
I'd think this was obvious, but if the code depends on word sizes and doesn't
declare its variables to use those word sizes, I don't think "portable" is the
right term.
My concern is: how do you express your desire for having e.g. an int16?
All the portable code I know defines int8, int16, int32 by means of a
typedef that adds an appropriate alias for each of these back to a
native type. If "short" is 64 bits, how do you define a 16-bit type?
Or did the compiler have native types __int16 etc?
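For reference, such a portability header looks roughly like this; it can
only alias existing native types, which is exactly why it breaks down
here (the widths in the comments are assumptions about a Cray-style
target):

/* Typical portability header of the era (illustrative names). */
typedef signed char int8;    /* 8 bits                          */
typedef short       int16;   /* ...but 64 bits on this target!  */
typedef int         int32;   /* ...also 64 bits!                */
/* With no native 16- or 32-bit type (and no compiler-specific
 * __int16), there is nothing correct to put on the right-hand side. */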
Thomas Koenig <tkoenig@netcologne.de> writes:
De Castro had had a big success with a simple load-store
architecture, the Nova. He did that to reduce CPU complexity
and cost, to compete with DEC and its PDP-8. (Byte addressing
was horrible on the Nova, though).
The PDP-8, and its 16-bit followup, the Nova, may be load/store, but
it is not a register machine nor byte-addressed, while the PDP-11 is,
and the RISC-VAX would be, too.
Now, assume that, as a time traveler wanting to kick off an early
RISC revolution, you are not allowed to reveal that you are a time
traveler (which would have larger effects than just a different
computer architecture). What do you do?
a) You go to DEC
b) You go to Data General
c) You found your own company
Even if I am allowed to reveal that I am a time traveler, that may not
help; how would I prove it?
Yes, convincing people in the mid-1970s to bet the company on RISC is
a hard sell; that's why I asked for "a magic wand that would convince
the DEC management and workforce that I know how to design their next
architecture, and how to compile for it" in
<2025Mar1.125817@mips.complang.tuwien.ac.at>.
Some arguments that might help:
Complexity in CISC and how it breeds complexity elsewhere; e.g., the interaction of having more than one data memory access per
instruction, virtual memory, and precise exceptions.
How the CDC 6600 achieved performance (pipelining) and how non-complex
its instructions are.
I guess I would read through RISC-vs-CISC literature before entering
the time machine in order to have some additional arguments.
Concerning your three options, I think it will be a problem in any
case. Data General's first bet was on FHP, a microcoded machine with user-writeable microcode,
so maybe even more in the wrong direction
than VAX; I can imagine a high-performance OoO VAX implementation, but
for an architecture with exposed microcode like FHP an OoO
implementation would probably be pretty challenging. The backup
project that eventually came through was also a CISC.
Concerning founding ones own company, one would have to convince
venture capital, and then run the RISC of being bought by one of the
big players, who buries the architecture. And even if you survive,
you then have to build up the whole thing: production, marketing,
sales, software support, ...
In any case, the original claim was about the VAX, so of course the
question at hand is what DEC could have done instead.
- anton
There is a citation to Cocke as "private communication" in 1980 by
Patterson in The Case for the Reduced Instruction Set Computer, 1980.
"REASONS FOR INCREASED COMPLEXITY
Why have computers become more complex? We can think of several reasons:
Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began
with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about
ten times as fast as the core main memory; this made any primitives that
were implemented as subroutines much slower than primitives that were
instructions. Thus the floating point subroutines became part of the 709
architecture with dramatic gains. Making the 709 more complex resulted
in an advance that made it more cost-effective than the 701. Since then,
many "higher-level" instructions have been added to machines in an attempt
to improve performance. Note that this trend began because of the imbalance
in speeds; it is not clear that architects have asked themselves whether
this imbalance still holds for their designs."
EricP <ThatWouldBeTelling@thevillage.com> writes:
There is a citation to Cocke as "private communication" in 1980 by
Patterson in The Case for the Reduced Instruction Set Computer, 1980.
"REASONS FOR INCREASED COMPLEXITY
Why have computers become more complex? We can think of several reasons:
Speed of Memory vs. Speed of CPU. John Cocke says that the complexity began
with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about
ten times as fast as the core main memory; this made any primitives that
were implemented as subroutines much slower than primitives that were
instructions. Thus the floating point subroutines became part of the 709
architecture with dramatic gains. Making the 709 more complex resulted
in an advance that made it more cost-effective than the 701. Since then,
many "higher-level" instructions have been added to machines in an attempt
to improve performance. Note that this trend began because of the imbalance
in speeds; it is not clear that architects have asked themselves whether
this imbalance still holds for their designs."
At the start of this thread
<2025Jul29.104514@mips.complang.tuwien.ac.at>, I made exactly this
argument about the relation between memory speed and clock rate. In
that posting, I wrote:
|my guess is that in the VAX 11/780 timeframe, 2-3MHz DRAM access
|within a row would have been possible. Moreover, the VAX 11/780 has a
|cache
In the meantime, this discussion and some additional searching has
unearthed that the VAX 11/780 memory subsystem has 600ns main memory
cycle time (apparently without contiguous-access (row) optimization),
with the cache lowering the average memory cycle time to 290ns.
On 8/6/25 10:25, John Levine wrote:
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Not aware of any platforms that do/did ILP64.
AFAIK the Cray-1 (1976) was the first 64-bit machine, ...
The IBM 7030 STRETCH was the first 64 bit machine, shipped in 1961,
but I would be surprised if anyone had written a C compiler for it.
It was bit addressable but memories in those days were so small that a full bit
address was only 24 bits. So if I were writing a C compiler, pointers and ints
would be 32 bits, char 8 bits, long 64 bits.
(There is a thing called STRETCH C Compiler but it's completely unrelated.)
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too, and it seems like all it does is drastically shrink your address
space and complexify instruction and operand fetch to (maybe) save a few
bytes.
That is one of the things I find astonishing - how a company like
DG grew from a kitchen-table affair to the size they had.
Bit addressing, presumably combined with an easy way to mask the
results/pick an arbitrary number of bits less than or equal to register
width, makes it easier to implement compression/decompression/codecs.
However, since the only thing needed to do the same on current CPUs is a single shift after an aligned load, this feature costs far too much in reduced address space compared to what you gain.
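A minimal sketch of that shift-after-load technique (the helper name and
little-endian bit packing are illustrative assumptions; the buffer is
assumed to be padded so the trailing byte loads stay in bounds):

#include <stddef.h>
#include <stdint.h>

/* Extract a field of len <= 25 bits starting at an arbitrary bit
 * position: assemble a 32-bit window with ordinary byte loads, then
 * one shift and one mask. No bit-addressable memory required. */
static inline uint32_t get_bits(const uint8_t *buf, size_t bitpos, unsigned len)
{
    size_t byte = bitpos >> 3;
    uint32_t window = (uint32_t)buf[byte]
                    | (uint32_t)buf[byte + 1] << 8
                    | (uint32_t)buf[byte + 2] << 16
                    | (uint32_t)buf[byte + 3] << 24;
    return (window >> (bitpos & 7)) & ((1u << len) - 1);
}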
It added its own misfeatures, though.
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too
That disparity between CPU and RAM speeds is even greater today than
it was back then. Yet we have moved away from adding ever-more-complex instructions, and are getting better performance with simpler ones.
How come? Caching.
On Thu, 7 Aug 2025 02:22:05 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
That disparity between CPU and RAM speeds is even greater today than
it was back then. Yet we have moved away from adding ever-more-complex
instructions, and are getting better performance with simpler ones.
How come? Caching.
Yes, but complex instructions also make pipelining and out-of-order
execution much more difficult - to the extent that, as far back as the Pentium Pro, Intel has had to implement the x86 instruction set as a microcoded program running on top of a simpler RISC architecture.
However, in the case of the IBM STRETCH, I think there's a good
excuse: If you go from word addressing to subunit addressing (not sure
why Stretch went there, however; does a supercomputer need that?), why
stop at characters (especially given that character size at the time
was still not settled)? Why not continue down to bits?
Peter Flass <Peter@Iron-Spring.com> writes:
[IBM STRETCH bit-addressable]
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too
One might come to think that it's the signature of overambitious
projects that eventually fail.
However, in the case of the IBM STRETCH, I think there's a good
excuse: If you go from word addressing to subunit addressing (not sure
why Stretch went there, however; does a supercomputer need that?), why
stop at characters (especially given that character size at the time
was still not settled)? Why not continue down to bits?
The S/360 then found the compromise that conquered the world: Byte
addressing with 8-bit bytes.
Why iAPX432 went for bit addressing at a time when byte addressing and
the 8-bit byte was firmly established, over ten years after the S/360
and 5 years after the PDP-11 is a mystery, however.
I don't get why bit-addressability was a thing? Intel iAPX 432 had it,
too, and it seems like all it does is drastically shrink your address
space and complexify instruction and operand fetch to (maybe) save a few
bytes.
Bit addressing, presumably combined with an easy way to mask the
results/pick an arbitrary number of bits less than or equal to register
width, makes it easier to implement compression/decompression/codecs.
John Ames wrote:
On Thu, 7 Aug 2025 02:22:05 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
That disparity between CPU and RAM speeds is even greater today than
it was back then. Yet we have moved away from adding ever-more-complex
instructions, and are getting better performance with simpler ones.
How come? Caching.
Yes, but complex instructions also make pipelining and out-of-order
execution much more difficult - to the extent that, as far back as the
Pentium Pro, Intel has had to implement the x86 instruction set as a
microcoded program running on top of a simpler RISC architecture.
That's simply wrong: the PPro had close to zero microcode actually running in any user program.
What it did have was decoders that would look at complex operations and
spit out two or more basic operations, like load+execute.
Later on we've seen the opposite where cmp+branch could be combined into
a single internal op.
Terje
Dan Cross <cross@spitfire.i.gajendra.net> schrieb:
In article <106uqej$36gll$3@dont-email.me>,
Thomas Koenig <tkoenig@netcologne.de> wrote:
Peter Flass <Peter@Iron-Spring.com> schrieb:
The support issues alone were killers. Think about the
Orange/Grey/(Blue?) Wall of VAX documentation, and then look at the
five-page flimsy you got with a micro. The customers were willing to
accept cr*p from a small startup, but wouldn't put up with it from IBM >>>> or DEC.
Using UNIX faced stiff competition from AT&T's internal IT people,
who wanted to run DEC's operating systems on all PDP-11 within
the company (basically, they wanted to kill UNIX). They pointed
towards the large amount of documentation that DEC provided, compared
to the low amount of UNIX, as proof of superiority. The UNIX people
saw it differently...
I've never heard this before, and I do not believe that it is
true. Do you have a source?
Hmm... I _think_ it was on a talk given by the UNIX people,
but I may be misremembering.
On 8/6/25 22:29, Thomas Koenig wrote:
That is one of the things I find astonishing - how a company like DG
grew from a kitchen-table affair to the size they had.
Recent history is littered with companies like this.
On Thu, 7 Aug 2025 17:52:05 +0200, Terje Mathisen
<terje.mathisen@tmsw.no> wrote:
John Ames wrote:
The PPro had close to zero microcode actually running in any user program.
What it did have was decoders that would look at complex operations and
spit out two or more basic operations, like load+execute.
Later on we've seen the opposite where cmp+branch could be combined into
a single internal op.
Terje
You say "tomato". 8-)
It's still "microcode" for some definition ... just not a classic >"interpreter" implementation where a library of routines implements
the high level instructions.
The decoder converts x86 instructions into traces of equivalent wide
micro instructions which are directly executable by the core. The
traces then are cached separately [there is a $I0 "microcache" below
$I1] and can be re-executed (e.g., for loops) as long as they remain
in the microcache.
I guess they thought that 32 address bits left plenty to spare for
something like this. But I think it just shortened the life of their
32-bit architecture by that much more.
MAP_32BIT is only used on x86-64 on Linux, and was originally
a performance hack for allocating thread stacks: apparently, it
was cheaper to do a thread switch with a stack below the 4GiB
barrier (sign extension artifact maybe? Who knows...). But it's
no longer required for that. But there's no indication that it
was for supporting ILP32 on a 64-bit system.
MAP_32BIT is only used on x86-64 on Linux, and was originally
a performance hack for allocating thread stacks: apparently, it
was cheaper to do a thread switch with a stack below the 4GiB
barrier (sign extension artifact maybe? Who knows...). But it's
no longer required for that. But there's no indication that it
was for supporting ILP32 on a 64-bit system.
Reading up about x32, it requires quite a bit more than just
allocating everything in the low 2GB.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
MAP_32BIT is only used on x86-64 on Linux, and was originally
a performance hack for allocating thread stacks: apparently, it
was cheaper to do a thread switch with a stack below the 4GiB
barrier (sign extension artifact maybe? Who knows...). But it's
no longer required for that. But there's no indication that it
was for supporting ILP32 on a 64-bit system.
Reading up about x32, it requires quite a bit more than just
allocating everything in the low 2GB.
The primary issue on x86 was with the API definitions. Several
legacy API declarations used signed integers (int) for
address parameters. This limited addresses to 2GB on
a 32-bit system.
https://en.wikipedia.org/wiki/Large-file_support
The Large File Summit (I was one of the Unisys reps at the LFS)
specified a standard way to support files larger than 2GB
on 32-bit systems that used signed integers for file offsets
and file size.
Also, https://en.wikipedia.org/wiki/2_GB_limit
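A minimal sketch of the transitional-API idea the LFS standardized,
assuming glibc's _FILE_OFFSET_BITS mechanism (the file name is
illustrative):

#define _FILE_OFFSET_BITS 64   /* glibc: makes off_t 64-bit even on 32-bit hosts */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.dat", O_RDONLY);
    if (fd < 0)
        return 1;
    /* Seek to 3 GiB: past the 2GB sign boundary of a 32-bit off_t. */
    off_t pos = lseek(fd, (off_t)3 << 30, SEEK_SET);
    if (pos == (off_t)-1)
        perror("lseek");
    close(fd);
    return 0;
}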
Also, IIRC, the major point of X32 was that it would narrow pointers and similar back down to 32 bits, requiring special versions of any shared libraries or similar.
But, it is unattractive to have both 32 and 64 bit versions of all the SO's.
In comp.arch BGB <cr88192@gmail.com> wrote:
Also, IIRC, the major point of X32 was that it would narrow pointers and
similar back down to 32 bits, requiring special versions of any shared
libraries or similar.
But, it is unattractive to have both 32 and 64 bit versions of all the SO's.
We have done something similar for years at Red Hat: not X32, but
x86_32, and it was pretty easy. If you're building a 32-bit OS anyway
(which we were) all you have to do is copy all 32-bit libraries from
one repo to the other.
I thought the AArch64 ILP32 design was pretty neat, but no one seems
to have been interested. I guess there wasn't an advantage worth the
effort.
To be efficient, a RISC needs a full-width (presumably 32 bit)
external data bus, plus a separate address bus, which should at
least be 26 bits, better 32. A random ARM CPU I looked at at
bitsavers had 84 pins, which sounds reasonable.
Building an ARM-like instead of a 68000 would have been feasible,
but the resulting systems would have been more expensive (the
68000 had 64 pins).
So... a strategy could have been to establish the concept with
minicomputers, to make money (the VAX sold big) and then move
aggressively towards microprocessors, trying the disruptive move
towards workstations within the same company (which would be HARD).
As for the PC - a scaled-down, cheap, compatible, multi-cycle per
instruction microprocessor could have worked for that market,
but it is entirely unclear to me what this would / could
have done to the PC market, if IBM could have been prevented
from gaining such market dominance.
On Tue, 5 Aug 2025 21:01:20 -0000 (UTC), Thomas Koenig wrote:
So... a strategy could have been to establish the concept with
minicomputers, to make money (the VAX sold big) and then move
aggressively towards microprocessors, trying the disruptive move towards
workstations within the same company (which would be HARD).
None of the companies which tried to move in that direction were
successful. The mass micro market had much higher volumes and lower
margins, and those accustomed to lower-volume, higher-margin operation
simply couldn’t adapt.
Thomas Koenig <tkoenig@netcologne.de> writes:
To be efficient, a RISC needs a full-width (presumably 32 bit)
external data bus, plus a separate address bus, which should at
least be 26 bits, better 32. A random ARM CPU I looked at at
bitsavers had 84 pins, which sounds reasonable.
Building an ARM-like instead of a 68000 would have been feasible,
but the resulting systems would have been more expensive (the
68000 had 64 pins).
One could have done a RISC-VAX microprocessor with 16-bit data bus and
24-bit address bus.
Thomas Koenig <tkoenig@netcologne.de> writes:
<snip>
So how could one capture the PC market? The RISC-VAX would probably
have been too expensive for a PC, even with an 8-bit data bus and a
reduced instruction set, along the lines of RV32E. Or maybe that
would have been feasible, in which case one would provide
8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
porting easier. And then try to sell it to IBM Boca Raton.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Thomas Koenig <tkoenig@netcologne.de> writes:
<snip>
So how could one capture the PC market? The RISC-VAX would probably
have been too expensive for a PC, even with an 8-bit data bus and a
reduced instruction set, along the lines of RV32E. Or maybe that
would have been feasible, in which case one would provide
8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
porting easier. And then try to sell it to IBM Boca Raton.
https://en.wikipedia.org/wiki/Rainbow_100
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Thomas Koenig <tkoenig@netcologne.de> writes:
Building an ARM-like instead of a 68000 would have been feasible,
but the resulting systems would have been more expensive (the
68000 had 64 pins).
One could have done a RISC-VAX microprocessor with 16-bit data bus and
24-bit address bus.
LSI11?
In article <2025Aug13.194659@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Thomas Koenig <tkoenig@netcologne.de> writes:
<snip>
So how could one capture the PC market? The RISC-VAX would probably
have been too expensive for a PC, even with an 8-bit data bus and a
reduced instruction set, along the lines of RV32E. Or maybe that
would have been feasible, in which case one would provide
8080->reduced-RISC-VAX and 6502->reduced-RISC-VAX assemblers to make
porting easier. And then try to sell it to IBM Boca Raton.
https://en.wikipedia.org/wiki/Rainbow_100
That's completely different from what I suggest above, and DEC
obviously did not capture the PC market with that.
They did manage to crack the college market some, where CS departments
had DEC hardware anyway. I know USC (original) had a Rainbow computer
lab circa 1985. That "in" didn't translate to anything else, though.
Terje Mathisen <terje.mathisen@tmsw.no> writes:
Stephen Fuld wrote:
On 8/4/2025 8:32 AM, John Ames wrote:
snip
This notion that the only advantage of a 64-bit architecture is a large
address space is very curious to me. Obviously that's *one* advantage,
but while I don't know the in-the-field history of heavy-duty business/
scientific computing the way some folks here do, I have not gotten the
impression that a lot of customers were commonly running up against the
4 GB limit in the early '90s;
Not exactly the same, but I recall an issue with Windows NT where it
initially divided the 4GB address space in 2 GB for the OS, and 2 GB for
users. Some users were "running out of address space", so Microsoft
came up with an option to reduce the OS space to 1 GB, thus allowing up
to 3 GB for users. I am sure others here will know more details.
Any program written to Microsoft/Windows spec would work transparently
with a 3:1 split; the problem was all the programs ported from unix
which assumed that any negative return value was a failure code.
The only interfaces that I recall this being an issue for were
mmap(2) and lseek(2). The latter was really related to maximum
file size (although it applied to /dev/[k]mem and /proc/<pid>/mem
as well). The former was handled by the standard specifying
MAP_FAILED as the return value.
That said, Unix generally defined -1 as the return value for all
other system calls, and code that checked for "< 0" instead of
-1 when calling a standard library function or system call was fundamentally broken.
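A short sketch of that distinction (MAP_ANONYMOUS is the common
Linux/BSD spelling; the helper is illustrative):

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* mmap's documented failure value is MAP_FAILED ((void *)-1), not
 * "any negative-looking pointer"; a legitimate mapping in the upper
 * half of the address space would wrongly trip a "< 0" test. */
static void *map_page(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {      /* the check the standard specifies */
        perror("mmap");
        return NULL;
    }
    /* broken: if ((long)p < 0) ... would reject valid high mappings */
    return p;
}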
The LSI11 uses four 40-pin chips from the MCP-1600 chipset (which is fascinating in itself <https://en.wikipedia.org/wiki/MCP-1600>) for a total of 160 pins; and it supported only 16 address bits without extra chips. That was certainly even more expensive (and also slower and
less capable) than what I suggest above, but it was several years
earlier, and what I envision was not possible in one chip then.
Maybe compare 808x to something more in its weight class? The 8-bit
8080 was 1974, 16-bit 8086 1978, 16/8-bit 8088 1979.
The DEC F-11 (~1979) and J-11 (~1982) microprocessor designs were
capable of 22 bit addressing on a single 40-pin carrier.
According to <aph@littlepinkcloud.invalid>:
In comp.arch BGB <cr88192@gmail.com> wrote:
Also, IIRC, the major point of X32 was that it would narrow pointers and >>> similar back down to 32 bits, requiring special versions of any shared
libraries or similar.
But, it is unattractive to have both 32 and 64 bit versions of all the SO's.
We have done something similar for years at Red Hat: not X32, but
x86_32, and it was pretty easy. If you're building a 32-bit OS anyway
(which we were) all you have to do is copy all 32-bit libraries from
one repo to the other.
FreeBSD does the same thing. The 32 bit libraries are installed by default on 64 bit systems because, by current standards, they're not very big.
I've stopped installing them because I know I don't have any 32 bit apps
left but on systems with old packages, who knows?