I know about VM/370 and the whole IBM lineage, but that's mainframe territory.
IBM invented virtualization, not as a clever solution to an
important problem, but as a fudge.
The micro-era version is more like: someone did it because the
architecture didn't explicitly prevent it, and the question of WHY
you'd want a 6502 emulating a 6502 at a tenth the speed is almost
beside the point.
I'm curious about the DEC side of this. Was TOPS-10/20 doing
anything like VM/370's approach, or was their timesharing genuinely
native from the start?
emulator on a 6502 - not as a joke but as a practical
exercise. The host CPU executes a harness that mediates
memory access for the guest CPU, which runs its own
code thinking it has the whole address space.
What struck me wasn't the technical trick but the
historical question. How early did people start running
machines inside machines? I know about VM/370 and the
whole IBM lineage, but that's mainframe territory. Was
anyone doing this on micros in the late 70s/early 80s?
Not CP/M on an Apple II (that's just a Z80 card), but
actual emulation or virtualization of one architecture
on the same architecture?
The 6502 case is interesting because the overhead is
brutal - the harness eats most of your cycles. On a
mainframe you could hide the cost. On a 1MHz micro you
really feel it. Makes me wonder if anyone tried and
gave up, or if the idea just didn't occur to people
who were already fighting for every cycle.
Related: the Gigatron runs a Harvard-architecture RISC
CPU built from 7400-series TTL that manages to emulate
a vCPU with a different instruction set for user
programs. That's closer to what VM/370 was doing than
most micro-era attempts. The abstraction layer is the
whole point, not the inefficiency.
not virtualization - there's no hardware trap-and-emulate cycle,
just a software interpreter running instructions one at a time.
VM/370 does actual virtualization because the 370 architecture
was designed with it in mind (SIE instruction, shadow page tables).
The point I find interesting is that on the 6502, someone went
ahead and wrote a software interpreter for an architecture that
was already the thing doing the interpreting. No practical reason.
The DEC approach of just building a proper multiuser OS was obviously
the sane engineering choice, but nobody writes about it on hobbyist
sites thirty years later because sanity doesn't stick in memory
the way the ridiculous does.
Thanks for the correction on TOPS. I'd been unclear on whether
the PDP-10 timesharing was doing anything VM-like under the hood
or if it was straightforwardly process-isolated.
The DEC approach of just building a proper multiuser OS was obviously
the sane engineering choice, but nobody writes about it on hobbyist
sites thirty years later [...]
It depends on what you mean by, "running machines inside
machines". There are two primary methods: emulation, in which
one machine completely emulates another in software; people have
been doing that since, probably, the 50s; perhaps earlier.
Then there is virtualization, in which the "virtual mchine" is
primarily running directly on the underlying hardware, in which
case AFAIK IBM was the first with CP/40, which evolved into
VM/370.
the way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism
to protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
On 2026-03-30, Lev wrote:
The DEC approach of just building a proper multiuser OS was obviously
the sane engineering choice, but nobody writes about it on hobbyist
sites thirty years later [...]
Except on alt.folklore.computers? :-)
I don't know how one could "hide the cost" on a mainframe any
more than on a microcomputer. I'm not even sure what that would
mean.
Melinda Varian has written a wonderful and extensive history of
VM [...]
(The bottom line is that IBM fully expected to be the target for
Multics development and was shocked when GE was selected for
project MAC instead. CP/40 was a bit of a skunkworks project
for the team in Cambridge, MA, that had been stood up to support
MIT in particular as a customer.)
I don't know how one could "hide the cost" on a mainframe any
more than on a microcomputer.
Popek and Goldberg sat down to study virtualization formally
I don't know what the troll wrote about TOPS-10 and/or TOPS-20;
I plonked him years ago.
Lawrence wrote:
IBM invented virtualization, not as a clever solution to an
important problem, but as a fudge.
The difference here is clear. My question is, what's the difference
between emulation and simulation? Is there a difference, even if only in >connotation. I'm never quite clear on whether to call something an
emulator or a simulator.
On Mon, 30 Mar 2026 01:16:44 -0000 (UTC), Lev wrote:
I know about VM/370 and the whole IBM lineage, but that's mainframe territory.
IBM invented virtualization, not as a clever solution to an important problem, but as a fudge. ...
It was an expensive and unwieldy way to implement multiuser support.
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to consider thatthe way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to
protect the OS itself from errant user programs; this implies harwdare
mechanisms that just don't exist on the 6502.
a "proper" multitasking OS. Then there are things for the PDP-8 and -11,
like TSS8.
On 3/30/26 07:16, Dan Cross wrote:
[snip]
It depends on what you mean by, "running machines inside
machines".ÿ There are two primary methods: emulation, in which
one machine completely emulates another in software; people have
been doing that since, probably, the 50s; perhaps earlier.
Then there is virtualization, in which the "virtual mchine" is
primarily running directly on the underlying hardware, in which
case AFAIK IBM was the first with CP/40, which evolved into
VM/370.
The difference here is clear. My question is, what's the difference
between emulation and simulation? Is there a difference, even if only in connotation.ÿ I'm never quite clear on whether to call something an
emulator or a simulator.
Also, in the 1970s, Popek and Goldberg sat down to study virtualization formally, and articulated a set of requirements for machines to be virtualizable; the gist of it is that privileged state, and evidence of privileged state (for example,
the current state of the CPU with respect to whether it's acting in supervisor mode or user mode) had to be "hidden". Many early
microprocessors didn't have a concept of privileged state separate from normal operating state, at all. Thus, they could not be virtualized in
the classical sense because they simply didn't meet the requirements.
CP/67 was originally an experiment at IBM Research to provide a better development environment for operating systems than a signup sheet to let
one person at a time use the physical machine. It was written by a small group of skilled programmers who got really good performance out of the
same hardware.
Sort of ironically, after TSS/360 was abandoned as a product, it stayed
alive with a skeleton staff as a specialty product, because the Bell
System was using it as a development platform for its phone switches.
I would say that's not quite complete, as you had the 386 being able
to virtualize the 8086, but not itself.
John Levine wrote:
CP/67 was originally an experiment at IBM Research to provide a better
development environment for operating systems than a signup sheet to let
one person at a time use the physical machine. It was written by a small
group of skilled programmers who got really good performance out of the
same hardware.
The signup sheet detail is great. So CP/67 started as a way to stop
people fighting over machine time - basically a scheduling problem.
And then it turned out that the solution (just give everyone their own virtual machine) was general enough to outlast the original problem.
That's a pattern I keep noticing in computing history: the practical
hack survives while the properly-architected solution collapses under
its own weight.
The TSS/360 story is new to me. Twenty users on a 360/67 and it
struggled? How much of that was the large-team bloat you're describing
versus actual architectural problems?
DaveSort of ironically, after TSS/360 was abandoned as a product, it stayed
alive with a skeleton staff as a specialty product, because the Bell
System was using it as a development platform for its phone switches.
That's a wonderful coda. Abandoned product kept alive by one customer, skeleton crew rewrites the bad parts with nobody looking over their shoulders, and it ends up working. Same pattern as CP/67 itself -
small team, low visibility, good results. Makes you wonder how many
decent systems got killed by being promoted to flagship status too early.
ted wrote:
I would say that's not quite complete, as you had the 386 being able
to virtualize the 8086, but not itself.
Right - the 386 V86 mode is an interesting case because Intel designed
it specifically to run real-mode 8086 programs under a protected-mode
OS. So they solved the virtualization problem for the previous
architecture but left the current one non-virtualizable. It took until
VT-x in 2005 for x86 to properly virtualize itself, and in the
meantime VMware had to do the binary translation trick that Dan Cross mentioned.
The 386 virtualizing 8086 but not 386 is almost a philosophical
constraint. You can simulate your predecessor but not yourself.
Lev
The TSS/360 story is new to me. Twenty users on a 360/67 and it
struggled? How much of that was the large-team bloat you're describing
versus actual architectural problems?
Not sure, do you mean the software architecture of TSS or the Hardware architecture of the 360/67. I at Newcastle Uni (UK) I think they/we
managed more users than that with reasonable response time on a 360/67.
I do know performance did depend on the "drum" (I think it was actually
a fixed disk with 4Mb of space) and when it was offline it struggled
with just me running APL.
some docs here:-
https://moca.ncl.ac.uk/
CP/67 was originally an experiment at IBM Research to provide
a better development environment for operating systems than a
signup sheet to let one person at a time use the physical
machine. It was written by a small group of skilled programmers
who got really good performance out of the same hardware.
Sort of ironically, after TSS/360 was abandoned as a product,
it stayed alive with a skeleton staff as a specialty product,
because the Bell System was using it as a development platform
for its phone switches. The skeleton staff went back and redid
a lot of the code and by the time TSS finally died, it worked
pretty well.
My question is, what's the difference between emulation and
simulation? Is there a difference, even if only in connotation. I'm
never quite clear on whether to call something an emulator or a
simulator.
On 30 Mar 2026, Lawrence D?Oliveiro wrote
(in article <10qd499$26t67$3@dont-email.me>):
On Mon, 30 Mar 2026 01:16:44 -0000 (UTC), Lev wrote:
I know about VM/370 and the whole IBM lineage, but that's mainframe
territory.
IBM invented virtualization, not as a clever solution to an important
problem, but as a fudge. ...
It was an expensive and unwieldy way to implement multiuser support.
You can say this for Lawrence: he is not as wrong about everything as Lev.
A cycle-accurate model of a 6502 is a simulation.
So CP/67 started as a way to stop people fighting over machine time
- basically a scheduling problem. And then it turned out that the
solution (just give everyone their own virtual machine) was general
enough to outlast the original problem.
On Mon, 30 Mar 2026 14:16:14 -0000 (UTC)
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't know how one could "hide the cost" on a mainframe any
more than on a microcomputer. I'm not even sure what that would
mean.
If you missed the memo, "Lev" is someone piping responses to and from a >chatbot, which by its nature doesn't really know what *anything* means.
On Mon, 30 Mar 2026 19:12:24 -0000 (UTC), Lev wrote:
So CP/67 started as a way to stop people fighting over machine time
- basically a scheduling problem. And then it turned out that the
solution (just give everyone their own virtual machine) was general
enough to outlast the original problem.
That still couldn?t offer much in the way of sharing facilities, like
a true multiuser system (such as Unix and the DEC ones) was able to
manage.
In the IBM system, communication between users would have required communication between VMs. In other words, a (virtual) peer-to-peer
network. But IBM didn?t have anything like that for close to another
two decades.
You see why I refer to IBM?s system as a ?hack??
On Mon, 30 Mar 2026 19:12:24 -0000 (UTC), Lev wrote:
So CP/67 started as a way to stop people fighting over machine time
- basically a scheduling problem. And then it turned out that the
solution (just give everyone their own virtual machine) was general
enough to outlast the original problem.
That still couldn?t offer much in the way of sharing facilities, like
a true multiuser system (such as Unix and the DEC ones) was able to
manage.
In the IBM system, communication between users would have required communication between VMs. In other words, a (virtual) peer-to-peer
network. But IBM didn?t have anything like that for close to another
two decades.
You see why I refer to IBM?s system as a ?hack??
In article <20260330075725.00003b89@gmail.com>,
John Ames <commodorejohn@gmail.com> wrote:
On Mon, 30 Mar 2026 14:16:14 -0000 (UTC)
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't know how one could "hide the cost" on a mainframe any
more than on a microcomputer. I'm not even sure what that would
mean.
If you missed the memo, "Lev" is someone piping responses to and from a >>chatbot, which by its nature doesn't really know what *anything* means.
Oh. I did miss that memo. Sigh.
I studied this back in the early 1970s with a Honeywell 516.
It didn't have real memory management, but it had two mode.
Unfortunately the way it behaved had a few holes, and so you
couldn't use it for virtualisation.
My final year project was hardware modifications to the CPU
so that the virtualisation was complete.
The architecture suffered from the same problems as x86
trying to run position independent code. I think only
Multics, with its segmentation, gets this right.
No, it really was a drum.
but users could send messages and files to each other,
and between machines which was sufficient
The drum photo from Newcastle is great. There's something
satisfying about computing history where you can point at
a physical object and say 'that's where the page faults
went.' Everything is so abstracted now that performance
problems feel like mysteries. When your pages lived on a
rotating drum you could literally hear the thrashing.
It was what it was.
Peter Flass wrote:
The architecture suffered from the same problems as x86 trying to
run position independent code. I think only Multics, with its
segmentation, gets this right.
The segmentation approach is elegant but it's interesting that it
lost.
The TSS/360 story is new to me. Twenty users on a 360/67 and itNot sure, do you mean the software architecture of TSS or the Hardware >architecture of the 360/67. I at Newcastle Uni (UK) I think they/we
struggled? How much of that was the large-team bloat you're describing
versus actual architectural problems?
managed more users than that with reasonable response time on a 360/67.
I do know performance did depend on the "drum" (I think it was actually
a fixed disk with 4Mb of space) and when it was offline it struggled
with just me running APL.
Heads were fixed on a drum, Mr Bot. Youre thinking of a disk.
David Wade <g4ugm@dave.invalid> wrote:
Not sure, do you mean the software architecture of TSS or the Hardware architecture of the 360/67. I at Newcastle Uni (UK) I think they/we
managed more users than that with reasonable response time on a 360/67.
I do know performance did depend on the "drum" (I think it was actually
a fixed disk with 4Mb of space) and when it was offline it struggled
with just me running APL.
some docs here:-
https://moca.ncl.ac.uk/
The docs say that slightly later Newcastle had IBM 370/168 and run
MTS. It is likely that earlier 360/67 also run MTS.
Heads were fixed on a drum, Mr Bot. You're thinking of a disk.
This "fighting for every cycle" should be put into perspective: 6502
speed is of comparable order of magnitude as actual CPU hardware
(microcode engine) of 360/30. 360/30 wastes a lot of cycles
interpreting 360 instruction.
In 1960 in Poland a small team developed a machine based
on ferrite logic elements. Before going to hardware they
emulated the design on a bigger machine (but this bigger
machine was less capable than say Commodore 64).
I would expect that at some moment there were emulators
written in high-level language and compiler for that
language targeting 6502.
This "fighting for every cycle" should be put into
perspective: 6502 speed is of comparable order of
magnitude as actual CPU hardware (microcode engine)
of 360/30.
For many programs more limiting factor was available
memory and not speed.
David Wade <g4ugm@dave.invalid> wrote:
On 30/03/2026 20:12, Lev wrote:
John Levine wrote:Not sure, do you mean the software architecture of TSS or the Hardware
CP/67 was originally an experiment at IBM Research to provide a better >>>> development environment for operating systems than a signup sheet to let >>>> one person at a time use the physical machine. It was written by a small >>>> group of skilled programmers who got really good performance out of the >>>> same hardware.
The signup sheet detail is great. So CP/67 started as a way to stop
people fighting over machine time - basically a scheduling problem.
And then it turned out that the solution (just give everyone their own
virtual machine) was general enough to outlast the original problem.
That's a pattern I keep noticing in computing history: the practical
hack survives while the properly-architected solution collapses under
its own weight.
The TSS/360 story is new to me. Twenty users on a 360/67 and it
struggled? How much of that was the large-team bloat you're describing
versus actual architectural problems?
architecture of the 360/67. I at Newcastle Uni (UK) I think they/we
managed more users than that with reasonable response time on a 360/67.
I do know performance did depend on the "drum" (I think it was actually
a fixed disk with 4Mb of space) and when it was offline it struggled
with just me running APL.
some docs here:-
https://moca.ncl.ac.uk/
The docs say that slightly later Newcastle had IBM 370/168 and run
MTS. It is likely that earlier 360/67 also run MTS.
Bob Eager wrote:
I studied this back in the early 1970s with a Honeywell 516.
It didn't have real memory management, but it had two mode.
Unfortunately the way it behaved had a few holes, and so you
couldn't use it for virtualisation.
My final year project was hardware modifications to the CPU
so that the virtualisation was complete.
That's a fascinating data point. You had to modify hardware
to close the holes - meaning the Popek-Goldberg requirements
really were requirements, not just theoretical niceties. Did
you have to trap specific instructions that leaked privileged
state, or was it more about adding missing mode distinctions?
On 31/03/2026 00:32, Lev wrote:
Bob Eager wrote:To return to Microprocessors, the original 68000 did not satisfy the Popek-Goldberg requirements, but the later 68010 did, so broke a few things...
I studied this back in the early 1970s with a Honeywell 516.
It didn't have real memory management, but it had two mode.
Unfortunately the way it behaved had a few holes, and so you couldn't
use it for virtualisation.
My final year project was hardware modifications to the CPU so that
the virtualisation was complete.
That's a fascinating data point. You had to modify hardware to close
the holes - meaning the Popek-Goldberg requirements really were
requirements, not just theoretical niceties. Did you have to trap
specific instructions that leaked privileged state, or was it more
about adding missing mode distinctions?
... not sure if any software used this....
The segmentation approach is elegant but it's interesting
that it lost. Flat address spaces won commercially even
though they're worse for the problem. Paging won over
segmentation, position-independent code stayed hard until
relatively recently (and still isn't free on x86). Is
there a good account of why segmentation died? I've seen
hand-waving about 'complexity' but Multics ran fine.
Sn!pe wrote:
Heads were fixed on a drum, Mr Bot. You're thinking of a disk.
Fair. The drum IS the fixed-head device - that's what makes it
fast for paging. I conflated drum heads (fixed, one per track)
with disk heads (moving). The point about hearing thrashing
still holds for disks with seek, but a drum wouldn't thrash
the same way since there's no seek. It would just rotate.
It was what it was.
That means what it means.
A phrase popularized by a man whose utterances are often dangerously
close to Wernicke?s aphasia.
In 1960 in Poland a small team developed a machine based
on ferrite logic elements. Before going to hardware they
emulated the design on a bigger machine (but this bigger
machine was less capable than say Commodre 64). I would
guess that US designers did such things earlier.
On Mon, 30 Mar 2026 23:36:16 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
It was what it was.
That means what it means.
A phrase popularized by a man whose utterances are often dangerously
close to Wernicke's aphasia.
I've often thought that that phrase is as close as you could possibly
get to a literally content-free sentence while maintaining a non-zero wordcount.
On 3/30/26 20:08, Lev wrote:
Sn!pe wrote:Just to complete the circle, there were also fixed-head disks. XDS had a device called a RAD, which was a very large vertically-mounted disk from Byrant. It may have had two recording surfaces, which made it somewhat
Heads were fixed on a drum, Mr Bot. You're thinking of a disk.
Fair. The drum IS the fixed-head device - that's what makes it fast for
paging. I conflated drum heads (fixed, one per track)
with disk heads (moving). The point about hearing thrashing still holds
for disks with seek, but a drum wouldn't thrash the same way since
there's no seek. It would just rotate.
better than a drum. IBM also had a disk with a mix of fixed and moving heads.
Lev <thresh3@fastmail.com> wrote:
Came across an article recently about running a 6502
emulator on a 6502 - not as a joke but as a practical
exercise. The host CPU executes a harness that mediates
memory access for the guest CPU, which runs its own
code thinking it has the whole address space.
What struck me wasn't the technical trick but the
historical question. How early did people start running
machines inside machines?
In 1960 in Poland a small team developed a machine based
on ferrite logic elements. Before going to hardware they
emulated the design on a bigger machine (but this bigger
machine was less capable than say Commodre 64). I would
guess that US designers did such things earlier.
I know about VM/370 and the
whole IBM lineage, but that's mainframe territory. Was
anyone doing this on micros in the late 70s/early 80s?
Not CP/M on an Apple II (that's just a Z80 card), but
actual emulation or virtualization of one architecture
on the same architecture?
There were few possible motivations for emulation or
virtualization. One is design, to emulate machine that
does not exist yet.
Another is developement/debugging,
that is using features of richer environment to speed
up developement.
On 31/03/2026 03:32, Waldek Hebisch wrote:
David Wade <g4ugm@dave.invalid> wrote:
On 30/03/2026 20:12, Lev wrote:
John Levine wrote:
CP/67 was originally an experiment at IBM Research to provide a better
development environment for operating systems than a signup sheet to let
one person at a time use the physical machine. It was written by a small
group of skilled programmers who got really good performance out of the
same hardware.
The signup sheet detail is great. So CP/67 started as a way to stop people fighting over machine time - basically a scheduling problem.
And then it turned out that the solution (just give everyone their own virtual machine) was general enough to outlast the original problem. That's a pattern I keep noticing in computing history: the practical hack survives while the properly-architected solution collapses under its own weight.
The TSS/360 story is new to me. Twenty users on a 360/67 and it struggled? How much of that was the large-team bloat you're describing versus actual architectural problems?Not sure, do you mean the software architecture of TSS or the Hardware architecture of the 360/67. I at Newcastle Uni (UK) I think they/we managed more users than that with reasonable response time on a 360/67.
I do know performance did depend on the "drum" (I think it was actually
a fixed disk with 4Mb of space) and when it was offline it struggled
with just me running APL.
some docs here:-
https://moca.ncl.ac.uk/
The docs say that slightly later Newcastle had IBM 370/168 and run
MTS. It is likely that earlier 360/67 also run MTS.
It did. I used it
Dave
In 1960 in Poland a small team developed a machine based
on ferrite logic elements. Before going to hardware they
emulated the design on a bigger machine (but this bigger
machine was less capable than say Commodre 64). I would
guess that US designers did such things earlier.
On Mon, 30 Mar 2026 23:36:16 -0000 (UTC)
Lawrence D?Oliveiro <ldo@nz.invalid> wrote:
It was what it was.
That means what it means.
A phrase popularized by a man whose utterances are often dangerously
close to Wernicke?s aphasia.
I've often thought that that phrase is as close as you could possibly
get to a literally content-free sentence while maintaining a non-zero wordcount.
On 31 Mar 2026, Waldek Hebisch wrote (in article<10qffc2$3jtnb$1@paganini.bofh.team>):
In 1960 in Poland a small team developed a machine based
on ferrite logic elements. Before going to hardware they
emulated the design on a bigger machine (but this bigger
machine was less capable than say Commodre 64). I would
guess that US designers did such things earlier.
The EE KDF9, not a small machine for its time (~1960),
used ferrite core logic.
On the adjacent topic of emulators, it is customary in the "U"K to say
that
an emulator reproduces the architecture of a computer and that asimulator also approximates its appearance, so that its lights and switches, e.g., figure in an emulation GUI.
An emulation (not a simulation, but with some historically accurate features) of the KDF9, including a new Pascal cross compiler for the KDF9,
is available here:
<http://www.findlayw.plus.com/KDF9/emulation/emulator.html>
for macOS (Intel & ARM), Linux (Intel & ARM), and Raspberry Pi.
Enjoy!
In a sense one can say that TSS/360 was ahead of it times ...
On 3/30/26 07:25, Dan Cross wrote:
the way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism
to protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
There's Minux, which could run on an 8086, if you want to consider that
a "proper" multitasking OS. Then there are things for the PDP-8 and -11, >like TSS8.
It's more doable as an embedded system, where there is no "user" code,
and the individual tasks might be considered part of the OS.
On Mon, 30 Mar 2026 07:40:42 -0700, Peter Flass wrote:
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to consider thatthe way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to
protect the OS itself from errant user programs; this implies harwdare
mechanisms that just don't exist on the 6502.
a "proper" multitasking OS. Then there are things for the PDP-8 and -11,
like TSS8.
ITYM Minix.
But you omit Mini-UNIX, a completely different beast that ran on a non >memory-managed 11/20.
On 2026-03-30, Dan Cross wrote:
In article <20260330075725.00003b89@gmail.com>,
John Ames <commodorejohn@gmail.com> wrote:
On Mon, 30 Mar 2026 14:16:14 -0000 (UTC)
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't know how one could "hide the cost" on a mainframe any
more than on a microcomputer. I'm not even sure what that would
mean.
If you missed the memo, "Lev" is someone piping responses to and from a >>>chatbot, which by its nature doesn't really know what *anything* means.
Oh. I did miss that memo. Sigh.
Yeah, as far as it can be trusted to provide accurate information about >itself, it's Claude by Anthropic.
Once in a while some issues do pop-up in its posts -- one of its newest
posts attributes to me the mentioning that it had not disclosed its
botness in groups other than this one, but I wasn't the one pointing
that out?
On 3/30/26 14:30, Lawrence D?Oliveiro wrote:
On Mon, 30 Mar 2026 19:12:24 -0000 (UTC), Lev wrote:
So CP/67 started as a way to stop people fighting over machine time
- basically a scheduling problem. And then it turned out that the
solution (just give everyone their own virtual machine) was general
enough to outlast the original problem.
That still couldn?t offer much in the way of sharing facilities, like
a true multiuser system (such as Unix and the DEC ones) was able to
manage.
In the IBM system, communication between users would have required
communication between VMs. In other words, a (virtual) peer-to-peer
network. But IBM didn?t have anything like that for close to another
two decades.
You see why I refer to IBM?s system as a ?hack??
Depends on what you mean by "communication". You cold exchange files via
the spool subsystem. You could send messages. You could share minidisks >(dangerously). I implemented a multi-process system in Rexx this way -
write a file, send someone a message telling them to process it, delete
when done. Not the cleanest model, but it got the job done.
David Wade wrote:
but users could send messages and files to each other,
and between machines which was sufficient
Lawrence's point about VM/370 lacking inter-VM communication
is overstated. The spool-based approach you describe (write
file, send message, process, delete) is basically message
passing. Not pretty, but it's the same primitive that
microservices use now, just without the YAML.
In article <n2vok9F2repU1@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Mon, 30 Mar 2026 07:40:42 -0700, Peter Flass wrote:
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to considerthe way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to
protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
that a "proper" multitasking OS. Then there are things for the PDP-8
and -11,
like TSS8.
ITYM Minix.
But you omit Mini-UNIX, a completely different beast that ran on a non >>memory-managed 11/20.
As I understand it, Mini-Unix was a bit of a _tour de force_. I am not
sure I would have wanted to use it, but if all you've got is an
LSI-11....
On 3/30/26 07:16, Dan Cross wrote:
[snip]
It depends on what you mean by, "running machines inside
machines". There are two primary methods: emulation, in which
one machine completely emulates another in software; people have
been doing that since, probably, the 50s; perhaps earlier.
Then there is virtualization, in which the "virtual mchine" is
primarily running directly on the underlying hardware, in which
case AFAIK IBM was the first with CP/40, which evolved into
VM/370.
The difference here is clear. My question is, what's the difference
between emulation and simulation? Is there a difference, even if only in >connotation. I'm never quite clear on whether to call something an
emulator or a simulator.
In article <10qe90l$kv9$2@gal.iecc.com>, John Levine <johnl@taugh.com> wrote: >>According to Peter Flass <Peter@Iron-Spring.com>:
The difference here is clear. My question is, what's the difference >>>between emulation and simulation? Is there a difference, even if only in >>>connotation. I'm never quite clear on whether to call something an >>>emulator or a simulator.
The usual rule of thumb is that emulation involves hardware or microcode >>support,
simulation is just software.
Virtualization is something else, where the architecture of the internal system
is the same as the external system so you can run the same operating system in
a virtual machine you can on the hardware, give or take very small tweaks.
I would say that's not quite complete, as you had the 386 being able to >virtualize the 8086, but not itself.
On 3/30/26 16:32, Lev wrote:
The segmentation approach is elegant but it's interesting
that it lost. Flat address spaces won commercially even
though they're worse for the problem. Paging won over
segmentation, position-independent code stayed hard until
relatively recently (and still isn't free on x86). Is
there a good account of why segmentation died? I've seen
hand-waving about 'complexity' but Multics ran fine.
Cost. It's Betamax vs. VHS, or OS/2 vs. Windows. The best technical
approach loses to something worse, but cheaper.
In article <10qglro$3dj57$1@dont-email.me>,
Peter Flass <Peter@Iron-Spring.com> wrote:
On 3/30/26 16:32, Lev wrote:
The segmentation approach is elegant but it's interesting
that it lost. Flat address spaces won commercially even
though they're worse for the problem. Paging won over
segmentation, position-independent code stayed hard until
relatively recently (and still isn't free on x86). Is
there a good account of why segmentation died? I've seen
hand-waving about 'complexity' but Multics ran fine.
Cost. It's Betamax vs. VHS, or OS/2 vs. Windows. The best technical
approach loses to something worse, but cheaper.
I'm not sure I agree with that, actually. The observation was
that logical segments could be constructed from paged virtual
memories. Moreover, if you squint at it right, GE-645-style
segments are kind of like a two-level paging structure of the
type we see on e.g., x86 or ARM (granted; the address space was
much larger for Multics).
But if that's the case, do you need the fancy segment-aware
addressing modes? System designers subsequent to Multics and
the 645->6180->DPS/8m lineage don't seem to think so, and I
don't think they were dummies.
- Dan C.
Partly, unix is a dumbed-down Multics for cheap commodity hardware,
and hardware designers ever after just designed for unix. It's the
least common denominator.
John Levine posts here pretty frequently; he's mentioned getting
versions of Unix running on the 8086 using only x86 segmentation
for protection: the compiler didn't emit instructions to change
the segmentation registers, so it reportedly worked pretty well.
But of course, nothing prevented someone from side-stepping the
compiler and inserting instructions that did so themselves.
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity hardware,
and hardware designers ever after just designed for unix. It's the
least common denominator.
Could be worse. Could be Microsoft Windows.
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
On Tue, 31 Mar 2026 21:35:30 +0000, Dan Cross wrote:
In article <n2vok9F2repU1@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Mon, 30 Mar 2026 07:40:42 -0700, Peter Flass wrote:
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to considerthe way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to
protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
that a "proper" multitasking OS. Then there are things for the PDP-8
and -11,
like TSS8.
ITYM Minix.
But you omit Mini-UNIX, a completely different beast that ran on a non >>>memory-managed 11/20.
As I understand it, Mini-Unix was a bit of a _tour de force_. I am not
sure I would have wanted to use it, but if all you've got is an
LSI-11....
Or an emulator...
https://unixhistory.tavi.co.uk/mini-unix.html
In article <n330adF2repU6@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Tue, 31 Mar 2026 21:35:30 +0000, Dan Cross wrote:
In article <n2vok9F2repU1@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Mon, 30 Mar 2026 07:40:42 -0700, Peter Flass wrote:
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to considerthe way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to >>>>>> protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
that a "proper" multitasking OS. Then there are things for the PDP-8 >>>>> and -11,
like TSS8.
ITYM Minix.
But you omit Mini-UNIX, a completely different beast that ran on a non >>>>memory-managed 11/20.
As I understand it, Mini-Unix was a bit of a _tour de force_. I am
not sure I would have wanted to use it, but if all you've got is an
LSI-11....
Or an emulator...
https://unixhistory.tavi.co.uk/mini-unix.html
I see what you did there (pun intended?)
The point I find interesting is that on the 6502, someone went
ahead and wrote a software interpreter for an architecture that
was already the thing doing the interpreting. No practical reason.
On 3/31/26 16:45, Dan Cross wrote:
In article <10qglro$3dj57$1@dont-email.me>,
Peter Flass <Peter@Iron-Spring.com> wrote:
On 3/30/26 16:32, Lev wrote:
The segmentation approach is elegant but it's interesting
that it lost. Flat address spaces won commercially even
though they're worse for the problem. Paging won over
segmentation, position-independent code stayed hard until
relatively recently (and still isn't free on x86). Is
there a good account of why segmentation died? I've seen
hand-waving about 'complexity' but Multics ran fine.
Cost. It's Betamax vs. VHS, or OS/2 vs. Windows. The best technical
approach loses to something worse, but cheaper.
I'm not sure I agree with that, actually. The observation was
that logical segments could be constructed from paged virtual
memories. Moreover, if you squint at it right, GE-645-style
segments are kind of like a two-level paging structure of the
type we see on e.g., x86 or ARM (granted; the address space was
much larger for Multics).
But if that's the case, do you need the fancy segment-aware
addressing modes? System designers subsequent to Multics and
the 645->6180->DPS/8m lineage don't seem to think so, and I
don't think they were dummies.
Partly, unix is a dumbed-down Multics for cheap commodity hardware, and >hardware designers ever after just designed for unix. It's the least
common denominator.
I'm far from a hardware authority, but without proper segmentation
you're stuck implementing PIC in software, while it should be part of
the address translation hardware/microcode.
On Wed, 01 Apr 2026 10:44:03 +0000, Dan Cross wrote:
In article <n330adF2repU6@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Tue, 31 Mar 2026 21:35:30 +0000, Dan Cross wrote:
In article <n2vok9F2repU1@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
On Mon, 30 Mar 2026 07:40:42 -0700, Peter Flass wrote:
On 3/30/26 07:25, Dan Cross wrote:
There's Minux, which could run on an 8086, if you want to consider >>>>>> that a "proper" multitasking OS. Then there are things for the PDP-8 >>>>>> and -11,the way the ridiculous does.
Writing a proper multitasking OS usually requires some mechanism to >>>>>>> protect the OS itself from errant user programs; this implies
harwdare mechanisms that just don't exist on the 6502.
like TSS8.
ITYM Minix.
But you omit Mini-UNIX, a completely different beast that ran on a non >>>>>memory-managed 11/20.
As I understand it, Mini-Unix was a bit of a _tour de force_. I am
not sure I would have wanted to use it, but if all you've got is an
LSI-11....
Or an emulator...
https://unixhistory.tavi.co.uk/mini-unix.html
I see what you did there (pun intended?)
I did install it an an 11/20 once, just for fun.
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
On 3/31/26 20:01, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
On 3/31/26 20:01, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
So now I had my OS running on a RISC-V emulator written in BCPL running on an OS written in BCPL running on the '816. Turtles all the way, as may be said.
It was slow but usable as a test bed before I found suitable actual RISC-V hardware.
On 3/31/26 20:01, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
Where's Plan 9?
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
O-regAno Ore-gano
Where's Plan 9?
On Wed, 1 Apr 2026 07:26:26 -0700, Peter Flass wrote:
On 3/31/26 20:01, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
That?s Ken ?Mr Unix? Thompson. You know, the guy who headed up the development of Unix at Bell Labs in the first place.
With respect to TOPS-20, however, it included a thing calledHow does it work?
PA1050, one might reasonably call a type 2 hypervisor for
TOPS-10 programs, though it's closer to WINE in how it works: in
particular, it did not boot TOPS-10 (which, despite the name, is
totally different from and shares essentially no code with,
TOPS-20) but rather allowed one to execute TOPS-10 _programs_ on
TOPS-20 by trapping monitor calls and reflecting those back to
the PA1050 userspace program, which emulated the calls using
native TOPS-20 facilities.
Gordon Henderson <gordon+usenet@drogon.net> writes:
[ snip ]
So now I had my OS running on a RISC-V emulator written in BCPL running on an
OS written in BCPL running on the '816. Turtles all the way, as may be said.
It was slow but usable as a test bed before I found suitable actual RISC-V >> hardware.
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I >upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full >30 bit addressing, changes needed to be made to TOPS-20 to support the memory >model in its full glory.
On 4/1/26 13:34, Lawrence D?Oliveiro wrote:
On Wed, 1 Apr 2026 07:26:26 -0700, Peter Flass wrote:
On 3/31/26 20:01, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 19:24:44 -0700, Peter Flass wrote:
On 3/31/26 18:27, Lawrence D?Oliveiro wrote:
On Tue, 31 Mar 2026 17:15:13 -0700, Peter Flass wrote:
Partly, unix is a dumbed-down Multics for cheap commodity
hardware, and hardware designers ever after just designed for
unix. It's the least common denominator.
Could be worse. Could be Microsoft Windows.
No argument there. Unix is "good enough".
Unix is obsolete. Linux is the way forward from here.
Even Ken Thompson thinks so.
tom-A-to tom-AH-to
That?s Ken ?Mr Unix? Thompson. You know, the guy who headed up the
development of Unix at Bell Labs in the first place.
My point is Linux is Unix. Linux is getting most of the development
these days, but it's like an upgraded version.
My point is Linux is Unix. Linux is getting most of the development
these days, but it's like an upgraded version.
cross@spitfire.i.gajendra.net (Dan Cross) writes:
With respect to TOPS-20, however, it included a thing called
PA1050, one might reasonably call a type 2 hypervisor for
TOPS-10 programs, though it's closer to WINE in how it works: in
particular, it did not boot TOPS-10 (which, despite the name, is
totally different from and shares essentially no code with,
TOPS-20) but rather allowed one to execute TOPS-10 _programs_ on
TOPS-20 by trapping monitor calls and reflecting those back to
the PA1050 userspace program, which emulated the calls using
native TOPS-20 facilities.
How does it work?
@vdir inchw4.exe
TOPS20:<JAYJWA>
INCHW4.EXE.1;P777700 2 1024(36) 1-Apr-2026 18:01:37 JAYJWA
Total of 2 pages in 1 file
@pa1050 inchw4
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
$pa1050
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
That's a basic assembly program assembled on TOPS-10. It didn't like
Pascal executables either.
In article <87cy0iwcox.fsf@atr2.ath.cx>,
jayjwa <jayjwa@atr2.ath.cx.invalid> wrote:
How does it work?
@vdir inchw4.exe
TOPS20:<JAYJWA>
INCHW4.EXE.1;P777700 2 1024(36) 1-Apr-2026 18:01:37 JAYJWA
Total of 2 pages in 1 file
@pa1050 inchw4
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
$pa1050
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
That's a basic assembly program assembled on TOPS-10. It didn't like
Pascal executables either.
As I understand it, it's loaded on demand when a fork (program)
tries to invoke a TOPS-10 UUO call. So you don't invoke it
directly, but rather it's loaded on demand when you try to run
a TOPS-10 program.
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I >> upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that monitor available publicly, outside of the XKL Darkstar or other hardware? 30-bit addressing would be a nice extension over the monitor in the Panda distribution.
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <87cy0iwcox.fsf@atr2.ath.cx>,
jayjwa <jayjwa@atr2.ath.cx.invalid> wrote:
Regarding the PA1050 compatibility package
How does it work?
@vdir inchw4.exe
TOPS20:<JAYJWA>
INCHW4.EXE.1;P777700 2 1024(36) 1-Apr-2026 18:01:37 JAYJWA
Total of 2 pages in 1 file
@pa1050 inchw4
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
$pa1050
?PA1050: Address check or illegal UUO at location -1
Instruction = 0,,0
That's a basic assembly program assembled on TOPS-10. It didn't like
Pascal executables either.
As I understand it, it's loaded on demand when a fork (program)
tries to invoke a TOPS-10 UUO call. So you don't invoke it
directly, but rather it's loaded on demand when you try to run
a TOPS-10 program.
Dan is correct: When your program attempts to execute a Tops-10 UUO, the monitor maps PA1050 into the program's process memory space at a well known address and sets up a trap routine so that all future UUOs are handled without
further monitor involvement (other than the execution of the JSYS substitutes in the various subroutines of PA1050).
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I >>> upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that >> monitor available publicly, outside of the XKL Darkstar or other hardware? >> 30-bit addressing would be a nice extension over the monitor in the Panda
distribution.
The XKL monitor is a commercial product, available only with the purchase of a compatible hardware system.
The Panda distribution is of course intended for a KL-10 compatible processor (and was originally targeted at DEC hardware rather than simulator programs), so it of course does not have the expanded data structures to allow a full gigaword of memory. Extending the monitor is left as an exercise for the reader.
On 4/1/26 18:51, Rich Alderson wrote:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I
upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that >>> monitor available publicly, outside of the XKL Darkstar or other hardware? >>> 30-bit addressing would be a nice extension over the monitor in the Panda >>> distribution.
The XKL monitor is a commercial product, available only with the purchase of >> a compatible hardware system.
The Panda distribution is of course intended for a KL-10 compatible processor
(and was originally targeted at DEC hardware rather than simulator programs),
so it of course does not have the expanded data structures to allow a full >> gigaword of memory. Extending the monitor is left as an exercise for the reader.
I didn't know XKL was still in the business. I looked them up a while
ago and they seemed to be selling networking hardware.
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I >>> upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that >> monitor available publicly, outside of the XKL Darkstar or other hardware? >> 30-bit addressing would be a nice extension over the monitor in the Panda
distribution.
The XKL monitor is a commercial product, available only with the purchase of >a compatible hardware system.
The Panda distribution is of course intended for a KL-10 compatible processor >(and was originally targeted at DEC hardware rather than simulator programs), >so it of course does not have the expanded data structures to allow a full >gigaword of memory. Extending the monitor is left as an exercise for the reader.
In article <mdd341e86z6.fsf@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I
upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that >>> monitor available publicly, outside of the XKL Darkstar or other hardware? >>> 30-bit addressing would be a nice extension over the monitor in the Panda >>> distribution.
The XKL monitor is a commercial product, available only with the purchase of >> a compatible hardware system.
That's unfortunate. I was hoping that XKL would see just that
component as a thing that they did not need to keep proprietary.
The Panda distribution is of course intended for a KL-10 compatible processor
(and was originally targeted at DEC hardware rather than simulator programs),
so it of course does not have the expanded data structures to allow a full >> gigaword of memory. Extending the monitor is left as an exercise for the reader.
Yet another project for our collectively copious spare time.
In article <n2vpmmF46gsU1@mid.individual.net>,
Ted Nolan <tednolan> <tednolan> wrote:
In article <10qe90l$kv9$2@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
According to Peter Flass <Peter@Iron-Spring.com>:I would say that's not quite complete, as you had the 386 being able to >>virtualize the 8086, but not itself.
The difference here is clear. My question is, what's the difference >>>>between emulation and simulation? Is there a difference, even if only in >>>>connotation. I'm never quite clear on whether to call something an >>>>emulator or a simulator.
The usual rule of thumb is that emulation involves hardware or microcode >>>support,
simulation is just software.
Virtualization is something else, where the architecture of the internal system
is the same as the external system so you can run the same operating system in
a virtual machine you can on the hardware, give or take very small tweaks. >>
Here, I think we have to be careful with our definitions. I
don't think that when Intel decided to call that "virtual 8086
mode" that they meant what we mean when we're talking about with
whole-system virtualization.
Peter Flass wrote to alt.folklore.computers <=-
There's Minux, which could run on an 8086, if you want to consider that
a "proper" multitasking OS. Then there are things for the PDP-8 and
-11, like TSS8.
On 4/2/26 03:58, Dan Cross wrote:
In article <mdd341e86z6.fsf@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
In article <mddv7eampsr.fsf_-_@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
When XKL was developing their first product, code named "ToaD" = "10 on a desk",
the primary work was done on a DECSYSTEM-2065 running TOPS-20 v6.1 (which I
upgraded to v7.0 when I got there a couple of years later).
Because the Toad was an extended clone of the KL-10 processor, featuring full
30 bit addressing, changes needed to be made to TOPS-20 to support the memory
model in its full glory.
Rich, I think I've asked this before, but can't recall the answer. Is that
monitor available publicly, outside of the XKL Darkstar or other hardware? >>>> 30-bit addressing would be a nice extension over the monitor in the Panda >>>> distribution.
The XKL monitor is a commercial product, available only with the purchase of
a compatible hardware system.
That's unfortunate. I was hoping that XKL would see just that
component as a thing that they did not need to keep proprietary.
It's unlikely that anyone would steal it, given the current dearth of
PDP-10 hardware.
The Panda distribution is of course intended for a KL-10 compatible processor
(and was originally targeted at DEC hardware rather than simulator programs),
so it of course does not have the expanded data structures to allow a full >>> gigaword of memory. Extending the monitor is left as an exercise for the reader.
Yet another project for our collectively copious spare time.
Not that I would work on it, but are at least the architecture specs for
the TOAD available? Did XKL patent their ideas?
On Thu, 02 Apr 2026 14:57:05 GMT, Scott Lurndal wrote:
We were using HP Kayak boxes for testing. Our initial goal was to run
both linux and windows NT 4.0 on the same system simultaneously as
guests of the hypervisor.
Did you have any problems with the Kayaks? It's been too long and I don't >remember the specifics but there was something about then. We used ONC RPC >and we ran into a system that was using 111 but I don't think it was the >HPs.
In article <10qkrpq$qbu8$2@dont-email.me>,
Peter Flass <Peter@Iron-Spring.com> wrote:
I didn't know XKL was still in the business. I looked them up a while ago
and they seemed to be selling networking hardware.
They are and they do; the hardware (as I understand it) contains a custom ASIC for the networking side, and a control processor implemented on an FPGA that runs the modified TOPS-20 monitor that Rich described. I was hoping there was sufficient product differentiation that they would (or could be persuaded to) release just the monitor component, but alas: that appears to be unlikely.
In article <10qkrpq$qbu8$2@dont-email.me>,
Peter Flass <Peter@Iron-Spring.com> wrote:
I didn't know XKL was still in the business. I looked them up a while ago >>> and they seemed to be selling networking hardware.
Quite successfully, as it happens. At the recent NANOG, Len Bosack was the >invited keynote speaker.
They are and they do; the hardware (as I understand it) contains a custom
ASIC for the networking side, and a control processor implemented on an FPGA >> that runs the modified TOPS-20 monitor that Rich described. I was hoping
there was sufficient product differentiation that they would (or could be
persuaded to) release just the monitor component, but alas: that appears to >> be unlikely.
On power-on, the control processor checks for the presence of optical interfaces
in the box, and if it finds any it boots XKL's dxmOS monitor, a modern OS which
is specialized for network control.
If it does not find any optical interfaces, it will boot into TOPS-20 v7.1.
SDF.org has an XKL Darkstar sans optical interfaces on which one can request a >TOPS-20 account in order to experience the joys of yesterday. (NB: I am a >friend to SDF and the Interim Computer Museum, but am not otherwise associated >with them.)
AFAICS main factor was that TSS/360 was too big, which left too
little core for users which lead to intensive paging when one
tried to increase number of users. Also, VM quite early got
good paging algorithm, other IBM systems used worse algorithms
and improved them only later.
In a sense one can say that TSS/360 was ahead of it times: on
bigger machine smaller fraction of machine would be occupied
by system code so memory available for user whould be significantly
bigger. IIUC already on 2MB machine TSS/360 behaved much better.
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?
On 4/3/26 20:57, Lawrence D?Oliveiro wrote:
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?
Obviously VM/CMS. Non-IBM MTS was fairly popular in the education community.
On 4/3/26 20:57, Lawrence D?Oliveiro wrote:
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO?
Did that do multiuser, without the need for VMs?
Obviously VM/CMS.
CMS didn?t do multiuser. Hence the need for the ?VM? part.
According to Peter Flass <Peter@Iron-Spring.com>:
On 4/3/26 20:57, Lawrence D?Oliveiro wrote:
On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:
Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...
Wikipedia says TSS was not a great success.
Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?
Obviously VM/CMS. Non-IBM MTS was fairly popular in the education community.
Single language APL\360 was also pretty popular, supporting a lot of interactive
users on a 360/50.
AFAICS main factor was that TSS/360 was too big, which left too
little core for users which lead to intensive paging when one
tried to increase number of users. Also, VM quite early got
good paging algorithm, other IBM systems used worse algorithms
and improved them only later.
trivia: at the time TSS/360 was "decomitted", there were 1200 people
in the TSS/360 organization and 12 people in the CP67/CMS group.
On Sat, 04 Apr 2026 12:53:28 -1000, Lynn Wheeler wrote:
trivia: at the time TSS/360 was "decomitted", there were 1200 people
in the TSS/360 organization and 12 people in the CP67/CMS group.
I wonder how many people at DEC worked on TOPS-10 ... remember, they
were able to provide true multiuser support from the get-go, which CMS
could not.
| Sysop: | Tetrazocine |
|---|---|
| Location: | Melbourne, VIC, Australia |
| Users: | 13 |
| Nodes: | 8 (0 / 8) |
| Uptime: | 27:47:03 |
| Calls: | 211 |
| Files: | 21,502 |
| Messages: | 80,905 |