John Levine <johnl@taugh.com> wrote:
> According to David Wade <g4ugm@dave.invalid>:
>>> The TSS/360 story is new to me. Twenty users on a 360/67 and it
>>> struggled? How much of that was the large-team bloat you're describing
>>> versus actual architectural problems?
>> Not sure, do you mean the software architecture of TSS or the hardware
>> architecture of the 360/67? At Newcastle Uni (UK) I think they/we
>> managed more users than that with reasonable response time on a 360/67.
> A combination of overeager software architecture and implementation.
> CP/67 and MTS both got good performance from the same hardware.

AFAICS the main factor was that TSS/360 was too big, which left too
little core for users and led to intensive paging when one tried to
increase the number of users. Also, VM quite early got a good paging
algorithm; other IBM systems used worse algorithms and improved them
only later.

In a sense one can say that TSS/360 was ahead of its time: on a bigger
machine a smaller fraction of the machine would be occupied by system
code, so the memory available for users would be significantly bigger.
IIUC, already on a 2 MB machine TSS/360 behaved much better.
John Levine <johnl@taugh.com> wrote:
> According to Waldek Hebisch <antispam@fricas.org>:
>> AFAICS the main factor was that TSS/360 was too big, which left too
>> little core for users and led to intensive paging when one tried to
>> increase the number of users. Also, VM quite early got a good paging
>> algorithm; other IBM systems used worse algorithms and improved them
>> only later.
> That was certainly part of it. It was also quite buggy, with the
> bugginess inversely proportional to how heavily used a component was.
> The file system worked pretty well but I gather magtape support didn't.
>> In a sense one can say that TSS/360 was ahead of its time: on a bigger
>> machine a smaller fraction of the machine would be occupied by system
>> code, so the memory available for users would be significantly bigger.
>> IIUC, already on a 2 MB machine TSS/360 behaved much better.
> Well, there's a rule of thumb that the way you get good performance from
> a paging system is to have enough RAM that you don't have to page.

To explain more what I mean: if one has a 1 MB machine and the OS takes
800 kB for itself, then one has about 200 kB for user programs. If the
OS takes 400 kB, then one has about 600 kB for user programs. In this
case the smaller system effectively has 3 times more memory available
for user programs. On a 2 MB machine (assuming the same OS usage) the
ratio is closer to 4/3, still giving an advantage to the smaller
system, but the advantage is much smaller.
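The arithmetic above can be sketched in a few lines (a minimal
illustration using the round figures from the text, with 1 MB taken as
1000 kB as in the example):

```python
# Memory left for user programs after the OS takes its share,
# comparing a "big" (800 kB) and a "small" (400 kB) resident OS
# on a 1 MB and a 2 MB machine. Figures are the ones in the text.

def user_memory(total_kb: int, os_kb: int) -> int:
    """Core remaining for user programs on a machine of total_kb."""
    return total_kb - os_kb

BIG_OS, SMALL_OS = 800, 400  # kB of resident system code

for total in (1000, 2000):   # 1 MB and 2 MB machines
    big = user_memory(total, BIG_OS)
    small = user_memory(total, SMALL_OS)
    print(f"{total} kB machine: {big} kB vs {small} kB for users, "
          f"ratio {small / big:.2f}")
# → 1000 kB machine: 200 kB vs 600 kB for users, ratio 3.00
# → 2000 kB machine: 1200 kB vs 1600 kB for users, ratio 1.33
```

The ratio shrinking from 3 to 4/3 as total memory doubles is the whole
point: fixed OS overhead matters less and less on bigger machines.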
In article <10qk9ea$6jl0$1@paganini.bofh.team>,
Waldek Hebisch <antispam@fricas.org> wrote:
> To explain more what I mean: if one has a 1 MB machine and the OS takes
> 800 kB for itself, then one has about 200 kB for user programs. If the
> OS takes 400 kB, then one has about 600 kB for user programs. In this
> case the smaller system effectively has 3 times more memory available
> for user programs. On a 2 MB machine (assuming the same OS usage) the
> ratio is closer to 4/3, still giving an advantage to the smaller
> system, but the advantage is much smaller.
Relatedly, I saw a talk recently by an English gent where he
talked about a similar phenomenon: if you're driving somewhere
and you're going 20 MPH (or KPH, if you prefer; the important
thing here is the ratio, not the unit), then increasing speed by
10 MPH to 30 is a significant difference and makes a measurable
difference in your arrival time at your destination. On the
other hand, if you're doing 80, then increasing speed by 10 to
90 is almost immeasurable and just (in his words) "makes you a
dickhead."
Well, I thought it was funny.
- Dan C.
Back when I was doing cross-country trips I gave this some thought, and
at one point came to the same conclusion. Is it worth it to save five
minutes on a four-hour trip (or whatever, don't flame me), when other
contingencies can easily cause you to gain or lose more than that by
making a rest stop?
This leads to what I call my 5% rule. In many cases, a difference of
less than 5% is either insignificant or gets washed out by other
factors - so you might as well not worry about it.
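Putting the driving example and the 5% rule together in code (a quick
sketch; over a fixed distance the distance cancels out, so only the
speed ratio matters):

```python
# Fraction of travel time saved by raising speed from v_old to v_new
# over a fixed distance: t = d/v, so the saving is 1 - v_old/v_new.

def time_saved_fraction(v_old: float, v_new: float) -> float:
    return 1.0 - v_old / v_new

# At low speed the same +10 is a big deal...
print(f"20 -> 30 saves {time_saved_fraction(20, 30):.0%} of the trip time")
# ...at high speed, much less so.
print(f"80 -> 90 saves {time_saved_fraction(80, 90):.0%} of the trip time")

# Charlie's case: five minutes on a four-hour trip.
print(f"5 min / 240 min = {5 / 240:.1%}  (below the 5% threshold)")
# → 20 -> 30 saves 33% of the trip time
# → 80 -> 90 saves 11% of the trip time
# → 5 min / 240 min = 2.1%  (below the 5% threshold)
```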
On Thu, 02 Apr 2026 16:58:50 GMT, Charlie Gibbs wrote:
> This leads to what I call my 5% rule. In many cases, a difference of
> less than 5% is either insignificant or gets washed out by other
> factors - so you might as well not worry about it.
Is Microsoft applying this principle to its QA on Windows releases
now, do you think?