On Wed, 4 Sep 2024 11:47:49 -0700, Peter Flass wrote:
This still doesn’t answer the question of why Linux is relatively more
popular compared to the BSDs. My impression is that BSD is considered to
be for hackers and Linux is for people who just want to use the system.
The BSDs date from the time when Unix systems were considered superior to anything else out there, whereas Linux grew up very much in the shadow of Microsoft.
One example illustrating the difference in mindset, I think, is that the Linux kernel can read any kind of disk partition format -- DOS, Apple, whatever. Whereas the BSDs still want a disk to be formatted according to their own system of “slices”.
On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
One example illustrating the difference in mindset, I think, is that
the Linux kernel can read any kind of disk partition format -- DOS,
Apple, whatever. Whereas the BSDs still want a disk to be formatted
according to their own system of “slices”.
Slices can lie under a PC partition.
On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Wed, 4 Sep 2024 11:47:49 -0700, Peter Flass wrote:
This still doesn’t answer the question of why Linux is relatively more popular compared to the BSDs. My impression is that BSD is considered
to be for hackers and Linux is for people who just want to use the
system.
The BSDs date from the time when Unix systems were considered superior
to anything else out there, whereas Linux grew up very much in the
shadow of Microsoft.
One example illustrating the difference in mindset, I think, is that
the Linux kernel can read any kind of disk partition format -- DOS,
Apple, whatever. Whereas the BSDs still want a disk to be formatted
according to their own system of “slices”.
Slices can lie under a PC partition.
On Fri, 4 Jul 2025 05:48:51 -0000 (UTC), anthk wrote:
On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
One example illustrating the difference in mindset, I think, is that
the Linux kernel can read any kind of disk partition format -- DOS,
Apple, whatever. Whereas the BSDs still want a disk to be formatted
according to their own system of “slices”.
Slices can lie under a PC partition.
And then there is the problem of the filesystems within those slices. On
the BSDs, the traditional filesystem is called “UFS”, but what one BSD variant means by “UFS” is not quite the same as what another BSD variant does.
The common Linux kernel shared across just about all distros supports
common standard filesystems. This is one reason why “distro-hopping” is a
common thing among Linux users, while any attempt to pull such an
equivalent stunt between BSD variants is going to be fraught with
pitfalls.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
The common Linux kernel shared across just about all distros supports
common standard filesystems. This is one reason why “distro-hopping” is a common thing among Linux users, while any attempt to pull such an
equivalent stunt between BSD variants is going to be fraught with
pitfalls.
How many of the people who would be "distro-hopping" re-use existing filesystems rather than re-installing completely from scratch?
How many of the people who would be "distro-hopping" re-use existing filesystems rather than re-installing completely from scratch?
I understand that you see a problem here, but I'm not sure that I do.
On 04.09.2024 22:08 Uhr Lawrence D'Oliveiro wrote:
One example illustrating the difference in mindset, I think, is that
the Linux kernel can read any kind of disk partition format -- DOS,
Apple, whatever. Whereas the BSDs still want a disk to be formatted
according to their own system of “slices”.
FreeBSD supports GPT and MBR too. IIRC it can also read various file
systems using additional software from the repo.
On 14 Sep 2024 02:50:05 -0300, Mike Spencer wrote:
(I do now, at last, have a cell phone, hate the touchscreen GUI, don't
know how to do anything except phone calls, text and wireless access
point. Where are the manpages?)
A minute’s silence for the legendary Debian-based Nokia N9.
Development was well under way by the time Microsoft’s mole, Stephen Elop, came in and decreed that the company would bet its entire future on the laughable Windows Phone. So he couldn’t kill it completely, but he could ensure that the first of this product line was also the last. It got
limited release in a few countries, garnered rave reviews wherever it was available, sold out what stock was available, and that was the end of it.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
“Unix-like” tends to mean “Linux-like” these days, let’s face it. Linux leads, the other *nixes follow.
I hope not. Linux gets shittier with each turd that drops from the
FreeDesktop people.
Like I said, if you don’t like Linux distros infested with FreeDesktop-
isms, don’t use them. There’s no need to bring up all this bile: all it’s doing is aggravating your ulcers. Open Source is all about choice.
The choices are drying up. Increasingly, decisions are made by distros instead of users, and you only have a choice if there are any distros
left that haven't caved or collapsed, or if you have the time, money,
and charisma to create AND MAINTAIN a new distro. That used to not be necessary simply to have a choice. It used to be sufficient to install
a decent distro. The main distros used to let you have far more choice
than they do today.
Why do you hate the Free Desktop folks? They are at the forefront of
trying to modernize the *nix GUI.
The Linux GUI had no need of such modernization, especially since all
"modernization" really is, is Windowsization ...
Actually, it’s not. Linux GUIs very much go their own way; there are ones that copy Windows and even Apple, it is true, but that’s just to appeal to those who prefer that look.
Systemd copies Windows and Apple at a lower level, and it continues
to be forced on the Linux community from every direction. I don't
even think Devuan will be able to resist the pressure to run Systemd
for much longer. And every distro is adopting iproute2, the main
effect of which is to make Linux networking skills less transferrable
to BSD (basically vendor-lock).
There are others that go in quite different
directions. The customizability of KDE Plasma, for example, goes beyond
anything you’ll find in any proprietary platform.
And the beauty of Linux is, you can install any number of these GUI
environments at once, and switching between them is as easy as logging out and logging in again. You don’t even have to reboot.
Linux was more customizable in the past, and Wayland makes the problem
worse because there will always be only a few compositors, due to them
having to be so complicated. Plus, we are now seeing with the Hyprland
fiasco that distros will remove good compositors from their package management system if their managers perceive any of the authors of that compositor to have committed a thoughtcrime.
I used to run GNOME, and then GNOME 3 came out, and Debian released
it under the same package name, as if it was just the next version
of GNOME. What it actually was, was a turd to the face directly out
of the asses of the FreeDesktop-influenced GNOME developers. It was completely static, with no customizability at all. They promised to add customizability back later, but GNOME 3 was so intolerable, that I had
to find an alternative. ANY alternative. I tried KDE, but it had gotten
a shitty rewrite, just like GNOME, and had become just as intolerable
as GNOME. So I switched to XFCE for years, even though it was inferior
to GNOME and KDE as I previously knew them, until I finally noticed that
MATE was available on Debian (for now-- I assume it will get removed
at some point, or it will come to suck just as much as GNOME).
And the reasoning behind the GNOME rewrite was about as anti-user as
it's possible to be: The FreeDesktop faggots had decided that desktop
PCs were obsolete, and that we had to march towards the brave new
future, in which we'd trade our desktop machines for tablets and
fucking phones. Microsoft had the same idea, and released Windows 8
the following year, which had a bunch of stupid features that were specifically for mobile toys. They'd have taken our desktop computers
by force if they had the power to do so. They have more power today
than they did back then, so we might see a revival of the whole
"desktops are obsolete" idea in the next decade or so.
I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
but people still praise it for some fucked-up reason. I assume the same
thing is going on with KDE. I'm more likely to try CDE now that it's open-source, than KDE.
Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
and the GNUstep icon theme for XDG.
Lxappearance will allow you to set your GTK theme/icons/fonts with ease
so it matches the WM one.
On Sat, 5 Jul 2025 21:35:14 +0200, Marco Moock wrote:
FreeBSD supports GPT and MBR too. IIRC it can also read various file systems using additional software from the repo.
What about interchanging UFS volumes with other BSDs?
On Sun, 6 Jul 2025 06:08:14 -0000 (UTC), anthk wrote:
Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
and the GNUstep icon theme for XDG.
Lxappearance will allow you to set your GTK theme/icons/fonts with ease
so it matches the WM one.
Also don’t forget the Mate and Cinnamon projects: Mate originated from GNOME/GTK 2, while Cinnamon is an offshoot from GNOME/GTK 3.
If you took it to a little endian machine all the bytes were the
wrong way around. This was because there was no model in which hard
drives would move between machines so they just dumped in-memory
structs to disc.
On 06 Jul 2025 12:43:29 +0100 (BST), Theo wrote:
If you took it to a little endian machine all the bytes were the
wrong way around. This was because there was no model in which hard
drives would move between machines so they just dumped in-memory
structs to disc.
But they had removable disk packs in those days. Also floppies, magneto-optical and optical media.
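(A minimal C sketch of the hazard Theo describes: dumping an in-memory struct straight to disc bakes the writer's byte order and padding into the file. The record layout here is invented purely for illustration.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical on-disc record, declared the way an early driver might have. */
    struct dir_entry {
        uint32_t block;        /* stored in whatever byte order the CPU uses */
        uint16_t length;
        char     name[14];
    };

    /* Naive: byte order and struct padding travel with the file. */
    static void save_raw(FILE *f, const struct dir_entry *e)
    {
        fwrite(e, sizeof *e, 1, f);
    }

    /* Portable: serialise each field in an agreed (here big-endian) order. */
    static void save_portable(FILE *f, const struct dir_entry *e)
    {
        unsigned char buf[4 + 2 + 14];
        buf[0] = e->block  >> 24;  buf[1] = e->block >> 16;
        buf[2] = e->block  >>  8;  buf[3] = e->block;
        buf[4] = e->length >>  8;  buf[5] = e->length;
        for (int i = 0; i < 14; i++) buf[6 + i] = (unsigned char)e->name[i];
        fwrite(buf, sizeof buf, 1, f);
    }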
Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm
not sure what was common for M-O drives.
Removable disc packs mostly came later I think ...
At Yale our PDP-11 originally had an RK05 single platter 1MB drive in
1974, then we upgraded to a pair of RP02 washing machine sized drives,
20MB each.
We also had a PDP-10 which also used the same RP02 disks. I think I
once experimented with trying to write a PDP-11 formatted disk on the
-10, reading the file system from tape. It was rather exciting since
the 36 bit PDP-10 mapped its words into the disk's 8 bit bytes in
non-obvious ways.
I'm not 100% sure, but I think this company, hardly more than a footnote in computer history, was the cause of little-endian processors.
There was already a battle between bit 0 on the left or right in 1950s mainframes.
According to Theo <theom+news@chiark.greenend.org.uk>:
Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.
Uh, what? Removable disk packs date from about 1960.
On 2024-08-27, Sebastian <sebastian@here.com.invalid> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
“Unix-like” tends to mean “Linux-like” these days, let’s face it. Linux leads, the other *nixes follow.
I hope not. Linux gets shittier with each turd that drops from the
FreeDesktop people.
Like I said, if you don’t like Linux distros infested with FreeDesktop-
isms, don’t use them. There’s no need to bring up all this bile: all it’s doing is aggravating your ulcers. Open Source is all about choice.
The choices are drying up. Increasingly, decisions are made by distros
instead of users, and you only have a choice if there are any distros
left that haven't caved or collapsed, or if you have the time, money,
and charisma to create AND MAINTAIN a new distro. That used to not be
necessary simply to have a choice. It used to be sufficient to install
a decent distro. The main distros used to let you have far more choice
than they do today.
Why do you hate the Free Desktop folks? They are at the forefront of trying to modernize the *nix GUI.
The Linux GUI had no need of such modernization, especially since all
"modernization" really is, is Windowsization ...
Actually, it’s not. Linux GUIs very much go their own way; there are ones that copy Windows and even Apple, it is true, but that’s just to appeal to those who prefer that look.
Systemd copies Windows and Apple at a lower level, and it continues
to be forced on the Linux community from every direction. I don't
even think Devuan will be able to resist the pressure to run Systemd
for much longer. And every distro is adopting iproute2, the main
effect of which is to make Linux networking skills less transferrable
to BSD (basically vendor-lock).
There are others that go in quite different
directions. The customizability of KDE Plasma, for example, goes beyond anything you’ll find in any proprietary platform.
And the beauty of Linux is, you can install any number of these GUI
environments at once, and switching between them is as easy as logging out and logging in again. You don’t even have to reboot.
Linux was more customizable in the past, and Wayland makes the problem
worse because there will always be only a few compositors, due to them
having to be so complicated. Plus, we are now seeing with the Hyprland
fiasco that distros will remove good compositors from their package
management system if their managers perceive any of the authors of that
compositor to have committed a thoughtcrime.
I used to run GNOME, and then GNOME 3 came out, and Debian released
it under the same package name, as if it was just the next version
of GNOME. What it actually was, was a turd to the face directly out
of the asses of the FreeDesktop-influenced GNOME developers. It was
completely static, with no customizability at all. They promised to add
customizability back later, but GNOME 3 was so intolerable, that I had
to find an alternative. ANY alternative. I tried KDE, but it had gotten
a shitty rewrite, just like GNOME, and had become just as intolerable
as GNOME. So I switched to XFCE for years, even though it was inferior
to GNOME and KDE as I previously knew them, until I finally noticed that
MATE was available on Debian (for now-- I assume it will get removed
at some point, or it will come to suck just as much as GNOME).
And the reasoning behind the GNOME rewrite was about as anti-user as
it's possible to be: The FreeDesktop faggots had decided that desktop
PCs were obsolete, and that we had to march towards the brave new
future, in which we'd trade our desktop machines for tablets and
fucking phones. Microsoft had the same idea, and released Windows 8
the following year, which had a bunch of stupid features that were
specifically for mobile toys. They'd have taken our desktop computers
by force if they had the power to do so. They have more power today
than they did back then, so we might see a revival of the whole
"desktops are obsolete" idea in the next decade or so.
I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
but people still praise it for some fucked-up reason. I assume the same
thing is going on with KDE. I'm more likely to try CDE now that it's
open-source, than KDE.
Just run WindowMaker with the OneStepBack or TwoStepsBack
GTK2-4 themes and the GNUstep icon theme for XDG.
Lxappearance will allow you to set your GTK theme/icons/fonts
with ease so it matches the WM one.
Then use qtconfig to tell QT5 to use a GTK theme. THere's
a qgnomestyle or similarly called one.
John Levine <johnl@taugh.com> wrote:
According to Theo <theom+news@chiark.greenend.org.uk>:
Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.
Uh, what? Removable disk packs date from about 1960.
The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
the 1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?
Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).
FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.
Theo
On 2025-07-06, anthk <anthk@openbsd.home> wrote:
On 2024-08-27, Sebastian <sebastian@here.com.invalid> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
“Unix-like” tends to mean “Linux-like” these days, let’s face it.
Linux leads, the other *nixes follow.
I hope not. Linux gets shittier with each turd that drops from the
FreeDesktop people.
Like I said, if you don’t like Linux distros infested with
FreeDesktop-isms, don’t use them. There’s no need to bring up all
this bile: all it’s doing is aggravating your ulcers. Open Source is
all about choice.
The choices are drying up. Increasingly, decisions are made by distros
instead of users, and you only have a choice if there are any distros
left that haven't caved or collapsed, or if you have the time, money,
and charisma to create AND MAINTAIN a new distro. That used to not be
necessary simply to have a choice. It used to be sufficient to
install a decent distro. The main distros used to let you have far
more choice than they do today.
Why do you hate the Free Desktop folks? They are at the forefront
of trying to modernize the *nix GUI.
The Linux GUI had no need of such modernization, especially since
all "modernization" really is, is Windowsization ...
Actually, it’s not. Linux GUIs very much go their own way; there are
ones that copy Windows and even Apple, it is true, but that’s just to
appeal to those who prefer that look.
Systemd copies Windows and Apple at a lower level, and it continues to
be forced on the Linux community from every direction. I don't even
think Devuan will be able to resist the pressure to run Systemd for
much longer. And every distro is adopting iproute2, the main effect of
which is to make Linux networking skills less transferrable to BSD
(basically vendor-lock).
There are others that go in quite different directions. The
customizability of KDE Plasma, for example, goes beyond anything
you’ll find in any proprietary platform.
And the beauty of Linux is, you can install any number of these GUI
environments at once, and switching between them is as easy as
logging out and logging in again. You don’t even have to reboot.
Linux was more customizable in the past, and Wayland makes the problem
worse because there will always be only a few compositors, due to them
having to be so complicated. Plus, we are now seeing with the Hyprland
fiasco that distros will remove good compositors from their package
management system if their managers perceive any of the authors of
that compositor to have committed a thoughtcrime.
I used to run GNOME, and then GNOME 3 came out, and Debian released it
under the same package name, as if it was just the next version of
GNOME. What it actually was, was a turd to the face directly out of
the asses of the FreeDesktop-influenced GNOME developers. It was
completely static, with no customizability at all. They promised to
add customizability back later, but GNOME 3 was so intolerable, that I
had to find an alternative. ANY alternative. I tried KDE, but it had
gotten a shitty rewrite, just like GNOME, and had become just as
intolerable as GNOME. So I switched to XFCE for years, even though it
was inferior to GNOME and KDE as I previously knew them, until I
finally noticed that MATE was available on Debian (for now-- I assume
it will get removed at some point, or it will come to suck just as
much as GNOME).
And the reasoning behind the GNOME rewrite was about as anti-user as
it's possible to be: The FreeDesktop faggots had decided that desktop
PCs were obsolete, and that we had to march towards the brave new
future, in which we'd trade our desktop machines for tablets and
fucking phones. Microsoft had the same idea, and released Windows 8
the following year, which had a bunch of stupid features that were
specifically for mobile toys. They'd have taken our desktop computers
by force if they had the power to do so. They have more power today
than they did back then, so we might see a revival of the whole
"desktops are obsolete" idea in the next decade or so.
I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
but people still praise it for some fucked-up reason. I assume the
same thing is going on with KDE. I'm more likely to try CDE now that
it's open-source, than KDE.
Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
and the GNUstep icon theme for XDG.
Lxappearance will allow you to set your GTK theme/icons/fonts with ease
so it matches the WM one.
Then use qtconfig to tell QT5 to use a GTK theme. THere's a qgnomestyle
or similarly called one.
Thanks for the two about TwoStepsBack. I quite like the OneStepBack aesthetic. The older widget style still appeals to me more.
On 06 Jul 2025 12:43:29 +0100 (BST), Theo wrote:
If you took it to a little endian machine all the bytes were the
wrong way around. This was because there was no model in which hard
drives would move between machines so they just dumped in-memory
structs to disc.
But they had removable disk packs in those days.
There was already a battle between bit 0 on the left or right in
1950s mainframes.
Endian-ness didn’t really matter before byte-addressability came along, though.
John Levine <johnl@taugh.com> wrote:
According to Theo <theom+news@chiark.greenend.org.uk>:
Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.
Uh, what? Removable disk packs date from about 1960.
The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
the 1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?
Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).
FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.
On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
There was already a battle between bit 0 on the left or right in
1950s mainframes.
Endian-ness didn’t really matter before byte-addressability came
along, though.
Which means it's IBM's fault.
On Sun, 6 Jul 2025 20:15:08 -0700, Al Kossow wrote:
I'm not 100% sure, but I think this company, hardly more than a footnote
in computer history, was the cause of little-endian processors.
Guess again
Try the DEC PDP-11 (1969)
There was already a battle between bit 0 on the left or right in 1950s
mainframes.
True, but that didn't lead to the 8008 which led to the...
https://en.wikipedia.org/wiki/Intel_8008
I should have been more explicit and said x64 processors. I've always been amused at how we got to where we are now.
Ironically Motorola certainly studied the PDP-11 closely but the 68000
wound up big-endian.
I never dug too deeply into the PDP-11 when I ran on one in the early
'80s. It was running some *nix OS that had fallen off the back of a truck
on Memorial Avenue.
John Levine <johnl@taugh.com> wrote:
According to Theo <theom+news@chiark.greenend.org.uk>:
Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.
Uh, what? Removable disk packs date from about 1960.
The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
the 1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?
Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).
FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.
Theo
On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
There was already a battle between bit 0 on the left or right in
1950s mainframes.
Endian-ness didn’t really matter before byte-addressability came
along, though.
...although bit ordering *can* make a difference in serial transmission (which end do you send first?) and bit-addressed instructions (where present.)
...although bit ordering *can* make a difference in serial
transmission (which end do you send first?) and bit-addressed
instructions (where present.)
This drove me nuts. I may have this wrong because it's 45+ years ago,
but I think BTAM received data LSB first, and I had to translate, or
else the documentation showed the characters LSB first, and I had to mentally translate all the doc.
Theo <theom+news@chiark.greenend.org.uk> wrote:
John Levine <johnl@taugh.com> wrote:
According to Theo <theom+news@chiark.greenend.org.uk>:
Removable disc packs mostly came later I think (although I wasn't aware the
44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.
Uh, what? Removable disk packs date from about 1960.
The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data
stored there, which is when format standardisation became relevant. In
the 1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?
Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across
multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).
Once Linux appeared I used it to occasionally read data from discs taken
from other machines, like proprietary Unices. Yes, early HDDs
used controller-specific formatting, so there was probably no chance
of reading them on a machine with a different controller. But SCSI and
IDE discs could be swapped between widely different machines.
We also had a PDP-10 which also used the same RP02 disks. I think I
once experimented with trying to write a PDP-11 formatted disk on the
-10, reading the file system from tape. It was rather exciting since
the 36 bit PDP-10 mapped its words into the disk's 8 bit bytes in
non-obvious ways.
It's perfectly obvious, since the PDP-10 operating systems write 128 word blocks
at all times (even TOPS-20, which simply reads/writes 4 such blocks for each 512 word page in the data stream).
1 sector = 128 words * 36 bits = 64 * 72 bits = 576 * 8 bits
Easy-peasy.
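(A sketch of that arithmetic in C, assuming the straightforward big-endian packing in which nine consecutive 8-bit bytes carry two 36-bit words; the real DEC media formats had their own wrinkles, which is where the "non-obvious" part came in.)

    #include <stdint.h>
    #include <stdio.h>

    /* 128 words * 36 bits = 4608 bits = 576 bytes, so every 9 bytes hold two words. */
    static void unpack_pair(const uint8_t b[9], uint64_t *w0, uint64_t *w1)
    {
        *w0 = ((uint64_t)b[0] << 28) | ((uint64_t)b[1] << 20) |
              ((uint64_t)b[2] << 12) | ((uint64_t)b[3] <<  4) |
              ((uint64_t)b[4] >>  4);                      /* first 36-bit word  */
        *w1 = (((uint64_t)b[4] & 0x0F) << 32) | ((uint64_t)b[5] << 24) |
              ((uint64_t)b[6] << 16)          | ((uint64_t)b[7] <<  8) |
               (uint64_t)b[8];                             /* second 36-bit word */
    }

    int main(void)
    {
        uint8_t  sector[576] = {0};     /* one 128-word block */
        uint64_t w0, w1;
        unpack_pair(sector, &w0, &w1);
        printf("%d words -> %d bytes per block\n", 128, 128 * 36 / 8);
        return 0;
    }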
Motorola studied the PDP-11 and fixed the things DEC got wrong.
Then there's the 'PDP-endian' quirk.
The issue under discussion was taking a removable pack from one vendor
and plugging it into a different vendor's machine in order to read the
data stored there ...
Endianness mattered for character/digit addressable machines.
On 7/7/25 08:29, John Ames wrote:
...although bit ordering *can* make a difference in serial transmission
(which end do you send first?) ...
This drove me nuts. I may have this wrong because it's 45+ years ago,
but I think BTAM received data LSB first, and I had to translate, or
else the documentation showed the characters LSB first, and I had to
mentally translate all the doc.
I can understand endianness issues cropping up when you have to split a
word into independently-addressable chunks, but the fact that bit-ordering
was ever even a question remains bonkers to me, when basic math provides
what *should've* been a straightforward universal standard: 2^0 = 1, so
bit 0 is the 1s place.
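(A small C illustration of the two numbering conventions, assuming a 16-bit word: LSB-0 is the scheme where bit 0 really is the 1s place; MSB-0, as in the S/360 manuals, counts from the other end.)

    #include <stdint.h>
    #include <stdio.h>

    /* LSB-0: bit n has weight 2^n, so bit 0 is the 1s place. */
    static int bit_lsb0(uint16_t word, int n) { return (word >> n) & 1; }

    /* MSB-0 (IBM S/360 manual style): bit 0 is the most significant bit. */
    static int bit_msb0(uint16_t word, int n) { return (word >> (15 - n)) & 1; }

    int main(void)
    {
        uint16_t w = 0x0001;
        printf("LSB-0 bit 0 = %d, MSB-0 bit 0 = %d\n", bit_lsb0(w, 0), bit_msb0(w, 0));
        /* prints: LSB-0 bit 0 = 1, MSB-0 bit 0 = 0 */
        return 0;
    }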
On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:
The issue under discussion was taking a removable pack from one vendor
and plugging it into a different vendor's machine in order to read the
data stored there ...
No, just moving packs between different machines in the same computer
centre would have been enough.
Mon, 7 Jul 2025 13:43:32 +0100
David Wade <g4ugm@dave.invalid> wrote:
Unless you had an older Atari ST which formatted disks in such a way that
MSDOS wouldn't read them. I seem to remember it was one byte in the boot
sector the PC didn't like, and there were Atari programs to fix it...
And there was at least one DOS program.
ST2DOS, written by Arno Schaefer, version 1.0 is from '93.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:
The issue under discussion was taking a removable pack from one vendor
and plugging it into a different vendor's machine in order to read the
data stored there ...
No, just moving packs between different machines in the same computer
centre would have been enough.
Until a fool operator (like you, perhaps) moved a pack from a drive
with a head crash to three other drives before realizing that the
pack was bad, not the drives.
On Wed, 09 Jul 2025 04:29:09 GMT, Charlie Gibbs wrote:
You couldn't do a trick like that with the Amiga. It read and wrote an
entire track at a time, which enabled it to shorten the inter-record
gaps to the point where it could store 11 sectors per track instead of
9.
This allowed the Amiga to store 880K on what was normally a 720K floppy -
but the result could not be read except with another Amiga or a custom
controller.
CP/M topped out for craziness. Most systems used the Western Digital
FD17xx floppy controllers but the controller could be programmed for different track/sector schemes and encoding. I had a utility that could
read 11 different formats, IIRC. That's leaving out the hard sector types
that survived from the 8" days.
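(The arithmetic behind the 720K/880K figures quoted above, assuming the usual double-density 3.5-inch geometry of 80 cylinders, two sides and 512-byte sectors.)

    #include <stdio.h>

    int main(void)
    {
        long pc    = 80L * 2 *  9 * 512;   /* 9 sectors/track                         */
        long amiga = 80L * 2 * 11 * 512;   /* 11 sectors/track via whole-track writes */
        printf("PC/MS-DOS: %ld bytes (%ld K)\n", pc,    pc    / 1024);  /* 720 K */
        printf("Amiga:     %ld bytes (%ld K)\n", amiga, amiga / 1024);  /* 880 K */
        return 0;
    }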
On 2025-07-08, Scott Lurndal <scott@slp53.sl.home> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:
The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there ...
No, just moving packs between different machines in the same computer
centre would have been enough.
Until a fool operator (like you, perhaps) moved a pack from a drive
with a head crash to three other drives before realizing that the
pack was bad, not the drives.
But by then, the drives were bad too. :-(
Don't forget the ACT Sirius. A DOS machine, that crammed more data onto a diskette by using a variable speed drive (5 speeds, I think).
In article <md6n3pFgaflU8@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
Don't forget the ACT Sirius. A DOS machine, that crammed more data onto a diskette by using a variable speed drive (5 speeds, I think).
Apple used the same trick with its 3.5" floppy drives to fit 800K onto a
disk that was only good for 720K elsewhere.
On 7/7/25 08:29, John Ames wrote:
On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
There was already a battle between bit 0 on the left or right in
1950s mainframes.
Endian-ness didn’t really matter before byte-addressability came
along, though.
...although bit ordering *can* make a difference in serial transmission
(which end do you send first?) and bit-addressed instructions (where
present.)
This drove me nuts. I may have this wrong because it's 45+ years ago,
but I think BTAM received data LSB first, and I had to translate, or
else the documentation showed the characters LSB first, and I had to
mentally translate all the doc.
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear (with EBCDIC) and move later https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear
(with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines
were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC translate table.
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear >>> (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines
were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
translate table.
That's partly because they couldn't even settle on values for
certain EBCDIC characters - vertical bar, for instance.
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character. Making their mistakes impossible to recover from.
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character.
Making their mistakes impossible to recover from.
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
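(To make that concrete: an ordinary line of C, my own made-up example, and what the same bytes looked like on a terminal using the Swedish ISO 646 variant, where the national letters sit on the code points of the brackets, braces, bar and backslash.)

    #include <stdio.h>

    int main(void)
    {
        char buf[]   = "x";
        int  flags[] = { 1 }, total = 0, i = 0;

        if (buf[i] != '\n') { total |= flags[i]; }   /* as typed on a US-ASCII terminal */
     /* if (bufÄiÅ != 'Ön') ä total ö= flagsÄiÅ; å      the same bytes on a Swedish
                                                         ISO 646 display                */
        printf("%d\n", total);
        return 0;
    }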
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases,
but how would e.g. shell scripting be handled?
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character.
Making their mistakes impossible to recover from.
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like
system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases,
but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads
with that issue.
>    greater than              GT
<    less than                 LT
>=   greater than or equal to  GE
¬>   not greater than          NG
||   concatenation             CAT
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character.
Making their mistakes impossible to recover from.
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
7-bit ASCII never made much sense to me. Why didn't they go right to 8? 7-bit characters only would have made sense on a computer with a 14 or 28-bit word size.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous.
7-bit ASCII never made much sense to me. Why didn't they go right to 8?
Parity.
Niklas Karlsson <nikke.karlsson@gmail.com> writes:
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character. Making their mistakes impossible to recover from.
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
Ah, but there were always trigraphs. Sadly they weren't much prettier.
'??(' and '??)'.
SAS/C (C compiler written for IBM mainframes after Lattice C was
purchased by Sas in 1987) introduced 'di-graphs':
(| and |)
Looked a little nicer.
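(For anyone who never had to type them, a toy example of my own showing what the standard escape hatches looked like; this is plain ISO C with trigraphs and iso646.h, not SAS/C's spelling. Trigraphs are substituted everywhere, even inside string literals, usually have to be switched on with modern compilers (e.g. GCC's -trigraphs), and were finally removed in C23; iso646.h only covers operators, not brackets.)

    ??=include <stdio.h>      /* ??= is the trigraph for #         */
    ??=include <iso646.h>     /* provides and, or, not, bitor, ... */

    int main(void)
    ??<                                    /* ??< and ??> stand for { and } */
        int a??(3??) = ??< 1, 2, 3 ??>;    /* ??( and ??) stand for [ and ] */
        if (a??(0??) == 1 and a??(2??) == 3)
            printf("sum = %d??/n", a??(0??) + a??(2??));   /* ??/ is the backslash */
        return 0;
    ??>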
On Sun, 13 Jul 2025 16:06:49 +0000, Dennis Boone wrote:
7-bit ASCII never made much sense to me. Why didn't they go right to 8?
Parity.
Also, a larger character set would likely have meant more expensive
hardware to input/display it. Think of line printers with all their characters on those drums/chains.
Dot-matrix printers were more flexible, but there was still the
keyboard problem.
And another point: subsets of ASCII could be mapped back and forth with existing even more restricted character sets, like ones with only six bits per character.
On Sun, 13 Jul 2025 16:06:49 +0000, Dennis Boone wrote:
7-bit ASCII never made much sense to me. Why didn't they go right to 8?
Parity.
Also, a larger character set would likely have meant more expensive
hardware to input/display it. Think of line printers with all their characters on those drums/chains.
One shop I worked in had both 48- and 63-character bands for their
printer. They thought that they could mount a 63-character band
for jobs that needed it, while using a 48-character band for
everything else. The 48-character band allowed faster printing,
since the character subset passed the paper in a smaller fraction
of the time it took the band to make a complete revolution. As I
predicted, though, they soon realized that the time spent while the
operator changed bands (especially if he had just left for coffee
when a band change request came up) more than offset the time saved
by using the 48-character set - and that it was faster in the long
run to just leave the 63-character band in place all the time.
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like
system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases,
but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
On 7/14/25 02:40, Charlie Gibbs wrote:
...
One shop I worked in had both 48- and 63-character bands for their
printer. They thought that they could mount a 63-character band
for jobs that needed it, while using a 48-character band for
everything else. The 48-character band allowed faster printing,
since the character subset passed the paper in a smaller fraction
of the time it took the band to make a complete revolution. As I
predicted, though, they soon realized that the time spent while the
operator changed bands (especially if he had just left for coffee
when a band change request came up) more than offset the time saved
by using the 48-character set - and that it was faster in the long
run to just leave the 63-character band in place all the time.
In order for this to have a chance of working you'd have to establish different SYSOUT classes (print queues, or whatever) for jobs using
the 48-character set vs. 64-character set, and only change once a
shift or so, which would mean that the less-favored jobs would have to
wait.
If you had some huge job, say general ledger or inventory, that used
multiple boxes of paper and didn't need lower-case, you might want to
reserve a class for that and print it off-shift, and otherwise keep
the slower band in all the time.
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases,
but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had already accepted TRIGRAPHS. I found that totally unacceptable. It took
me a month or 2 to find a 3270 emulator that I could patch up to finally
be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
In the mainframe world, lower case was generally held in low regard. The
myth was that anything not in all caps didn't look appropriately
computerish. This myth survived for decades afterwards.
I read somewhere that, when AT&T engineers were designing the first
teletypes, they had room to include either uppercase letters or
lowercase, but not both. Executives decided that entire uppercase was
preferable to entire lowercase, solely because “god” seemed like a
less respectful way of writing the name (or was it occupation?) of
their favourite deity than “GOD”.
I have no idea if this story is credible or not ...
I can't fathom why trigraphs were considered an acceptable solution.
On 14/07/2025 21:36, Dan Espen wrote:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for long enough to figure out how to handle day-to-day usage of a UNIX-like
system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took
me a month or 2 to find a 3270 emulator that I could patch up to finally
be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.
Dan Espen <dan1espen@gmail.com> writes:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for long enough to figure out how to handle day-to-day usage of a UNIX-like system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took
me a month or 2 to find a 3270 emulator that I could patch up to finally
be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
Not many keypunches had a square bracket key. Granted, if one were
skilled on the keypunch, one can synthesize any hollerith sequence;
so assuming one knew how the hardware translated the hollerith into
EBCDIC (and the C compiler used the same EBCDIC character) they
could punch a square bracket, albeit rather painfully. trigraphs
were much more convenient.
On Mon, 14 Jul 2025 09:40:28 GMT, Charlie Gibbs wrote:
In the mainframe world, lower case was generally held in low regard. The
myth was that anything not in all caps didn't look appropriately
computerish. This myth survived for decades afterwards.
I read somewhere that, when AT&T engineers were designing the first teletypes, they had room to include either uppercase letters or lowercase, but not both. Executives decided that entire uppercase was preferable to entire lowercase, solely because “god” seemed like a less respectful way of writing the name (or was it occupation?) of their favourite deity than “GOD”.
I have no idea if this story is credible or not ...
When I discovered that the DEC systems (including language compilers) I
was using as an undergrad were case-insensitive, and that I could write Fortran code in lowercase or even mixed case if I wanted, some other
people did look at me a little strangely ...
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
Scott Lurndal <scott@slp53.sl.home> wrote:
Niklas Karlsson <nikke.karlsson@gmail.com> writes:
On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:
Or, famously, square brackets.
But what really p'd me off is that for characters they couldn't decide
on, they translated multiple different characters to the same character. Making their mistakes impossible to recover from.
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
Ah, but there were always trigraphs. Sadly they weren't much prettier.
'??(' and '??)'.
SAS/C (C compiler written for IBM mainframes after Lattice C was
purchased by Sas in 1987) introduced 'di-graphs':
(| and |)
Looked a little nicer.
On 7/14/25 14:02, David Wade wrote:
On 14/07/2025 21:36, Dan Espen wrote:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions
applied
for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like
system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took me a month or 2 to find a 3270 emulator that I could patch up to finally be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.
By golly, you're right. The 3278 APL keyboard had them. We used 3290s
with the APL keyboard; great piece of gear.
... I worked on coloured book software on IBM VM
On 7/14/25 14:14, Scott Lurndal wrote:
Dan Espen <dan1espen@gmail.com> writes:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for long enough to figure out how to handle day-to-day usage of a UNIX-like system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took me a month or 2 to find a 3270 emulator that I could patch up to finally be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
Not many keypunches had a square bracket key. Granted, if one were
skilled on the keypunch, one can synthesize any hollerith sequence;
so assuming one knew how the hardware translated the hollerith into
EBCDIC (and the C compiler used the same EBCDIC character) they
could punch a square bracket, albeit rather painfully. trigraphs
were much more convenient.
I got pretty good at multi-punching at one time in the long ago.
On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
But C avoided using meaningful names for that kind of thing.
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
FORTRAN is not C. Trigraphs worked perfectly well,
irrespective of your personal feelings. Ugly, perhaps,
but not as ugly as .OR.
On Mon, 14 Jul 2025 19:56:56 -0700, Peter Flass wrote:
On 7/14/25 14:02, David Wade wrote:
On 14/07/2025 21:36, Dan Espen wrote:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the >>>>>>>> modified 7-bit ASCII that was used to write Swedish before ISO >>>>>>>> 8859-1 and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted >>>>>>>> equipment,
making C code look absolutely ridiculous. Similar conventions
applied for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not >>>>>>> for long enough to figure out how to handle day-to-day usage of a >>>>>>> UNIX- like system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such
cases,
but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads with that issue.
"48-character set"
I got onto my first UNIX on mainframe project and all the developers
had already accepted TRIGRAPHS. I found that totally unacceptable. >>>> It took me a month or 2 to find a 3270 emulator that I could patch up
to finally be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found >>>> the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.
By golly, you're right. The 3278 APL keyboard had them. We used 3290s
with the APL keyboard; great piece of gear.
APL keyboards had many strange and wondrous characters... The IBM 5120 had
a selector switch for BASIC or APL and had the APL character set, iirc on the front of the keycaps.
On 7/14/25 22:59, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
But C avoided using meaningful names for that kind of thing.
Not meaningful with the dots.
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
Digraphs. They provide alternative spellings for the needed C tokens.
Trigraphs apply everywhere, including inside strings, and to lower the
chance of an accidental match they are deliberately obscure.
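To make the contrast concrete, here is a minimal C sketch (an illustration, not code from the thread; it assumes a compiler that still honours trigraphs, which GCC only does in strict ISO modes or with -trigraphs, and which were removed outright in C23):

/* Three workarounds for keyboards lacking [ ] { } \ | # :
 * trigraphs (??x), digraphs (<: :> <% %> %:), and iso646.h macros. */
#include <stdio.h>
#include <iso646.h>                 /* and, or, not_eq, bitor, ... (C95) */

int main(void)
{
    int a??(3??) = ??< 1, 2, 3 ??>;   /* trigraphs: int a[3] = { 1, 2, 3 }; */
    int b<:3:> = <% 4, 5, 6 %>;       /* digraphs:  int b[3] = { 4, 5, 6 }; */

    if (a<:0:> == 1 and b<:0:> not_eq 0)        /* iso646.h spellings */
        printf("%d\n", a??(1??) bitor b<:1:>);  /* prints a[1] | b[1] = 7 */

    /* The trigraph catch: substitution happens everywhere, so the string
     * "What??!" silently becomes "What|".  Digraphs are only recognised
     * as tokens, never inside string literals. */
    return 0;
}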
On Tue, 15 Jul 2025 07:21:12 -0700, Peter Flass wrote:
On 7/14/25 22:59, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
But C avoided using meaningful names for that kind of thing.
Not meaningful with the dots.
You think you can’t tell that “.OR.” came from “or”, and “.AND.” from
“and”?
On 7/15/25 20:59, Lawrence D'Oliveiro wrote:
On Tue, 15 Jul 2025 07:21:12 -0700, Peter Flass wrote:
On 7/14/25 22:59, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:
On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:
I can't fathom why trigraphs were considered an acceptable
solution.
What would have been better?
FORTRAN used .OR., .AND., etc.
But C avoided using meaningful names for that kind of thing.
Not meaningful with the dots.
You think you can’t tell that “.OR.” came from “or”, and “.AND.” from
“and”?
Of course. What I meant was "not otherwise significant to the parser,"
so not confusable with anything else.
On 14/07/2025 21:36, Dan Espen wrote:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the >>>>>> modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>>> and later Unicode made it big.
"} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>>> for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>> long enough to figure out how to handle day-to-day usage of a UNIX-like >>>>> system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>> but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads
with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took
me a month or 2 to find a 3270 emulator that I could patch up to finally
be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
David Wade <g4ugm@dave.invalid> writes:
On 14/07/2025 21:36, Dan Espen wrote:
Peter Flass <Peter@Iron-Spring.com> writes:
On 7/13/25 07:18, Niklas Karlsson wrote:
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the >>>>>>> modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>>>> and later Unicode made it big.
"} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>>>> for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>>> long enough to figure out how to handle day-to-day usage of a UNIX-like >>>>>> system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>>> but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads
with that issue.
That's why C had trigraphs. PL/I(F) did the same thing with its
"48-character set"
I got onto my first UNIX on mainframe project and all the developers had
already accepted TRIGRAPHS. I found that totally unacceptable. It took
me a month or 2 to find a 3270 emulator that I could patch up to finally
be able to see and type square brackets.
To IBM's credit I used IBM's internally used 3270 emulator (MITE I
believe) with some patches I came up with. I dumped the binary, found
the translate table and fixed it.
I can't fathom why trigraphs were considered an acceptable solution.
On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
On a real 3178 there are no [] characters so you either lose some
other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear
(with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines
were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC translate table.
On Mon, 14 Jul 2025 09:40:28 GMT, Charlie Gibbs wrote:
In the mainframe world, lower case was generally held in low regard. The
myth was that anything not in all caps didn't look appropriately
computerish. This myth survived for decades afterwards.
I read somewhere that, when AT&T engineers were designing the first teletypes, they had room to include either uppercase letters or lowercase, but not both. Executives decided that entire uppercase was preferable to entire lowercase, solely because “god” seemed like a less respectful way of writing the name (or was it occupation?) of their favourite deity than “GOD”.
I have no idea if this story is credible or not ...
On Mon, 7 Jul 2025 16:10:25 -0000 (UTC), Waldek Hebisch wrote:
Endianness matters for character/digit addressable machines.
I thought such machines always stored the digits in order of ascending significance, because it didn’t make sense to do it the other way.
Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear >>> (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines
were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
translate table.
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables
on output and input. This could require extra space in case of
ASCII peripherials. But normal 1401 memory size were decimal, so
lower than corresponding binary numbers. And actual core had extra
space for use by microcode. So it does not look like a big problem.
It is hard to say what technical problems with ASCII were.
BCD gear used properties of BCD, so rewiring it for ASCII
could require some effort. But it does not look like a
big effort. So they probably could announce ASCII before
I/O equipement was fully ready (after all, they announced
before they had working systems and did not ship some
of what was announced).
Instead of adding a high order bit to the 7-bit code, IBM wanted to
put the extra bit in position 5 (counting from the right), thus
splitting the defined and undefined characters into "stripes" in the
table. I have no idea why they thought this was a good idea, but the
rest of the industry said FOAD, and the rest, as is said, is history.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 7 Jul 2025 16:10:25 -0000 (UTC), Waldek Hebisch wrote:
Endianness matters for character/digit addressable machines.
I thought such machines always stored the digits in order of ascending
significance, because it didn’t make sense to do it the other way.
I think that bit/digit serial machines did arithmetic starting from the
lowest digit. But early computer equipment needed to cooperate with
punched card equipment, that is, accept a mixture of character and
numeric data written in English writing order.
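A toy sketch of that point (mine, not a description of any particular machine): digit-serial addition has to start at the units digit so the carry can ripple upward, which is the natural order for a serial arithmetic unit but the reverse of the order in which the digits arrive from a card.

#include <stdio.h>

/* Add two decimal numbers held as digit arrays, units digit first. */
static void add_digits(const int *a, const int *b, int *sum, int ndigits)
{
    int carry = 0;
    for (int i = 0; i < ndigits; i++) {   /* start at the low-order digit */
        int d = a[i] + b[i] + carry;
        sum[i] = d % 10;
        carry  = d / 10;
    }
}

int main(void)
{
    int a[4] = { 9, 9, 4, 1 };            /* 1499, least significant first */
    int b[4] = { 2, 0, 5, 0 };            /*  502 */
    int s[4];
    add_digits(a, b, s, 4);
    printf("%d%d%d%d\n", s[3], s[2], s[1], s[0]);   /* prints 2001 */
    return 0;
}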
In addition to any technical problem, there was the political problem created by IBM's version of 8-bit ASCII vs. the rest of the industry's version.
Instead of adding a high order bit to the 7-bit code, IBM wanted to put the extra bit in position 5 (counting from the right), thus splitting the defined and undefined characters into "stripes" in the table. I have no idea why they
thought this was a good idea, but the rest of the industry said FOAD, and the rest, as is said, is history.
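A hedged sketch of what that does to the code table, reading "position 5 counting from the right" as a new bit wedged above the low four bits of each 7-bit code (my interpretation for illustration, not a quote from the standards documents):

#include <stdio.h>

/* Two ways to widen a 7-bit code to 8 bits. */
static unsigned high_bit(unsigned c7)      /* the way the industry went   */
{
    return c7;                             /* 0x00-0x7F stay contiguous   */
}

static unsigned inserted_bit(unsigned c7)  /* the "striped" IBM proposal  */
{
    unsigned low  = c7 & 0x0F;             /* bits 1-4 unchanged          */
    unsigned high = (c7 & 0x70) << 1;      /* bits 5-7 move up one place  */
    return high | low;                     /* bit 5 is left as the gap    */
}

int main(void)
{
    /* Every 16-code run of the old table is now followed by a 16-code
     * hole, i.e. the defined characters fall into "stripes". */
    for (unsigned c = 'A'; c <= 'E'; c++)
        printf("%c: contiguous 0x%02X  striped 0x%02X\n",
               (int)c, high_bit(c), inserted_bit(c));
    return 0;
}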
scott@alfter.diespammersdie.us (Scott Alfter) writes:
In article <md6n3pFgaflU8@mid.individual.net>,
Bob Eager <news0009@eager.cx> wrote:
Don't forget the ACT Sirius. A DOS machine that crammed more data onto a diskette by using a variable speed drive (5 speeds, I think).
Apple used the same trick with its 3.5" floppy drives to fit 800K onto a
disk that was only good for 720K elsewhere.
And before the 800K floppy, there was the single-sided 400K floppy on the same
controller.
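The arithmetic behind those figures, assuming the commonly cited layout of Apple's GCR format (80 tracks per side, five 16-track speed zones holding 12, 11, 10, 9 and 8 sectors of 512 bytes) against a constant-speed layout of 9 sectors on every track:

#include <stdio.h>

int main(void)
{
    const int zone_sectors[5] = { 12, 11, 10, 9, 8 };
    long variable = 0;
    for (int z = 0; z < 5; z++)
        variable += 16L * zone_sectors[z] * 512;    /* per side */

    long constant = 80L * 9 * 512;                  /* per side */

    printf("variable speed: %ldK/side, %ldK both sides\n",
           variable / 1024, 2 * variable / 1024);   /* 400K, 800K */
    printf("constant speed: %ldK/side, %ldK both sides\n",
           constant / 1024, 2 * constant / 1024);   /* 360K, 720K */
    return 0;
}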
Also, lower case letter shapes are more complicated, so upper case
is more robust to low quality print ...
ASCII not, what your machine can do for you. -- IBM
Aye, I really like the internal 400k floppy on my 128k Mac because
you can hear the drive speeding up and slowing down depending on
which region is being read.
On Fri, 18 Jul 2025 23:23:12 GMT, Charlie Gibbs wrote:
ASCII not, what your machine can do for you. -- IBM
... “ASCII what you can do for your machine”.
Sums up IBM equipment (and software) in a nutshell.
On a real 3178 there are no [] characters so you either lose some
other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Not unless you paid a lot of money. In those times every mod was an
expensive extra, even if it was a link of wire..
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
I think you were late on the scene. I started on 2260's which date
from 1964. The IBM PC wasn't released until 1981, some 17 years
later. 3270 emulation didn't happen until I think a couple of years
later, so almost 20 years after the first terminals. Yes they quickly replaced terminals once they were available, but they were around for
a long time...
On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables
on output and input. This could require extra space in case of ASCII
peripherials. But normal 1401 memory size were decimal, so lower than
corresponding binary numbers. And actual core had extra space for use
by microcode. So it does not look like a big problem.
I worked on a mainframe that supported both ASCII and EBCDIC. There was a mode bit which selected which it would use.
The difference was conversion from decimal nibbles to normal bytes, in
that different zone bits were used.
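A hedged illustration of the zone-bit point (a sketch of mine, not the real UNPK instruction, whose sign-nibble handling is omitted): unpacking packed decimal means attaching a zone to every digit nibble, and the zone differed between the two codes, 0xF for EBCDIC digits (F0-F9) versus 0x5 for the digits of IBM's 8-bit ASCII variant (50-59); the generated sign codes differed in a similar way.

#include <stdio.h>

/* Unpack digit nibbles into zoned characters using the given zone. */
static void unpack_digits(const unsigned char *packed, int nbytes,
                          unsigned char *zoned, unsigned char zone)
{
    for (int i = 0; i < nbytes; i++) {
        zoned[2 * i]     = (unsigned char)((zone << 4) | (packed[i] >> 4));
        zoned[2 * i + 1] = (unsigned char)((zone << 4) | (packed[i] & 0x0F));
    }
}

int main(void)
{
    unsigned char packed[2] = { 0x12, 0x34 };   /* the digits 1 2 3 4 */
    unsigned char zoned[4];

    unpack_digits(packed, 2, zoned, 0xF);       /* EBCDIC zone        */
    printf("EBCDIC : %02X %02X %02X %02X\n",    /* F1 F2 F3 F4        */
           zoned[0], zoned[1], zoned[2], zoned[3]);

    unpack_digits(packed, 2, zoned, 0x5);       /* the "ASCII" zone   */
    printf("ASCII-8: %02X %02X %02X %02X\n",    /* 51 52 53 54        */
           zoned[0], zoned[1], zoned[2], zoned[3]);
    return 0;
}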
Bob Eager <news0009@eager.cx> writes:
On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables
on output and input. This could require extra space in case of ASCII
peripherials. But normal 1401 memory size were decimal, so lower than
corresponding binary numbers. And actual core had extra space for use
by microcode. So it does not look like a big problem.
I worked on a mainframe that supported both ASCII and EBCDIC. There was a
mode bit which selected which it would use.
The difference was conversion from decimal nibbles to normal bytes, in
that different zone bits were used.
Every 360 had an ASCII bit. That bit took quite a while to disappear
from the PSW. Never saw anyone attempt to turn it on.
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record
gear weren't ready ... so were going to start shipping with old BCD gear >>>> (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines
were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
translate table.
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables
on output and input. This could require extra space in case of
ASCII peripherials. But normal 1401 memory size were decimal, so
lower than corresponding binary numbers. And actual core had extra
space for use by microcode. So it does not look like a big problem.
Can't make much sense of the above.
14xx programs in emulation, by definition had to use BCD.
ASCII had a different collating sequence. It's not a translation issue.
On 7/19/25 12:28, Dan Espen wrote:
Bob Eager <news0009@eager.cx> writes:
On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables >>>> on output and input. This could require extra space in case of ASCII
peripherials. But normal 1401 memory size were decimal, so lower than >>>> corresponding binary numbers. And actual core had extra space for use >>>> by microcode. So it does not look like a big problem.
I worked on a mainframe that supported both ASCII and EBCDIC. There was a >>> mode bit which selected which it would use.
The difference was conversion from decimal nibbles to normal bytes, in
that different zone bits were used.
Every 360 had a ASCII bit. That bit took quite a while to disappear
from the PSW. Never saw anyone attempt to turn it on.
It never did anything. Its only defined effect was to change the signs
generated for packed-decimal data. I don't know what IBM was thinking.
David Wade <g4ugm@dave.invalid> writes:
On a real 3178 there are no [] characters so you either lose some
other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Not unless you paid a lot of money. In those times every mod was an expensive extra, even if it was a link of wire..
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
I think you were late on the scene. I started on 2260's which date
from 1964. The IBM PC wasn't released until 1981, some 17 years
later. 3270 emulation didn't happen until I think a couple of years
later, so almost 20 years after the first terminals. Yes they quickly replaced terminals once they were available, but they were around for
a long time...
Me, late on the scene?
I started programming in 1964 on IBM 14xx in Autocoder.
Did my first 2260 project using BTAM and assembler in 1968.
One of my favorite 327xs were the 3279 color terminals. Great keyboards
on those things. Looking back there was the punched card era, the 3270
era, then the 327x emulator era. I think I put in more years in
emulator era than the real terminal era.
--
Dan Espen
On Sat, 19 Jul 2025 15:16:03 -0400
Dan Espen <dan1espen@gmail.com> wrote:
David Wade <g4ugm@dave.invalid> writes:
On a real 3178 there are no [] characters so you either lose some
other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Not unless you paid a lot of money. In those times every mod was an
expensive extra, even if it was a link of wire..
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
I think you were late on the scene. I started on 2260's which date
from 1964. The IBM PC wasn't released until 1981, some 17 years
later. 3270 emulation didn't happen until I think a couple of years
later, so almost 20 years after the first terminals. Yes they quickly
replaced terminals once they were available, but they were around for
a long time...
Me, late on the scene?
I started programming in 1964 on IBM 14xx in Autocoder.
Did my first 2260 project using BTAM and assembler in 1968.
One of my favorite 327xs were the 3279 color terminals. Great keyboards
on those things. Looking back there was the punched card era, the 3270
era, then the 327x emulator era. I think I put in more years in
emulator era than the real terminal era.
Yeahbut I'd have to book the colour terminal way in advance - anyhow
green on black is more restful to the eyes. I missed out on autocoder,
being a mere stripling.
Dan Espen <dan1espen@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record >>>>> gear weren't ready ... so were going to start shipping with old BCD gear >>>>> (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines >>>> were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC >>>> translate table.
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables
on output and input. This could require extra space in case of
ASCII peripherials. But normal 1401 memory size were decimal, so
lower than corresponding binary numbers. And actual core had extra
space for use by microcode. So it does not look like a big problem.
Can't make much sense of the above.
14xx programs in emulation, by definition had to use BCD.
Yes. And using ASCII in 360 OS-es has nothing to do with the above.
ASCII had a different collating sequence. It's not a translation issue.
Internally the emulator works in BCD. The only problem is to correctly
emulate I/O when working with ASCII peripherals. That is solved
by using a translation table (so that the BCD code from the emulator gives
the correct glyph on the printer, etc).
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record >>>>>> gear weren't ready ... so were going to start shipping with old BCD gear >>>>>> (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines >>>>> were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC >>>>> translate table.
Emulation would work without any change, CPU and almost all microcode
would be the same. IIUC what would differ would be translation tables >>>> on output and input. This could require extra space in case of
ASCII peripherials. But normal 1401 memory size were decimal, so
lower than corresponding binary numbers. And actual core had extra
space for use by microcode. So it does not look like a big problem.
Can't make much sense of the above.
14xx programs in emulation, by definition had to use BCD.
Yes. And using ASCII in 360 OS-es have nothing to do with the
above.
ASCII had a different collating sequence. It's not a translation issue.
Internally emulator works in BCD. The only problem is to correctly
emulate I/O when working with ASCII periperials. That is solved
by using translation table (so that BCD code from emulator gives
correct glyph on the printer, etc).
If printing is all your app does.
Cards are Hollerith. A close cousin of BCD.
The app would expect any card master file to be in BCD order.
Tapes and disk have the same issue.
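Both halves of that exchange fit in a few lines of C (a sketch; the table entries are illustrative, not a real device table). A 256-byte translate table, applied byte by byte the way the 360's TR instruction does, gets the glyphs right, but it cannot repair ordering: EBCDIC collates letters below digits ('A' is 0xC1, '1' is 0xF1) while ASCII collates digits below letters ('1' is 0x31, 'A' is 0x41), so a file sorted under one code is out of sequence under the other.

#include <stdio.h>

/* Apply a 256-entry translate table byte by byte. */
static void translate(unsigned char *buf, int len,
                      const unsigned char table[256])
{
    for (int i = 0; i < len; i++)
        buf[i] = table[buf[i]];
}

int main(void)
{
    unsigned char ebcdic_to_ascii[256] = { 0 };
    ebcdic_to_ascii[0xC1] = 'A';
    ebcdic_to_ascii[0xC2] = 'B';
    ebcdic_to_ascii[0xF1] = '1';

    /* One-byte records in ascending EBCDIC order: A (C1), B (C2), 1 (F1). */
    unsigned char recs[3] = { 0xC1, 0xC2, 0xF1 };

    translate(recs, 3, ebcdic_to_ascii);
    printf("%c %c %c\n", recs[0], recs[1], recs[2]);   /* glyphs ok: A B 1 */

    /* ...but 0x31 < 0x41 in ASCII, so the translated file is no longer
     * in ascending order; any program relying on the sequence breaks. */
    return 0;
}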
"Kerr-Mudd, John" <admin@127.0.0.1> writes:
On Sat, 19 Jul 2025 15:16:03 -0400
Dan Espen <dan1espen@gmail.com> wrote:
David Wade <g4ugm@dave.invalid> writes:
On a real 3178 there are no [] characters so you either lose some
other characters, or use tri-graphs.
Did the 3178 come with an APL feature?
Not unless you paid a lot of money. In those times every mod was an
expensive extra, even if it was a link of wire..
Real terminals went away pretty quickly.
The project I was on was using emulators except for some of us with
3290s.
I think you were late on the scene. I started on 2260's which date
from 1964. The IBM PC wasn't released until 1981, some 17 years
later. 3270 emulation didn't happen until I think a couple of years
later, so almost 20 years after the first terminals. Yes they quickly
replaced terminals once they were available, but they were around for
a long time...
Me, late on the scene?
I started programming in 1964 on IBM 14xx in Autocoder.
Did my first 2260 project using BTAM and assembler in 1968.
One of my favorite 327xs were the 3279 color terminals. Great keyboards >> on those things. Looking back there was the punched card era, the 3270
era, then the 327x emulator era. I think I put in more years in
emulator era than the real terminal era.
Yeahbut I'd have to book the colour terminal way in advance - anyhow
green on black is more restful to the eyes. I missed out on autocoder, being a mere stripling.
One of my favorite pastimes was redoing IBM's default 4-color scheme
for their ISPF screens. A 3279 was a 7-color terminal with reverse
image and underlining. It's amazing how much better you can make a
screen look with a little artistic skill.
At Bell Labs I had the 3279 on my desk for a year or so.
A short-term works colleague who was planning on doing-up^wrebuilding a cottage in mid-Wales for the quiet country life translated the ISPF
panels into Welsh.
On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2025-07-13, Niklas Karlsson wrote:
Not EBCDIC, but your mention of square brackets reminded me of the
modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
and later Unicode made it big.
"} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
making C code look absolutely ridiculous. Similar conventions applied
for the other Nordic languages and German.
I played with ISO-646-FI/SE once in a Televideo terminal, but not for
long enough to figure out how to handle day-to-day usage of a UNIX-like
system without these characters.
I (barely) know C has (had?) syntax and also iso646.h for such cases,
but how would e.g. shell scripting be handled?
Couldn't say. I came in a little too late to really have to butt heads
with that issue.
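For reference, the remapping being described, laid out as a small table (the pairing is the one given above, which matches ISO-646-SE; the letters are written here as UTF-8 strings purely so they display, on the real gear they were the very same single bytes):

#include <stdio.h>

struct remap { unsigned char code; const char *ascii; const char *swedish; };

static const struct remap iso646_se[] = {
    { 0x5B, "[",  "Ä" },
    { 0x5C, "\\", "Ö" },
    { 0x5D, "]",  "Å" },
    { 0x7B, "{",  "ä" },
    { 0x7C, "|",  "ö" },
    { 0x7D, "}",  "å" },
};

int main(void)
{
    /* So   if (x[i] || y) { ... }   renders on an ISO-646-SE terminal
     * roughly as   if (xÄiÅ öö y) ä ... å   */
    for (int i = 0; i < 6; i++)
        printf("0x%02X  ASCII %-2s  SE %s\n",
               iso646_se[i].code, iso646_se[i].ascii, iso646_se[i].swedish);
    return 0;
}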
Dan Espen <dan1espen@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Dan Espen <dan1espen@gmail.com> wrote:
Lynn Wheeler <lynn@garlic.com> writes:
other trivia: account about biggest computer "goof" ever, 360s
originally were going to be ASCII machines, but the ASCII unit record >>>>>>> gear weren't ready ... so were going to start shipping with old BCD gear
(with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I don't know what dreams they were having within IBM but those machines >>>>>> were never going to be ASCII. It would be pretty hard to do 14xx
emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC >>>>>> translate table.
Emulation would work without any change, CPU and almost all microcode >>>>> would be the same. IIUC what would differ would be translation tables >>>>> on output and input. This could require extra space in case of
ASCII peripherials. But normal 1401 memory size were decimal, so
lower than corresponding binary numbers. And actual core had extra
space for use by microcode. So it does not look like a big problem.
Can't make much sense of the above.
14xx programs in emulation, by definition had to use BCD.
Yes. And using ASCII in 360 OS-es have nothing to do with the
above.
ASCII had a different collating sequence. It's not a translation issue.
Internally the emulator works in BCD. The only problem is to correctly
emulate I/O when working with ASCII periperials. That is solved
by using translation table (so that BCD code from emulator gives
correct glyph on the printer, etc).
If printing is all your app does.
Cards are Hollerith. A close cousin of BCD.
The app would expect any card master file to be in BCD order.
Yes, card reader and card punch also need translation tables.
That is why I wrote etc. above.
Tapes and disk have the same issue.
That is less clear: 1401 discs and tapes stored word marks which
made them incompatible with the usual 360 formats. And discs were
usually read on a system of the same type. So an extra translation
program (needed anyway due to word marks) could also handle the change
of character codes when transferring data between systems.
Clearly 1401 compatibility did not prevent the introduction of CKD
discs. And CKD means a different on-disk format than the 1401 disc.
On Mon, 21 Jul 2025 09:26:43 +0100, Kerr-Mudd, John wrote:
A short-term works colleague who was planning on doing-up^wrebuilding a cottage in mid-Wales for the quiet country life translated the ISPF
panels into Welsh.
For some reason, former Linux kernel developer Alan Cox immediately came
to mind ...