• Re: The Rise And Fall Of Unix

    From anthk@3:633/280.2 to All on Fri Jul 4 15:48:51 2025
    On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 4 Sep 2024 11:47:49 -0700, Peter Flass wrote:

    This still doesn’t answer the question of why Linux is relatively more
    popular compared to the BSDs. My impression is that BSD is considered to
    be for hackers and Linux is for people who just want to use the system.

    The BSDs date from the time when Unix systems were considered superior to anything else out there, whereas Linux grew up very much in the shadow of Microsoft.

    One example illustrating the difference in mindset, I think, is that the Linux kernel can read any kind of disk partition format -- DOS, Apple, whatever. Whereas the BSDs still want a disk to be formatted according to their own system of “slices”.

    Slices can lie under a PC partition.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Jul 4 17:29:34 2025
    On Fri, 4 Jul 2025 05:48:51 -0000 (UTC), anthk wrote:

    On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    One example illustrating the difference in mindset, I think, is that
    the Linux kernel can read any kind of disk partition format -- DOS,
    Apple, whatever. Whereas the BSDs still want a disk to be formatted
    according to their own system of “slices”.

    Slices can lie under a PC partition.

    And then there is the problem of the filesystems within those slices. On
    the BSDs, the traditional filesystem is called “UFS”, but what one BSD variant means by “UFS” is not quite the same as what another BSD variant does.

    The common Linux kernel shared across just about all distros supports
    common standard filesystems. This is one reason why “distro-hopping” is a common thing among Linux users, while any attempt to pull such an
    equivalent stunt between BSD variants is going to be fraught with
    pitfalls.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Fri Jul 4 23:07:11 2025
    On Fri, 04 Jul 2025 05:48:51 +0000, anthk wrote:

    On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 4 Sep 2024 11:47:49 -0700, Peter Flass wrote:

    This still doesn’t answer the question of why Linux is relatively more popular compared to the BSDs. My impression is that BSD is considered
    to be for hackers and Linux is for people who just want to use the
    system.

    The BSDs date from the time when Unix systems were considered superior
    to anything else out there, whereas Linux grew up very much in the
    shadow of Microsoft.

    One example illustrating the difference in mindset, I think, is that
    the Linux kernel can read any kind of disk partition format -- DOS,
    Apple, whatever. Whereas the BSDs still want a disk to be formatted
    according to their own system of “slices”.

    Slices can lie under a PC partition.

    I see Lawrence is being ignorant once again (I only saw the quote because
    he is killfiled).

    BSDs have supported (and use) GUID Partition Tables (GPTs) for years. Yes, they can still use partitions and slices if required, but very few users
    do that.

    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Bud Frede@3:633/280.2 to All on Sat Jul 5 09:50:03 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Fri, 4 Jul 2025 05:48:51 -0000 (UTC), anthk wrote:

    On 2024-09-04, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    One example illustrating the difference in mindset, I think, is that
    the Linux kernel can read any kind of disk partition format -- DOS,
    Apple, whatever. Whereas the BSDs still want a disk to be formatted
    according to their own system of “slices”.

    Slices can lie under a PC partition.

    And then there is the problem of the filesystems within those slices. On
    the BSDs, the traditional filesystem is called “UFS”, but what one BSD variant means by “UFS” is not quite the same as what another BSD variant does.

    The common Linux kernel shared across just about all distros supports
    common standard filesystems. This is one reason why “distro-hopping” is a
    common thing among Linux users, while any attempt to pull such an
    equivalent stunt between BSD variants is going to be fraught with
    pitfalls.

    How many of the people who would be "distro-hopping" re-use existing filesystems rather than re-installing completely from scratch?

    I understand that you see a problem here, but I'm not sure that I do.





    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Wossamotta U. (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sat Jul 5 11:15:25 2025
    On Fri, 04 Jul 2025 19:50:03 -0400, Bud Frede wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    The common Linux kernel shared across just about all distros supports
    common standard filesystems. This is one reason why “distro-hopping” is a common thing among Linux users, while any attempt to pull such an
    equivalent stunt between BSD variants is going to be fraught with
    pitfalls.

    How many of the people who would be "distro-hopping" re-use existing filesystems rather than re-installing completely from scratch?

    Consider that an OS install can fit in, say, less than 100GB, whereas hard drives (and even SSDs) come in multi-terabyte sizes these days.

    So it is easy enough to allocate multiple partitions for OS installs, and
    use all the rest as a common /home area for user files. That way, you can switch OSes and still have access to the same user files, without having
    to copy stuff back and forth.
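
    A minimal sketch of what that might look like in each install's
    /etc/fstab (the labels osroot-a and shared-home are hypothetical, purely
    for illustration):

        # distro A's own root partition, plus the shared user area
        LABEL=osroot-a     /      ext4  defaults  0 1
        LABEL=shared-home  /home  ext4  defaults  0 2

    Each OS keeps its own root partition but mounts the same /home, so user
    files survive a switch of distro.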

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Theo@3:633/280.2 to All on Sat Jul 5 20:08:15 2025
    Bud Frede <frede@mouse-potato.com> wrote:
    How many of the people who would be "distro-hopping" re-use existing filesystems rather than re-installing completely from scratch?

    I understand that you see a problem here, but I'm not sure that I do.

    It's simply things like formatting a USB stick/HDD to the native UFS and then finding nothing else will read it. You can obviously format as FAT/etc but it's not so good as a filesystem especially for storing programs on, and especially not for booting the OS from.

    Back in the day, hard drives never moved between machines so it didn't
    matter. Nowadays they're on USB and do, regularly.

    Theo

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: University of Cambridge, England (3:633/280.2@fidonet)
  • From Marco Moock@3:633/280.2 to All on Sun Jul 6 05:35:14 2025
    On 04.09.2024 22:08, Lawrence D'Oliveiro wrote:

    One example illustrating the difference in mindset, I think, is that
    the Linux kernel can read any kind of disk partition format -- DOS,
    Apple, whatever. Whereas the BSDs still want a disk to be formatted
    according to their own system of “slices”.

    FreeBSD supports GPT and MBR too. IIRC it can also read various file
    systems using additional software from the repo.

    --
    kind regards
    Marco

    Send spam to 1725480524muell@stinkedores.dorfdsl.de


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Sun Jul 6 07:12:28 2025
    On Sat, 05 Jul 2025 21:35:14 +0200, Marco Moock wrote:

    On 04.09.2024 22:08, Lawrence D'Oliveiro wrote:

    One example illustrating the difference in mindset, I think, is that
    the Linux kernel can read any kind of disk partition format -- DOS,
    Apple, whatever. Whereas the BSDs still want a disk to be formatted
    according to their own system of “slices”.

    FreeBSD supports GPT and MBR too. IIRC it can also read various file
    systems using additional software from the repo.

    And the Microsoft Logical Disk Manager partitioning scheme.

    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Jul 6 10:12:17 2025
    On Sat, 5 Jul 2025 21:35:14 +0200, Marco Moock wrote:

    FreeBSD supports GPT and MBR too. IIRC it can also read various file
    systems using additional software from the repo.

    What about interchanging UFS volumes with other BSDs?

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From anthk@3:633/280.2 to All on Sun Jul 6 16:08:13 2025
    On 2024-09-14, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On 14 Sep 2024 02:50:05 -0300, Mike Spencer wrote:

    (I do now, at last, have a cell phone, hate the touchscreen GUI, don't
    know how to do anything except phone calls, text and wireless access
    point. Where are the manpages?)

    A minute’s silence for the legendary Debian-based Nokia N9.

    Development was well under way by the time Microsoft’s mole, Stephen Elop, came in and decreed that the company would bet its entire future on the laughable Windows Phone. So he couldn’t kill it completely, but he could ensure that the first of this product line was also the last. It got
    limited release in a few countries, garnered rave reviews wherever it was available, sold out what stock was available, and that was the end of it.

    Get PostMarketOS and you will still be able to get a modern
    system on it.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From anthk@3:633/280.2 to All on Sun Jul 6 16:08:14 2025
    On 2024-08-27, Sebastian <sebastian@here.com.invalid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    “Unix-like” tends to mean “Linux-like” these days, let’s face it. Linux leads, the other *nixes follow.

    I hope not. Linux gets shittier with each turd that drops from the
    FreeDesktop people.

    Like I said, if you don’t like Linux distros infested with FreeDesktop-
    isms, don’t use them. There’s no need to bring up all this bile: all it’s doing is aggravating your ulcers. Open Source is all about choice.

    The choices are drying up. Increasingly, decisions are made by distros instead of users, and you only have a choice if there are any distros
    left that haven't caved or collapsed, or if you have the time, money,
    and charisma to create AND MAINTAIN a new distro. That used to not be necessary simply to have a choice. It used to be sufficient to install
    a decent distro. The main distros used to let you have far more choice
    than they do today.

    Why do you hate the Free Desktop folks? They are at the forefront of
    trying to modernize the *nix GUI.

    The Linux GUI had no need of such modernization, especially since all
    "modernization" really is, is Windowsization ...

    Actually, it’s not. Linux GUIs very much go their own way; there are ones that copy Windows and even Apple, it is true, but that’s just to appeal to those who prefer that look.

    Systemd copies Windows and Apple at a lower level, and it continues
    to be forced on the Linux community from every direction. I don't
    even think Devuan will be able to resist the pressure to run Systemd
    for much longer. And every distro is adopting iproute2, the main
    effect of which is to make Linux networking skills less transferrable
    to BSD (basically vendor-lock).

    There are others that go in quite different
    directions. The customizability of KDE Plasma, for example, goes beyond
    anything you’ll find in any proprietary platform.

    And the beauty of Linux is, you can install any number of these GUI
    environments at once, and switching between them is as easy as logging out and logging in again. You don’t even have to reboot.

    Linux was more customizable in the past, and Wayland makes the problem
    worse because there will always be only a few compositors, due to them
    having to be so complicated. Plus, we are now seeing with the Hyprland
    fiasco that distros will remove good compositors from their package management system if their managers perceive any of the authors of that compositor to have committed a thoughtcrime.

    I used to run GNOME, and then GNOME 3 came out, and Debian released
    it under the same package name, as if it was just the next version
    of GNOME. What it actually was, was a turd to the face directly out
    of the asses of the FreeDesktop-influenced GNOME developers. It was completely static, with no customizability at all. They promised to add customizability back later, but GNOME 3 was so intolerable, that I had
    to find an alternative. ANY alternative. I tried KDE, but it had gotten
    a shitty rewrite, just like GNOME, and had become just as intolerable
    as GNOME. So I switched to XFCE for years, even though it was inferior
    to GNOME and KDE as I previously knew them, until I finally noticed that
    MATE was available on Debian (for now-- I assume it will get removed
    at some point, or it will come to suck just as much as GNOME).

    And the reasoning behind the GNOME rewrite was about as anti-user as
    it's possible to be: The FreeDesktop faggots had decided that desktop
    PCs were obsolete, and that we had to march towards the brave new
    future, in which we'd trade our desktop machines for tablets and
    fucking phones. Microsoft had the same idea, and released Windows 8
    the following year, which had a bunch of stupid features that were specifically for mobile toys. They'd have taken our desktop computers
    by force if they had the power to do so. They have more power today
    than they did back then, so we might see a revival of the whole
    "desktops are obsolete" idea in the next decade or so.

    I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
    but people still praise it for some fucked-up reason. I assume the same
    thing is going on with KDE. I'm more likely to try CDE now that it's open-source, than KDE.

    Just run WindowMaker with the OneStepBack or TwoStepsBack
    GTK2-4 themes and the GNUstep icon theme for XDG.

    Lxappearance will allow you to set your GTK theme/icons/fonts
    with ease so it matches the WM one.

    Then use qtconfig to tell QT5 to use a GTK theme. There's
    a qgnomestyle or similarly called one.



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sun Jul 6 16:52:10 2025
    On Sun, 6 Jul 2025 06:08:14 -0000 (UTC), anthk wrote:

    Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
    and the GNUstep icon theme for XDG.

    Lxappearance will allow you to set your GTK theme/icons/fonts with ease
    so it matches the WM one.

    Also don’t forget the Mate and Cinnamon projects: Mate originated from GNOME/GTK 2, while Cinnamon is an offshoot from GNOME/GTK 3.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Theo@3:633/280.2 to All on Sun Jul 6 21:43:29 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 5 Jul 2025 21:35:14 +0200, Marco Moock wrote:

    FreeBSD supports GPT and MBR too. IIRC it can also read various file systems using additional software from the repo.

    What about interchanging UFS volumes with other BSDs?

    I can't speak to the specifics of FreeBSD and UFS2, but AIUI classic UFS was
    a machine specific format. eg if you were running on a big-endian machine
    then your metadata was written big endian. If you took it to a little endian machine all the bytes were the wrong way around. This was because there was
    no model in which hard drives would move between machines so they just
    dumped in-memory structs to disc. So the reader would have to know what
    kind of machine you had to begin with.
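
    A minimal C sketch of that failure mode, assuming a hypothetical
    one-field superblock (real UFS has many more fields; 0x011954 is the
    classic FFS magic number):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct superblock { uint32_t magic; };   /* toy on-disc record */

        int main(void) {
            struct superblock sb = { 0x00011954 };
            unsigned char disc[4];
            memcpy(disc, &sb, 4);      /* "dump the in-memory struct to disc" */

            uint32_t raw;
            memcpy(&raw, disc, 4);     /* same-endian reader: fine */
            /* An opposite-endian reader sees the bytes the wrong way around
               and must swap them back explicitly: */
            uint32_t swapped = (raw >> 24) | ((raw >> 8) & 0xff00) |
                               ((raw << 8) & 0xff0000) | (raw << 24);
            printf("native: 0x%08x  cross-endian: 0x%08x\n",
                   (unsigned)raw, (unsigned)swapped);
            return 0;
        }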

    NetBSD say that their FFS is compatible with a lot of UNIX and 'many other systems based on BSD and SystemV', but doesn't mention FreeBSD which is a rather glaring omission:
    https://www.netbsd.org/about/interop.html

    Theo

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: University of Cambridge, England (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Mon Jul 7 00:59:06 2025
    On 7/5/25 23:52, Lawrence D'Oliveiro wrote:
    On Sun, 6 Jul 2025 06:08:14 -0000 (UTC), anthk wrote:

    Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
    and the GNUstep icon theme for XDG.

    Lxappearance will allow you to set your GTK theme/icons/fonts with ease
    so it matches the WM one.

    Also don’t forget the Mate and Cinnamon projects: Mate originated from GNOME/GTK 2, while Cinnamon is an offshoot from GNOME/GTK 3.

    I've used Mate for years. Once I looked at the newish Gnome stuff, and
    my only thought was: nope, nope, nope. I don't want a desktop that looks
    like someone is trying to show off all the great stuff they can do with graphics.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 7 07:31:09 2025
    On 06 Jul 2025 12:43:29 +0100 (BST), Theo wrote:

    If you took it to a little endian machine all the bytes were the
    wrong way around. This was because there was no model in which hard
    drives would move between machines so they just dumped in-memory
    structs to disc.

    But they had removable disk packs in those days. Also floppies, magneto-optical and optical media.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Theo@3:633/280.2 to All on Mon Jul 7 08:32:55 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On 06 Jul 2025 12:43:29 +0100 (BST), Theo wrote:

    If you took it to a little endian machine all the bytes were the
    wrong way around. This was because there was no model in which hard
    drives would move between machines so they just dumped in-memory
    structs to disc.

    But they had removable disk packs in those days. Also floppies, magneto-optical and optical media.

    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm
    not sure what was common for M-O drives.

    What format did UNIX floppies commonly use? FAT12 was an option but
    wouldn't have held metadata correctly. Were UFS floppies popular?

    I found this which refers to the endianness problem for UFS floppies: https://docs.oracle.com/cd/E19253-01/817-5093/medformat-80/index.html

    "SPARC and x86 UFS formats are different. SPARC uses little-endian bit
    coding, x86 uses big-endian. Media formatted for UFS is restricted to the hardware platform on which they were formatted. So, a diskette formatted
    for UFS on a SPARC based platform cannot be used for UFS on an x86 platform. Likewise, a diskette formatted for UFS on an x86 platform cannot be used on
    a SPARC platform."

    (odd that it says x86 UFS is big endian)

    Theo

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: University of Cambridge, England (3:633/280.2@fidonet)
  • From John Levine@3:633/280.2 to All on Mon Jul 7 09:06:48 2025
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm
    not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    At Yale our PDP-11 originally had an RK05 single platter 1MB drive in
    1974, then we upgraded to a pair of RP02 washing machine sized drives,
    20MB each.

    We also had a PDP-10 which also used the same RP02 disks. I think I
    once experimented with trying to write a PDP-11 formatted disk on the
    -10, reading the file system from tape. It was rather exciting since
    the 36 bit PDP-10 mapped its words into the disk's 8 bit bytes in
    non-obvious ways.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Taughannock Networks (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 7 09:11:01 2025
    On 06 Jul 2025 23:32:55 +0100 (BST), Theo wrote:

    Removable disc packs mostly came later I think ...

    Removable disk packs predate the IBM “Winchester” drive, which ushered in the kind of non-removable disk drive we have been taking for granted since about the 1970s.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Rich Alderson@3:633/280.2 to All on Mon Jul 7 11:07:39 2025
    John Levine <johnl@taugh.com> writes:

    At Yale our PDP-11 originally had an RK05 single platter 1MB drive in
    1974, then we upgraded to a pair of RP02 washing machine sized drives,
    20MB each.

    We also had a PDP-10 which also used the same RP02 disks. I think I
    once experimented with trying to write a PDP-11 formatted disk on the
    -10, reading the file system from tape. It was rather exciting since
    the 36 bit PDP-10 mapped its words into the disk's 8 bit bytes in
    non-obvious ways.

    It's perfectly obvious, since the PDP-10 operating systems write 128 word blocks at all times (even TOPS-20, which simply reads/writes 4 such blocks for each 512 word page in the data stream).

    1 sector = 128 words * 36 bits = 64 * 72 bits = 576 * 8 bits

    Easy-peasy.
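
    A small C sketch of that packing, assuming the usual convention of two
    36-bit words per 9 bytes, high-order bits first (illustrative only;
    actual PDP-10 controller formats varied):

        #include <stdint.h>
        #include <stdio.h>

        /* Pack two 36-bit words into 9 bytes, high bits first.
           128 words = 64 such pairs = 576 bytes per sector. */
        static void pack72(uint64_t w0, uint64_t w1, uint8_t out[9]) {
            w0 &= 0xFFFFFFFFFULL;               /* keep 36 bits */
            w1 &= 0xFFFFFFFFFULL;
            out[0] = w0 >> 28;  out[1] = w0 >> 20;
            out[2] = w0 >> 12;  out[3] = w0 >> 4;
            out[4] = (uint8_t)((w0 << 4) | (w1 >> 32));  /* 4+4 split byte */
            out[5] = w1 >> 24;  out[6] = w1 >> 16;
            out[7] = w1 >> 8;   out[8] = w1;
        }

        int main(void) {
            uint8_t b[9];
            pack72(0123456701234ULL, 0765432107654ULL, b);  /* octal words */
            for (int i = 0; i < 9; i++) printf("%02x ", (unsigned)b[i]);
            putchar('\n');
            return 0;
        }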

    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Al Kossow@3:633/280.2 to All on Mon Jul 7 13:15:08 2025

    I'm not 100% sure but I think this company, hardly more than a footnote in computer history, was the cause of little-endian processors.

    Guess again

    Try the DEC PDP-11 (1969)

    There was already a battle between bit 0 on the left or right in 1950s mainframes.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 7 14:22:52 2025
    On Sun, 6 Jul 2025 20:15:08 -0700, Al Kossow wrote:

    There was already a battle between bit 0 on the left or right in 1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came along, though.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Theo@3:633/280.2 to All on Mon Jul 7 21:18:46 2025
    John Levine <johnl@taugh.com> wrote:
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data
    stored there, which is when format standardisation became relevant. In
    1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?

    Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across
    multiple vendors didn't properly take off until USB, with some niche usage
    for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).

    FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange
    standard that non-PC vendors also adopted, as FAT floppies had previously.

    Theo

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: University of Cambridge, England (3:633/280.2@fidonet)
  • From Borax Man@3:633/280.2 to All on Mon Jul 7 21:42:26 2025
    On 2025-07-06, anthk <anthk@openbsd.home> wrote:
    On 2024-08-27, Sebastian <sebastian@here.com.invalid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    “Unix-like” tends to mean “Linux-like” these days, let’s face it. Linux leads, the other *nixes follow.

    I hope not. Linux gets shittier with each turd that drops from the
    FreeDesktop people.

    Like I said, if you don’t like Linux distros infested with FreeDesktop-
    isms, don’t use them. There’s no need to bring up all this bile: all it’s doing is aggravating your ulcers. Open Source is all about choice.

    The choices are drying up. Increasingly, decisions are made by distros
    instead of users, and you only have a choice if there are any distros
    left that haven't caved or collapsed, or if you have the time, money,
    and charisma to create AND MAINTAIN a new distro. That used to not be
    necessary simply to have a choice. It used to be sufficient to install
    a decent distro. The main distros used to let you have far more choice
    than they do today.

    Why do you hate the Free Desktop folks? They are at the forefront of trying to modernize the *nix GUI.

    The Linux GUI had no need of such modernization, especially since all
    "modernization" really is, is Windowsization ...

    Actually, it’s not. Linux GUIs very much go their own way; there are ones that copy Windows and even Apple, it is true, but that’s just to appeal to those who prefer that look.

    Systemd copies Windows and Apple at a lower level, and it continues
    to be forced on the Linux community from every direction. I don't
    even think Devuan will be able to resist the pressure to run Systemd
    for much longer. And every distro is adopting iproute2, the main
    effect of which is to make Linux networking skills less transferrable
    to BSD (basically vendor-lock).

    There are others that go in quite different
    directions. The customizability of KDE Plasma, for example, goes beyond anything you’ll find in any proprietary platform.

    And the beauty of Linux is, you can install any number of these GUI
    environments at once, and switching between them is as easy as logging out and logging in again. You don’t even have to reboot.

    Linux was more customizable in the past, and Wayland makes the problem
    worse because there will always be only a few compositors, due to them
    having to be so complicated. Plus, we are now seeing with the Hyprland
    fiasco that distros will remove good compositors from their package
    management system if their managers perceive any of the authors of that
    compositor to have committed a thoughtcrime.

    I used to run GNOME, and then GNOME 3 came out, and Debian released
    it under the same package name, as if it was just the next version
    of GNOME. What it actually was, was a turd to the face directly out
    of the asses of the FreeDesktop-influenced GNOME developers. It was
    completely static, with no customizability at all. They promised to add
    customizability back later, but GNOME 3 was so intolerable, that I had
    to find an alternative. ANY alternative. I tried KDE, but it had gotten
    a shitty rewrite, just like GNOME, and had become just as intolerable
    as GNOME. So I switched to XFCE for years, even though it was inferior
    to GNOME and KDE as I previously knew them, until I finally noticed that
    MATE was available on Debian (for now-- I assume it will get removed
    at some point, or it will come to suck just as much as GNOME).

    And the reasoning behind the GNOME rewrite was about as anti-user as
    it's possible to be: The FreeDesktop faggots had decided that desktop
    PCs were obsolete, and that we had to march towards the brave new
    future, in which we'd trade our desktop machines for tablets and
    fucking phones. Microsoft had the same idea, and released Windows 8
    the following year, which had a bunch of stupid features that were
    specifically for mobile toys. They'd have taken our desktop computers
    by force if they had the power to do so. They have more power today
    than they did back then, so we might see a revival of the whole
    "desktops are obsolete" idea in the next decade or so.

    I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
    but people still praise it for some fucked-up reason. I assume the same
    thing is going on with KDE. I'm more likely to try CDE now that it's
    open-source, than KDE.

    Just run WindowMaker with the OneStepBack or TwoStepsBack
    GTK2-4 themes and the GNUstep icon theme for XDG.

    Lxappearance will allow you to set your GTK theme/icons/fonts
    with ease so it matches the WM one.

    Then use qtconfig to tell QT5 to use a GTK theme. There's
    a qgnomestyle or similarly called one.



    Thanks for the tip about TwoStepsBack. I quite like the OneStepBack
    aesthetic. The older widget style still appeals to me more.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Wade@3:633/280.2 to All on Mon Jul 7 22:43:32 2025
    On 07/07/2025 12:18, Theo wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
    1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?

    Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).

    FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.

    Unless you had an older Atari ST, which formatted disks in such a way that
    MSDOS wouldn't read them. I seem to remember it was one byte in the boot sector the PC didn't like, and there were Atari programs to fix it...

    Or if you formatted the disk on a PC, no problems reading and writing on either machine.


    Theo

    Dave

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Ander@3:633/280.2 to All on Mon Jul 7 23:48:56 2025
    On Mon, 7 Jul 2025 11:42:26 -0000 (UTC), Borax Man wrote:

    On 2025-07-06, anthk <anthk@openbsd.home> wrote:
    On 2024-08-27, Sebastian <sebastian@here.com.invalid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 27 Aug 2024 06:55:55 -0000 (UTC), Sebastian wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    “Unix-like” tends to mean “Linux-like” these days, let’s face it.
    Linux leads, the other *nixes follow.

    I hope not. Linux gets shittier with each turd that drops from the
    FreeDesktop people.

    Like I said, if you don’t like Linux distros infested with
    FreeDesktop-isms, don’t use them. There’s no need to bring up all
    this bile: all it’s doing is aggravating your ulcers. Open Source is
    all about choice.

    The choices are drying up. Increasingly, decisions are made by distros
    instead of users, and you only have a choice if there are any distros
    left that haven't caved or collapsed, or if you have the time, money,
    and charisma to create AND MAINTAIN a new distro. That used to not be
    necessary simply to have a choice. It used to be sufficient to
    install a decent distro. The main distros used to let you have far
    more choice than they do today.

    Why do you hate the Free Desktop folks? They are at the forefront
    of trying to modernize the *nix GUI.

    The Linux GUI had no need of such modernization, especially since
    all "modernization" really is, is Windowsization ...

    Actually, it’s not. Linux GUIs very much go their own way; there are
    ones that copy Windows and even Apple, it is true, but that’s just to
    appeal to those who prefer that look.

    Systemd copies Windows and Apple at a lower level, and it continues to
    be forced on the Linux community from every direction. I don't even
    think Devuan will be able to resist the pressure to run Systemd for
    much longer. And every distro is adopting iproute2, the main effect of
    which is to make Linux networking skills less transferrable to BSD
    (basically vendor-lock).

    There are others that go in quite different directions. The
    customizability of KDE Plasma, for example, goes beyond anything
    you’ll find in any proprietary platform.

    And the beauty of Linux is, you can install any number of these GUI
    environments at once, and switching between them is as easy as
    logging out and logging in again. You don’t even have to reboot.

    Linux was more customizable in the past, and Wayland makes the problem
    worse because there will always be only a few compositors, due to them
    having to be so complicated. Plus, we are now seeing with the Hyprland
    fiasco that distros will remove good compositors from their package
    management system if their managers perceive any of the authors of
    that compositor to have committed a thoughtcrime.

    I used to run GNOME, and then GNOME 3 came out, and Debian released it
    under the same package name, as if it was just the next version of
    GNOME. What it actually was, was a turd to the face directly out of
    the asses of the FreeDesktop-influenced GNOME developers. It was
    completely static, with no customizability at all. They promised to
    add customizability back later, but GNOME 3 was so intolerable, that I
    had to find an alternative. ANY alternative. I tried KDE, but it had
    gotten a shitty rewrite, just like GNOME, and had become just as
    intolerable as GNOME. So I switched to XFCE for years, even though it
    was inferior to GNOME and KDE as I previously knew them, until I
    finally noticed that MATE was available on Debian (for now-- I assume
    it will get removed at some point, or it will come to suck just as
    much as GNOME).

    And the reasoning behind the GNOME rewrite was about as anti-user as
    it's possible to be: The FreeDesktop faggots had decided that desktop
    PCs were obsolete, and that we had to march towards the brave new
    future, in which we'd trade our desktop machines for tablets and
    fucking phones. Microsoft had the same idea, and released Windows 8
    the following year, which had a bunch of stupid features that were
    specifically for mobile toys. They'd have taken our desktop computers
    by force if they had the power to do so. They have more power today
    than they did back then, so we might see a revival of the whole
    "desktops are obsolete" idea in the next decade or so.

    I saw GNOME 3 a couple of years ago on Ubuntu, and it still sucked,
    but people still praise it for some fucked-up reason. I assume the
    same thing is going on with KDE. I'm more likely to try CDE now that
    it's open-source, than KDE.

    Just run WindowMaker with the OneStepBack or TwoStepsBack GTK2-4 themes
    and the GNUstep icon theme for XDG.

    Lxappearance will allow you to set your GTK theme/icons/fonts with ease
    so it matches the WM one.

    Then use qtconfig to tell QT5 to use a GTK theme. There's a qgnomestyle
    or similarly called one.



    Thanks for the tip about TwoStepsBack. I quite like the OneStepBack aesthetic. The older widget style still appeals to me more.

    That, with the GNUstep icons for XDG, makes a great WindowMaker + Rox environment; kinda better than with GWorkSpace, but not as integrated.

    https://store.kde.org/p/1239539

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Jul 8 00:17:34 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On 06 Jul 2025 12:43:29 +0100 (BST), Theo wrote:

    If you took it to a little endian machine all the bytes were the
    wrong way around. This was because there was no model in which hard
    drives would move between machines so they just dumped in-memory
    structs to disc.

    But they had removable disk packs in those days.

    Which could be moved from unit to unit -only-
    when the disk drive model was identical.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Tue Jul 8 01:22:44 2025
    On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    There was already a battle between bit 0 on the left or right in
    1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came along, though.

    Which means it's IBM's fault.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Tue Jul 8 01:29:21 2025
    On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    There was already a battle between bit 0 on the left or right in
    1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came along, though.

    ...although bit ordering *can* make a difference in serial transmission
    (which end do you send first?) and bit-addressed instructions (where
    present.)


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Jul 8 01:50:39 2025
    Theo <theom+news@chiark.greenend.org.uk> wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
    1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?

    Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).

    Once Linux appeared I used it to occasionally read data from discs taken
    from other machines, like proprietary Unices. Yes, early HDDs
    used controller-specific formatting, so there was probably no chance
    to read them on a machine with a different controller. But SCSI and
    IDE discs could be swapped between widely different machines.
    This was much earlier than USB.

    FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.

    For occasional use there was the tar format: just write a tar archive from
    the start of the media (basically treating the disc as a tape). Later I used
    dumps of CD-ROMs to hard discs so that I could boot a machine without a
    CD drive.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Tue Jul 8 02:10:25 2025
    John Ames <commodorejohn@gmail.com> wrote:
    On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    There was already a battle between bit 0 on the left or right in
    1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came
    along, though.

    Which means it's IBM's fault.

    Endianness matters for character/digit-addressable machines. IIUC the
    French Gamma was character-addressable and earlier than the IBM 1401.
    But I do not know which machine was the first character-addressable one.

    There were early serial/BCD machines which probably were internally
    little-endian. But most apparently were word-addressable, and the
    character-addressable ones that I know about are more or less
    big-endian (the 1401 stores numbers in big-endian order, but uses the
    address of the last byte as the address, so it shares some features with
    little-endian machines).

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 8 02:45:08 2025
    On 7/6/25 21:58, rbowman wrote:
    On Sun, 6 Jul 2025 20:15:08 -0700, Al Kossow wrote:

    I'm not 100% sure but I think this company, hardly more than a footnote
    in computer history, was the cause of little-endian processors.

    Guess again

    Try the DEC PDP-11 (1969)

    There was already a battle between bit 0 on the left or right in 1950s
    mainframes.

    True, but that didn't lead to the 8008 which led to the...

    https://en.wikipedia.org/wiki/Intel_8008

    I should have been more explicit and said x64 processors. I've always been amused at how we got to where we are now.

    Ironically Motorola certainly studied the PDP-11 closely but the 68000
    wound up big-endian.

    Motorola studied the PDP-11 and fixed the things DEC got wrong. We'd be
    in a different world if the 680x0 had won out over the x86.

    Then there's the 'PDP-endian' quirk.

    I never dug too deeply into the PDP-11 when I ran on one in the early
    '80s. It was running some *nix OS that had fallen off the back of a truck
    on Memorial Avenue.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 8 02:51:35 2025
    On 7/7/25 04:18, Theo wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the 44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    The issue under discussion was taking a removable pack from one vendor and plugging it into a different vendor's machine in order to read the data stored there, which is when format standardisation became relevant. In
    1960s were people moving discs from DEC to IBM, or distributing software on disc packs for multiple vendors?

    Or the same vendor. The IBM 1316 disk pack was used by various vendors,
    but was formatted differently for different systems. Most used sector organization (including IBM's own 360/20), but IBM DOS and OS used
    C-K-D. This is even before you get to the level of the directory/VTOC structures.


    Tape and optical were their own separate things with their own formats, but AFAIK sending a 'HDD' formatted drive as a distribution format across multiple vendors didn't properly take off until USB, with some niche usage for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).

    9-track tape was really the interchange format of choice. I think most cross-system stuff used 80-byte records and 800-byte blocks, with or
    without IBM standard labels.


    FAT was never an officially standardised format of course, but when the machines were running the same software it didn't matter, and so a 'PC formatted' FAT HDD (USB/memory card/...) became a de facto interchange standard that non-PC vendors also adopted, as FAT floppies had previously.

    Theo


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 8 02:55:21 2025
    On 7/7/25 08:29, John Ames wrote:
    On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    There was already a battle between bit 0 on the left or right in
    1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came
    along, though.

    ...although bit ordering *can* make a difference in serial transmission (which end do you send first?) and bit-addressed instructions (where present.)


    This drove me nuts. I may have this wrong because it's 45+ years ago,
    but I think BTAM received data LSB first, and I had to translate, or
    else the documentation showed the characters LSB first, and I had to
    mentally translate all the doc.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Tue Jul 8 03:25:15 2025
    On Mon, 7 Jul 2025 09:55:21 -0700
    Peter Flass <Peter@Iron-Spring.com> wrote:

    ...although bit ordering *can* make a difference in serial
    transmission (which end do you send first?) and bit-addressed
    instructions (where present.)

    This drove me nuts. I may have this wrong because it's 45+ years ago,
    but I think BTAM received data LSB first, and I had to translate, or
    else the documentation showed the characters LSB first, and I had to mentally translate all the doc.

    I can understand endianness issues cropping up when you have to split a
    word into independently-addressable chunks, but the fact that bit-
    ordering was ever even a question remains bonkers to me, when basic
    math provides what *should've* been a straightforward universal
    standard: 2 ^ 0 = 1, so bit 0 is the 1s place.
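
    A minimal C illustration of that LSB-0 convention (as used on the
    PDP-11 and x86; big-endian documentation such as IBM's often numbers
    the same bits from the opposite end):

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint8_t x = 0x05;               /* binary 0000 0101 */
            for (int n = 0; n < 8; n++)     /* bit n has weight 2^n */
                printf("bit %d = %d (weight %3d)\n", n, (x >> n) & 1, 1 << n);
            return 0;
        }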


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Jul 8 06:11:38 2025
    Reply-To: slp53@pacbell.net

    antispam@fricas.org (Waldek Hebisch) writes:
    Theo <theom+news@chiark.greenend.org.uk> wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Theo <theom+news@chiark.greenend.org.uk>:
    Removable disc packs mostly came later I think (although I wasn't aware the
    44MB Syquest launched as early as 1986). Optical media used ISO9660; I'm
    not sure what was common for M-O drives.

    Uh, what? Removable disk packs date from about 1960.

    The issue under discussion was taking a removable pack from one vendor and
    plugging it into a different vendor's machine in order to read the data
    stored there, which is when format standardisation became relevant. In
    1960s were people moving discs from DEC to IBM, or distributing software on
    disc packs for multiple vendors?

    Tape and optical were their own separate things with their own formats, but
    AFAIK sending a 'HDD' formatted drive as a distribution format across
    multiple vendors didn't properly take off until USB, with some niche usage
    for Syquests in the late 80s/early 90s (and then Zip/Jazz etc).

    Once Linux appeared I used it to occasionally read data from discs taken
    from other machines, like proprietary Unices. Yes, early HDDs
    used controller-specific formatting, so there was probably no chance
    to read them on a machine with a different controller. But SCSI and
    IDE discs could be swapped between widely different machines.

    Sometimes. SCSI disks were sometimes formatted to unusual sector sizes
    for various purposes (Sun for Oracle (iirc) used 520-byte sectors, Unisys
    used 180-byte sectors, et cetera).


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Lars Poulsen@3:633/280.2 to All on Tue Jul 8 06:23:53 2025
    John Levine <johnl@taugh.com> writes:
    We also had a PDP-10 which also used the same RP02 disks. I think I
    once experimented with trying to write a PDP-11 formatted disk on the
    -10, reading the file system from tape. It was rather exciting since
    the 36 bit PDP-10 mapped its words into the disk's 8 bit bytes in
    non-obvious ways.

    On 2025-07-07, Rich Alderson <news@alderson.users.panix.com> wrote:
    It's perfectly obvious, since the PDP-10 operating systems write 128 word blocks
    at all times (even TOPS-20, which simply reads/writes 4 such blocks for each 512 word page in the data stream).

    1 sector = 128 words * 36 bits = 64 * 72 bits = 576 * 8 bits

    Easy-peasy.

    Just like the Univac/Unisys 1106/1108/1110/1100/2200, which read 9 8-bit
    bytes (from disk or 9-track tape) into 2 36-bit words. Although I think
    there was a way to put 8 bytes into quarter-words (9-bit bytes) instead.
    --
    Lars Poulsen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 8 07:39:07 2025
    On Mon, 7 Jul 2025 09:45:08 -0700, Peter Flass wrote:

    Motorola studied the PDP-11 and fixed the things DEC got wrong.

    Still, the split between A- and D-registers was ... not considered a
    brilliant idea.

    Then there's the 'PDP-endian' quirk.

    32-bit integers in Fortran had the high word before the low word. But
    then, you couldn’t blame that on the PDP-11 hardware.
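
    For concreteness, the resulting "middle-endian" layout (the well-known
    PDP-11 example, not specific to any one Fortran compiler): the 32-bit
    value 0x0A0B0C0D is stored high word first, but each 16-bit word is
    itself little-endian:

        address:  0     1     2     3
        byte:     0x0B  0x0A  0x0D  0x0C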

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 8 07:41:18 2025
    On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:

    The issue under discussion was taking a removable pack from one vendor
    and plugging it into a different vendor's machine in order to read the
    data stored there ...

    No, just moving packs between different machines in the same computer
    centre would have been enough.

    Some computer centres had rules against using disk packs from outside. Interesting reason why ...

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 8 07:42:31 2025
    On Mon, 7 Jul 2025 16:10:25 -0000 (UTC), Waldek Hebisch wrote:

    Endianness matters for character/digit-addressable machines.

    I thought such machines always stored the digits in order of ascending significance, because it didn’t make sense to do it the other way.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 8 07:43:55 2025
    On Mon, 7 Jul 2025 09:55:21 -0700, Peter Flass wrote:

    On 7/7/25 08:29, John Ames wrote:

    ...although bit ordering *can* make a difference in serial transmission
    (which end do you send first?) ...

    This drove me nuts. I may have this wrong because it's 45+ years ago,
    but I think BTAM received data LSB first, and I had to translate, or
    else the documentation showed the characters LSB first, and I had to
    mentally translate all the doc.

    The RS-232C spec explicitly said the least-significant bit was sent first.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 8 07:45:23 2025
    On Mon, 7 Jul 2025 10:25:15 -0700, John Ames wrote:

    I can understand endianness issues cropping up when you have to split a
    word into independently-addressable chunks, but the fact that bit-
    ordering was ever even a question remains bonkers to me, when basic math provides what *should've* been a straightforward universal standard: 2 ^
    0 = 1, so bit 0 is the 1s place.

    Big-endian architectures can never make up their minds. IBM’s POWER/
    PowerPC architecture numbered the bits the opposite way from their significance as binary digits in an integer.

    And then there was the Motorola 68k, where the original single-bit-manipulation instructions numbered them one way, and the later variable-bit-field instructions numbered them the opposite way.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lynn Wheeler@3:633/280.2 to All on Tue Jul 8 15:01:27 2025

    Peter Flass <Peter@Iron-Spring.com> writes:
    This drove me nuts. I may have this wrong because it's 45+ years ago,
    but I think BTAM received data LSB first, and I had to translate, or
    else the documentation showed the characters LSB first, and I had to
    mentally translate all the doc.

    I had taken 2 credit hr intro to fortran/computers and at end of
    semester was hired to rewrite 1401 MPIO for 360/30. Univ. was getting
    360/67 for tss/360 (replacing 709/1401) and got 360/30 temporarily until
    360/67 was available. They gave me pile of software and hardware manuals
    and I (since they shutdown datacenter on weekends) had the datacenter
    dedicated (although 48hrs w/o sleep made monday classes hard) got to
    design and implement my own monitor, device drivers, interrupt handlers,
    error recovery, storage management, etc ... and had a 2000 card
    implementation within a few weeks.

    360/67 arrived within year of taking intro class, and I was hired
    fulltime for os/360 (tss/360 never came to production). Student fortran
    ran under second on 709 but over minute on os/360 (360/67 running as
    360/65). I add HASP and cuts time in half. I then redo STAGE2 sysgen to carefully place datasets and PDS members to optimize disk arm seek and multitrack search, cutting another 2/3rds to 12.9secs. Never got better
    than 709 until install UofWaterloo WATFOR..

    CSC then comes out to install CP67 (3rd after CSC itself and MIT Lincoln
    Labs). It had 2741 and 1052 terminal support with automagic terminal
    type and used SAD CCW to change port terminal type scanner. Univ. had
    some number of (tty33&tty35) ascii terminals so I added ASCII terminal
    support, borrowing BTAM BCD<->ASCII translate tables.

    I then wanted a single dial-in phone number for all terminal types;
    didn't quite work, IBM controller could change port terminal type
    scanner ... but had hard-wired port line speeds.

    This kicked off univ. project to build an IBM clone controller, build
    mainframe channel interface card for Interdata/3 programmed to emulate
    IBM controller (with the addition that it supported auto-baud). We
    initially didn't look at IBM controller spec closely enough and when
    terminal data 1st arrived from clone in mainframe memory, it was all
    garbage. We found that for incoming terminal data, the leading bit was placed
    in the low-order bit position ... so data arrived in mainframe memory with all
    bytes having bit-reversed bits.
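
    A minimal sketch of the per-byte fix that situation calls for (a
    straightforward bit reversal; illustrative, not the actual university
    code):

        #include <stdint.h>
        #include <stdio.h>

        /* Reverse the bit order within one byte: bit 0 <-> bit 7, etc. */
        static uint8_t revbits(uint8_t b) {
            uint8_t r = 0;
            for (int n = 0; n < 8; n++)
                if (b & (1u << n))
                    r |= (uint8_t)(0x80u >> n);
            return r;
        }

        int main(void) {
            /* ASCII 'A' (0x41) arriving bit-reversed reads as 0x82 */
            printf("0x%02x\n", revbits(0x41));   /* prints 0x82 */
            return 0;
        }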

    Wasn't so obvious with 1052&2741 terminals that used tilt-rotate codes
    (not actual bcd ... or ascii).

    Later, we upgraded to an Interdata/4 for the channel interface and a
    cluster of Interdata/3s for the port interfaces. Interdata (and then
    Perkin-Elmer) sold it as an IBM clone controller (and four of us were
    written up for some part of the IBM clone controller business).
    https://en.wikipedia.org/wiki/Interdata
    https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

    other trivia: an account of the biggest computer "goof" ever. 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear wasn't ready ... so they were going to start shipping with old BCD
    gear (with EBCDIC) and move later:
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Wheeler&Wheeler (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Jul 8 23:48:35 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:

    The issue under discussion was taking a removable pack from one vendor
    and plugging it into a different vendor's machine in order to read the
    data stored there ...

    No, just moving packs between different machines in the same computer
    centre would have been enough.

    Until a fool operator (like you, perhaps) moved a pack from a drive
    with a head crash to three other drives before realizing that the
    pack was bad, not the drives.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Wed Jul 9 14:29:09 2025
    On 2025-07-08, Jan van den Broek <fortytwo@xs4all.nl> wrote:

    Mon, 7 Jul 2025 13:43:32 +0100
    David Wade <g4ugm@dave.invalid> schrieb:

    [Schrieb]

    Unless you had an older Atari ST which formatted disks in such a way that
    MSDOS wouldn't read them. I seem to remember it was one byte in the boot
    sector the PC didn't like, and there were Atari programs to fix it...

    And there was at least one DOS program: ST2DOS, written by Arno Schaefer;
    version 1.0 is from '93.

    You couldn't do a trick like that with the Amiga. It read and wrote
    an entire track at a time, which enabled it to shorten the inter-record
    gaps to the point where it could store 11 sectors per track instead of 9.
    This allowed the Amiga to store 880K on what was normally a 720K floppy -
    but the result could not be read except with another Amiga or a custom controller.
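    (The arithmetic, assuming the usual double-sided 80-track 3.5"
    geometry with 512-byte sectors: 2 x 80 x 11 x 512 = 901,120 bytes =
    880K, versus 2 x 80 x 9 x 512 = 737,280 bytes = 720K.)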

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Wed Jul 9 14:29:09 2025
    On 2025-07-08, Scott Lurndal <scott@slp53.sl.home> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:

    The issue under discussion was taking a removable pack from one vendor
    and plugging it into a different vendor's machine in order to read the
    data stored there ...

    No, just moving packs between different machines in the same computer
    centre would have been enough.

    Until a fool operator (like you, perhaps) moved a pack from a drive
    with a head crash to three other drives before realizing that the
    pack was bad, not the drives.

    But by then, the drives were bad too. :-(

    Not being a fool operator, when I was formatting a new pack and
    heard strange sounds (over and above the noisy spindle bearing
    in that particular drive), I shut down the drive, observed what
    a mess it had made of the new pack, quarantined both of them,
    and re-configured the system to run without the damaged drive.
    The pack was chewed up so badly you could see bare aluminum
    gleaming through what was left of the oxide. It led to a
    wonderful finger-pointing session, where Univac (the maker
    of the drive) blamed CDC (the maker of the pack) and vice
    versa. There really wasn't much of a choice - Univac couldn't
    keep up with the demand, so most shops went with CDC packs.
    If they mounted cleanly once, they ran forever...

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Wed Jul 9 18:21:13 2025
    On Wed, 09 Jul 2025 07:33:09 +0000, rbowman wrote:

    On Wed, 09 Jul 2025 04:29:09 GMT, Charlie Gibbs wrote:

    You couldn't do a trick like that with the Amiga. It read and wrote an
    entire track at a time, which enabled it to shorten the inter-record
    gaps to the point where it could store 11 sectors per track instead of
    9.
    This allowed the Amiga to store 880K on what was normally a 720K floppy
    -
    but the result could not be read except with another Amiga or a custom
    controller.

    CP/M topped out for craziness. Most systems used the Western Digital
    FD17xx floppy controllers but the controller could be programmed for different track/sector schemes and encoding. I had a utility that could
    read 11 different formats, iirc. That's leaving out the hard-sector types
    that survived from the 8" days.

    Don't forget the ACT Sirius. A DOS machine that crammed more data onto a
    diskette by using a variable-speed drive (5 speeds, I think).

    People played tunes on those drives.



    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Thu Jul 10 02:21:26 2025
    Reply-To: slp53@pacbell.net

    Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
    On 2025-07-08, Scott Lurndal <scott@slp53.sl.home> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On 07 Jul 2025 12:18:46 +0100 (BST), Theo wrote:

    The issue under discussion was taking a removable pack from one vendor >>>> and plugging it into a different vendor's machine in order to read the >>>> data stored there ...

    No, just moving packs between different machines in the same computer
    centre would have been enough.

    Until a fool operator (like you, perhaps) moved a pack from a drive
    with a head crash to three other drives before realizing that the
    pack was bad, not the drives.

    But by then, the drives were bad too. :-(

    Indeed. The DEC FE was not happy.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Scott Alfter@3:633/280.2 to All on Fri Jul 11 01:41:57 2025
    In article <md6n3pFgaflU8@mid.individual.net>,
    Bob Eager <news0009@eager.cx> wrote:
    Don't forget the ACT Sirius. A DOS machine that crammed more data onto a
    diskette by using a variable-speed drive (5 speeds, I think).

    Apple used the same trick with its 3.5" floppy drives to fit 800K onto a
    disk that was only good for 720K elsewhere.

    --
    _/_
    / v \ Scott Alfter (remove the obvious to send mail)
    (IIGS( https://alfter.us/ Top-posting!
    \_^_/ >What's the most annoying thing on Usenet?

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: USS Voyager NCC-74656, Delta Quadrant (3:633/280.2@fidonet)
  • From Rich Alderson@3:633/280.2 to All on Fri Jul 11 06:20:45 2025
    scott@alfter.diespammersdie.us (Scott Alfter) writes:

    In article <md6n3pFgaflU8@mid.individual.net>,
    Bob Eager <news0009@eager.cx> wrote:

    Don't forget the ACT Sirius. A DOS machine that crammed more data onto a
    diskette by using a variable-speed drive (5 speeds, I think).

    Apple used the same trick with its 3.5" floppy drives to fit 800K onto a
    disk that was only good for 720K elsewhere.

    And before the 800K floppy, there was the single-sided 400K floppy on the same controller.

    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sat Jul 12 10:28:31 2025
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/7/25 08:29, John Ames wrote:
    On Mon, 7 Jul 2025 04:22:52 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    There was already a battle between bit 0 on the left or right in
    1950s mainframes.

    Endian-ness didn’t really matter before byte-addressability came
    along, though.
    ...although bit ordering *can* make a difference in serial
    transmission
    (which end do you send first?) and bit-addressed instructions (where
    present.)


    This drove me nuts. I may have this wrong because it's 45+ years ago,
    but I think BTAM received data LSB first, and I had to translate, or
    else the documentation showed the characters LSB first, and I had to
    mentally translate all the doc.

    BTAM received bytes at a time, so bit order was dependent on the device.
    Some devices (like 2260s, 3270s, 3780s) just sent bytes as you would expect them.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sat Jul 12 10:32:34 2025
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC translate table.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Sun Jul 13 03:13:12 2025
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC translate table.

    That's partly because they couldn't even settle on values for
    certain EBCDIC characters - vertical bar, for instance.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sun Jul 13 09:49:56 2025
    Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear >>> (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    That's partly because they couldn't even settle on values for
    certain EBCDIC characters - vertical bar, for instance.

    Or, famously, square brackets.
    But what really p'd me off is that for the characters they couldn't decide
    on, they translated multiple different characters to the same character,
    making their mistakes impossible to recover from.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Niklas Karlsson@3:633/280.2 to All on Sun Jul 13 20:50:02 2025
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character. Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    Niklas
    --
    Must confess, i'd feel just a little bit
    conspicuous ordering a large Espresso and an enema-bag at my local coffee-house...
    -- Tanuki in asr

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Department of Redundancy Department (3:633/280.2@fidonet)
  • From Nuno Silva@3:633/280.2 to All on Sun Jul 13 21:19:32 2025
    On 2025-07-13, Niklas Karlsson wrote:

    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character.
    Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) trigraph syntax and also iso646.h for such
    cases,
    but how would e.g. shell scripting be handled?
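    For the C side at least, iso646.h (added to C in the 1995 amendment)
    only covers operator spellings, not the bracket punctuators; a minimal
    sketch of what it does and doesn't buy you:

        #include <iso646.h>
        #include <stdio.h>

        int main(void) {
            int a = 1, b = 0;
            /* "and", "or", "not" are standard macros for &&, ||, ! */
            if (a and not b)
                printf("no | or & needed on the keyboard\n");
            /* ...but there is no iso646.h spelling for [ ] { },
               so those still needed trigraphs or digraphs. */
            return 0;
        }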

    --
    Nuno Silva

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Niklas Karlsson@3:633/280.2 to All on Mon Jul 14 00:18:03 2025
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?

    Couldn't say. I came in a little too late to really have to butt heads
    with that issue.

    Niklas
    --
    "Some people think that noise abatement should be a higher priority for ATC. I say safety is noise abatement. You have no idea how much noise it makes to have a 737 fall out of the sky after an accident." -- anon. air traffic controller

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Department of Redundancy Department (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Mon Jul 14 00:46:05 2025
    On 7/13/25 03:50, Niklas Karlsson wrote:
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character.
    Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.


    7-bit ASCII never made much sense to me. Why didn't they go right to 8?
    7-bit characters only would have made sense on a computer with a 14 or
    28-bit word size.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Mon Jul 14 00:58:55 2025
    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?

    Couldn't say. I came in a little to late to really have to butt heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    :   colon                      ..
    ;   semicolon                  ,.
    &   and                        AND
    |   or                         OR
    ¬   not                        NOT
    >   greater than               GT
    <   less than                  LT
    _   underscore                 (no equivalent)
    %   percent                    //
    >=  greater than or equal to   GE
    <=  less than or equal to      LE
    ¬=  not equal to               NE
    ¬<  not less than              NL
    ¬>  not greater than           NG
    ||  concatenation              CAT

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Mon Jul 14 01:24:29 2025
    Reply-To: slp53@pacbell.net

    Niklas Karlsson <nikke.karlsson@gmail.com> writes:
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character.
    Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment, >making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    Ah, but there were always trigraphs. Sadly they weren't much prettier.

    '??(' and '??)'.
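    For reference, C89 defined nine trigraphs in all; a minimal
    illustration (it needs a compiler that still honours trigraphs,
    e.g. gcc -trigraphs, since C23 finally removed them):

        /* The nine trigraphs:  ??=  #    ??(  [    ??)  ]
                                ??<  {    ??>  }    ??!  |
                                ??-  ~    ??'  ^    ??/  backslash  */
        ??=include <stdio.h>

        int main(void)
        ??<
            int a??(3??) = ??<1, 2, 3??>;       /* int a[3] = {1, 2, 3}; */
            printf("%d??/n", a??(0??) ??! 4);   /* prints a[0] | 4, i.e. 5 */
            return 0;
        ??>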

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Dennis Boone@3:633/280.2 to All on Mon Jul 14 02:06:49 2025
    7-bit ASCII never made much sense to me. Why didn't they go right to 8? 7-bit characters only would have made sense on a computer with a 14 or 28-bit word size.

    Parity.

    De

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 14 08:02:30 2025
    On 13 Jul 2025 10:50:02 GMT, Niklas Karlsson wrote:

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, making C code look absolutely ridiculous.

    This is why other languages, of the time and even later, were more
    conservative in their character-set requirements. Ada doesn’t even require
    square brackets; it uses parentheses for both array subscripting and
    function calls.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 14 08:17:41 2025
    On Sun, 13 Jul 2025 16:06:49 +0000, Dennis Boone wrote:

    7-bit ASCII never made much sense to me. Why didn't they go right to 8?

    Parity.

    Also, a larger character set would likely have meant more expensive
    hardware to input/display it. Think of line printers with all their
    characters on those drums/chains. Dot-matrix printers were more flexible,
    but there was still the keyboard problem.

    And another point: subsets of ASCII could be mapped back and forth with existing even more restricted character sets, like ones with only six bits
    per character.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Don Poitras@3:633/280.2 to All on Mon Jul 14 08:56:27 2025
    Scott Lurndal <scott@slp53.sl.home> wrote:
    Niklas Karlsson <nikke.karlsson@gmail.com> writes:
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character. >> Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as "?? ?? ?? ?? ?? ??" on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    Ah, but there were always trigraphs. Sadly they weren't much prettier.

    '??(' and '??)'.

    SAS/C (C compiler written for IBM mainframes after Lattice C was
    purchased by SAS in 1987) introduced 'di-graphs':

    (| and |)

    Looked a little nicer.
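    The digraphs C99 later standardized use a different spelling again;
    a minimal sketch (standard C99, not the SAS/C forms):

        /* C99 digraphs:  <:  [    :>  ]    <%  {    %>  }    %:  #
           Unlike trigraphs, they are tokens only and are never
           substituted inside string literals. */
        %:include <stdio.h>

        int main(void)
        <%
            int a<:3:> = <%1, 2, 3%>;   /* int a[3] = {1, 2, 3}; */
            printf("%d\n", a<:1:>);     /* prints 2 */
            return 0;
        %>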

    --
    Don Poitras

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Home (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Mon Jul 14 11:11:51 2025
    On Sun, 13 Jul 2025 22:56:27 -0000 (UTC), Don Poitras wrote:

    SAS/C (C compiler written for IBM mainframes after Lattice C was
    purchased by Sas in 1987) introduced 'di-graphs':

    (| and |)

    Looked a little nicer.

    Those do look quite reasonable. I think BCPL used $( and $) for statement
    brackets. I was expecting Unicode to allow for more bracketing symbols,
    but apart from « and » (and of course typographic quotes), I can’t find
    much else.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Mon Jul 14 19:40:28 2025
    On 2025-07-13, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 13 Jul 2025 16:06:49 +0000, Dennis Boone wrote:

    7-bit ASCII never made much sense to me. Why didn't they go right to 8?

    Parity.

    Also, a larger character set would likely have meant more expensive
    hardware to input/display it. Think of line printers with all their characters on those drums/chains.

    Enabling lower case - or even going from a 48- to a 64-character
    set - required purchasing a separate drum/chain/band (many Univac
    printers used an embossed metal band that looked like a band saw
    blade). And the larger the character set, the fewer copies you could
    fit on the band/chain/drum, which reduced the number of lines per
    minute that the printer could print. (I never saw a drum printer
    on which you could change drums - that would have been too much
    hassle both for the hardware designer and the operator.)

    One shop I worked in had both 48- and 63-character bands for their
    printer. They thought that they could mount a 63-character band
    for jobs that needed it, while using a 48-character band for
    everything else. The 48-character band allowed faster printing,
    since the character subset passed the paper in a smaller fraction
    of the time it took the band to make a complete revolution. As I
    predicted, though, they soon realized that the time spent while the
    operator changed bands (especially if he had just left for coffee
    when a band change request came up) more than offset the time saved
    by using the 48-character set - and that it was faster in the long
    run to just leave the 63-character band in place all the time.

    In the mainframe world, lower case was generally held in low regard.
    The myth was that anything not in all caps didn't look appropriately computerish. This myth survived for decades afterwards.

    Dot-matrix printers were more flexible, but there was still the
    keyboard problem.

    Univac's terminals (e.g. Uniscope 200) had a couple of secret jumpers
    (i.e. you paid extra for the CE to come out and move them). One of
    them enabled display of lower-case characters, while another (soldered
    in, unfortunately) allowed the keyboard to enter lower case. The later
    UTS-400 had a jumper in the keyboard to allow those who knew about it
    to easily change the option - but there were trade-offs even there.

    And another point: subsets of ASCII could be mapped back and forth with existing even more restricted character sets, like ones with only six bits per character.

    Still, the lozenge never really made it over into 8-bit codes.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Tue Jul 15 00:19:47 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Sun, 13 Jul 2025 16:06:49 +0000, Dennis Boone wrote:

    7-bit ASCII never made much sense to me. Why didn't they go right to 8?

    Parity.

    Also, a larger character set would likely have meant more expensive
    hardware to input/display it. Think of line printers with all their >characters on those drums/chains.

    What should one think about? A print train/chain/band for that
    generation would repeat the character set (48, 64 or 96 characters)
    several times on the chain/band/train. The only added expense
    was in the time dimension since the printer may need to wait a bit
    longer for the desired character to be under a hammer.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 15 00:40:00 2025
    On 7/14/25 02:40, Charlie Gibbs wrote:
    ....

    One shop I worked in had both 48- and 63-character bands for their
    printer. They thought that they could mount a 63-character band
    for jobs that needed it, while using a 48-character band for
    everything else. The 48-character band allowed faster printing,
    since the character subset passed the paper in a smaller fraction
    of the time it took the band to make a complete revolution. As I
    predicted, though, they soon realized that the time spent while the
    operator changed bands (especially if he had just left for coffee
    when a band change request came up) more than offset the time saved
    by using the 48-character set - and that it was faster in the long
    run to just leave the 63-character band in place all the time.


    In order for this to have a chance of working you'd have to establish different SYSOUT classes (print queues, or whatever) for jobs using the 48-character set vs. 64-character set, and only change once a shift or
    so, which would mean that the less-favored jobs would have to wait.

    If you had some huge job, say general ledger or inventory, that used
    multiple boxes of paper and didn't need lower-case, you might want to
    reserve a class for that and print it off-shift, and otherwise keep the
    slower band in all the time.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Tue Jul 15 06:36:19 2025
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>> making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had
    already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Tue Jul 15 06:53:30 2025
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/14/25 02:40, Charlie Gibbs wrote:
    ...
    One shop I worked in had both 48- and 63-character bands for their
    printer. They thought that they could mount a 63-character band
    for jobs that needed it, while using a 48-character band for
    everything else. The 48-character band allowed faster printing,
    since the character subset passed the paper in a smaller fraction
    of the time it took the band to make a complete revolution. As I
    predicted, though, they soon realized that the time spent while the
    operator changed bands (especially if he had just left for coffee
    when a band change request came up) more than offset the time saved
    by using the 48-character set - and that it was faster in the long
    run to just leave the 63-character band in place all the time.


    In order for this to have a chance of working you'd have to establish different SYSOUT classes (print queues, or whatever) for jobs using
    the 48-character set vs. 64-character set, and only change once a
    shift or so, which would mean that the less-favored jobs would have to
    wait.

    If you had some huge job, say general ledger or inventory, that used
    multiple boxes of paper and didn't need lower-case, you might want to
    reserve a class for that and print it off-shift, and otherwise keep
    the slower band in all the time.

    I believe the UCS FOLD option dealt with that.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Wade@3:633/280.2 to All on Tue Jul 15 07:02:13 2025
    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>> and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like >>>> system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.

    Dave


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 15 07:37:42 2025
    On Mon, 14 Jul 2025 09:40:28 GMT, Charlie Gibbs wrote:

    In the mainframe world, lower case was generally held in low regard. The
    myth was that anything not in all caps didn't look appropriately
    computerish. This myth survived for decades afterwards.

    I read somewhere that, when AT&T engineers were designing the first
    teletypes, they had room to include either uppercase letters or lowercase,
    but not both. Executives decided that entire uppercase was preferable to entire lowercase, solely because “god” seemed like a less respectful way of writing the name (or was it occupation?) of their favourite deity than “GOD”.

    I have no idea if this story is credible or not ...

    When I discovered that the DEC systems (including language compilers) I
    was using as an undergrad were case-insensitive, and that I could write Fortran code in lowercase or even mixed case if I wanted, some other
    people did look at me a little strangely ...

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Tue Jul 15 07:49:30 2025
    On Mon, 14 Jul 2025 21:37:42 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    I read somewhere that, when AT&T engineers were designing the first
    teletypes, they had room to include either uppercase letters or
    lowercase, but not both. Executives decided that entire uppercase was
    preferable to entire lowercase, solely because “god” seemed like a
    less respectful way of writing the name (or was it occupation?) of
    their favourite deity than “GOD”.

    I have no idea if this story is credible or not ...

    You never know with these things, but it seems far likelier that this
    has to do with historical precedent; uppercase has been the "default" letterform in Latin alphabets since antiquity (the capital letters came
    first and minuscule forms were invented later) and teletypes would have
    a more specific precedent in telegraphy, where telegrams were normally
    written out in uppercase (AFAIK, no telegraph code ever bothered to
    feature case distinction.)


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dennis Boone@3:633/280.2 to All on Tue Jul 15 11:10:23 2025
    I can't fathom why trigraphs were considered an acceptable solution.

    They aren't? :)

    https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2940.pdf

    De

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 15 11:29:44 2025
    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 15 12:56:56 2025
    On 7/14/25 14:02, David Wade wrote:
    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the >>>>>> modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>>> and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted
    equipment,
    making C code look absolutely ridiculous. Similar conventions applied >>>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>> long enough to figure out how to handle day-to-day usage of a UNIX- >>>>> like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>> but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had
    already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some other characters, or use tri-graphs.

    By golly, you're right. The 3278 APL keyboard had them. We used 3290s
    with the APL keyboard; great piece of gear.



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 15 12:57:49 2025
    On 7/14/25 14:14, Scott Lurndal wrote:
    Dan Espen <dan1espen@gmail.com> writes:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the >>>>>> modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>>> and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>> long enough to figure out how to handle day-to-day usage of a UNIX-like >>>>> system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>> but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had
    already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    Not many keypunches had a square bracket key. Granted, if one were
    skilled on the keypunch, one could synthesize any Hollerith sequence;
    so assuming one knew how the hardware translated the Hollerith into
    EBCDIC (and the C compiler used the same EBCDIC character), one
    could punch a square bracket, albeit rather painfully. Trigraphs
    were much more convenient.

    I got pretty good at multi-punching at one time in the long ago.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 15 13:00:27 2025
    On 7/14/25 14:37, Lawrence D'Oliveiro wrote:
    On Mon, 14 Jul 2025 09:40:28 GMT, Charlie Gibbs wrote:

    In the mainframe world, lower case was generally held in low regard. The
    myth was that anything not in all caps didn't look appropriately
    computerish. This myth survived for decades afterwards.

    I read somewhere that, when AT&T engineers were designing the first teletypes, they had room to include either uppercase letters or lowercase, but not both. Executives decided that entire uppercase was preferable to entire lowercase, solely because “god” seemed like a less respectful way of writing the name (or was it occupation?) of their favourite deity than “GOD”.

    I have no idea if this story is credible or not ...

    When I discovered that the DEC systems (including language compilers) I
    was using as an undergrad were case-insensitive, and that I could write Fortran code in lowercase or even mixed case if I wanted, some other
    people did look at me a little strangely ...

    PL/I is case-insensitive also - anything but PL/I(F) on OS/360.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Tue Jul 15 13:01:48 2025
    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 15 15:59:13 2025
    On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    But C avoided using meaningful names for that kind of thing.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Nuno Silva@3:633/280.2 to All on Tue Jul 15 17:39:50 2025
    On 2025-07-15, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    That seems to be what iso646.h allows. The problem is then the other uses
    of the unavailable characters. For those, perhaps it's better to have the
    trigraphs or something similar instead of a meaningful name that can be
    wrong in other contexts?

    --
    Nuno Silva

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Nuno Silva@3:633/280.2 to All on Tue Jul 15 18:03:03 2025
    On 2025-07-13, Don Poitras wrote:

    Scott Lurndal <scott@slp53.sl.home> wrote:
    Niklas Karlsson <nikke.karlsson@gmail.com> writes:
    On 2025-07-12, Dan Espen <dan1espen@gmail.com> wrote:

    Or, famously, square brackets.
    But what really p'd me off it that for characters they couldn't decide
    on, they translated multiple different characters to the same character. >> >> Making their mistakes impossible to recover from.

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as "?? ?? ?? ?? ?? ??" on Swedish-adapted equipment,

    A (sub)thread touching the topic of encodings and charsets having
    encoding problems? :-)

    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    Ah, but there were always trigraphs. Sadly they weren't much prettier.

    '??(' and '??)'.

    SAS/C (C compiler written for IBM mainframes after Lattice C was
    purchased by SAS in 1987) introduced 'di-graphs':

    (| and |)

    Looked a little nicer.

    But would be problematic for ISO-646, as "|" is one of the replaceable characters.

    --
    Nuno Silva

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Wade@3:633/280.2 to All on Tue Jul 15 18:06:50 2025
    On 15/07/2025 03:56, Peter Flass wrote:
    On 7/14/25 14:02, David Wade wrote:
    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the >>>>>>> modified 7-bit ASCII that was used to write Swedish before ISO
    8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted
    equipment,
    making C code look absolutely ridiculous. Similar conventions
    applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>>> long enough to figure out how to handle day-to-day usage of a
    UNIX- like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>>> but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had
    already accepted TRIGRAPHS. I found that totally unacceptable. It took >>> me a month or 2 to find a 3270 emulator that I could patch up to finally >>> be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.

    By golly, you're right. The 3278 APL keyboard had them. We used 3290s
    with the APL keyboard; great piece of gear.


    This also means that two standards evolved for representing them in
    EBCDIC. I believe the Universities came up with one, and when IBM added
    them to later terminals it used different ones...
    ... I worked on coloured book software on IBM VM

    https://en.wikipedia.org/wiki/Coloured_Book_protocols

    ... always a problem

    Dave

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 15 18:35:52 2025
    On Tue, 15 Jul 2025 09:06:50 +0100, David Wade wrote:

    ... I worked on coloured book software on IBM VM

    The only colour I remember is “Grey Book” for email. Oh, and JANET had the domain components the other way round, didn’t it, e.g. uk.ac.ic.src.

    Do copies/scans of those books survive anywhere? It seems to me those
    specs would be of valuable historical interest nowadays.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Jul 16 00:06:09 2025
    Reply-To: slp53@pacbell.net

    Peter Flass <Peter@Iron-Spring.com> writes:
    On 7/14/25 14:14, Scott Lurndal wrote:
    Dan Espen <dan1espen@gmail.com> writes:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the >>>>>>> modified 7-bit ASCII that was used to write Swedish before ISO 8859-1 >>>>>>> and later Unicode made it big.

    "} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied >>>>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for >>>>>> long enough to figure out how to handle day-to-day usage of a UNIX-like >>>>>> system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases, >>>>>> but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads
    with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers had
    already accepted TRIGRAPHS. I found that totally unacceptable. It took >>> me a month or 2 to find a 3270 emulator that I could patch up to finally >>> be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    Not many keypunches had a square bracket key. Granted, if one were
    skilled on the keypunch, one can synthesize any hollerith sequence;
    so assuming one knew how the hardware translated the hollerith into
    EBCDIC (and the C compiler used the same EBCDIC character) they
    could punch a square bracket, albeit rather painfully. trigraphs
    were much more convenient.

    I got pretty good at multi-punching at one time in the long ago.


    Control cards on the Burroughs systems were designated by an
    invalid punch in column one. We generally punched 1-2-3
    for the invalid column followed by the command.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Wed Jul 16 00:21:12 2025
    On 7/14/25 22:59, Lawrence D'Oliveiro wrote:
    On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    But C avoided using meaningful names for that kind of thing.

    Not meaningful with the dots.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Wed Jul 16 00:34:05 2025
    On 7/15/25 07:02, Scott Lurndal wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:
    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:
    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    FORTRAN is not C. Trigraphs worked perfectly well,
    irrespective of your personal feelings. Ugly, perhaps,
    but not as ugly as .OR.

    I don't have any feelings one way or the other, because I never used
    them, or had a need to. What I know is the number of people complaining
    about them. I am more familiar with the 48-character set in PL/I, and I
    know I hated that. I was converting a very old program recently, and the
    first thing I did was mass-change all of them.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Wed Jul 16 04:49:32 2025
    Reply-To: slp53@pacbell.net

    rbowman <bowman@montana.com> writes:
    On Mon, 14 Jul 2025 19:56:56 -0700, Peter Flass wrote:

    On 7/14/25 14:02, David Wade wrote:
    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the >>>>>>>> modified 7-bit ASCII that was used to write Swedish before ISO >>>>>>>> 8859-1 and later Unicode made it big.

    "} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted >>>>>>>> equipment,
    making C code look absolutely ridiculous. Similar conventions
    applied for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not >>>>>>> for long enough to figure out how to handle day-to-day usage of a >>>>>>> UNIX- like system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such
    cases,
    but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little to late to really have to butt
    heads with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"

    I go onto my first UNIX on mainframe project and all the developers
    had already accepted TRIGRAPHS.  I found that totally unacceptable.  >>>> It took me a month or 2 to find a 3270 emulator that I could patch up
    to finally be able to see and type square brackets.

    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with.  I dumped the binary, found >>>> the translate table and fixed it.

    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some other
    characters, or use tri-graphs.

    By golly, you're right. The 3278 APL keyboard had them. We used 3290s
    with the APL keyboard; great piece of gear.

    APL keyboards had many strange and wondrous characters... The IBM 5120 had
    a selector switch for BASIC or APL and had the APL character set, iirc on the front of the keycaps.

    On the 5110, the switch was on the face adjacent to the monitor, just
    above the 7/4/1 row on the numeric keypad side of the keyboard.

    I got to use one briefly in 1980.

    https://en.wikipedia.org/wiki/IBM_5110#/media/File:IBM_5110_computer_-_Ridai_Museum_of_Modern_Science,_Tokyo_-_DSC07664.JPG

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Wed Jul 16 13:59:20 2025
    On Tue, 15 Jul 2025 07:21:12 -0700, Peter Flass wrote:

    On 7/14/25 22:59, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    But C avoided using meaningful names for that kind of thing.

    Not meaningful with the dots.

    You think you can’t tell that “.OR.” came from “or”, and “.AND.” from
    “and”?

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Wed Jul 16 14:08:34 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.

    What would have been better?

    Digraphs. They give alternative spellings for the needed C tokens.
    Trigraphs apply everywhere, including in strings, and to lower the
    chance of an accidental match they are deliberately obscure. But
    substitution in strings is of limited use: before trigraphs there
    was already a way to include arbitrary characters in C strings.

    Of course, the best way is to have all characters on the keyboard.
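
    A minimal C sketch of the contrast (assuming a pre-C23 compiler with
    trigraphs enabled, since trigraphs were removed in C23): trigraphs are
    replaced anywhere in the source, even inside string literals, while
    digraphs and the <iso646.h> macros only provide alternative spellings
    of tokens.

        #include <stdio.h>
        #include <iso646.h>             /* and, or, not, ... as macros */

        int main(void)
        {
            /* Digraph spellings of [ ] { }: recognized as tokens only. */
            int v<:2:> = <% 10, 20 %>;  /* int v[2] = { 10, 20 }; */

            /* The trigraph ??/ becomes \ even inside a string, so this
               format string ends in a newline when trigraphs are on. */
            printf("%d %d??/n", v<:0:>, not 0 and (v<:1:> or 0));
            return 0;
        }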

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Stefan Ram@3:633/280.2 to All on Wed Jul 16 21:27:36 2025
    antispam@fricas.org (Waldek Hebisch) wrote or quoted:
    Digraphs. They give alternative spelling for needed C tokens.
    Trigraphs apply everwhere, including strings and to lower chance
    of accidental match they are deliberatly obscure.

    Right, so even in TeX, you've got these triple combos like "^^@" -
    and that one, for example, just maps to the character with code zero,
    since the code for "@" is 64 and you're basically subtracting 64 here.

    There's a whole set of rules for this stuff, though, so any character
    can end up with a "^^" escape version. This gets handled super early
    in the input process, so if you've ever run into a missing symbol on
    your keyboard, this trick can actually sort that out. "^^M" works out
    to a carriage return, and "^^I" throws in a tab.
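
    A small C sketch of that rule for 7-bit codes: adding or subtracting
    64 is just flipping one bit, i.e. an XOR with 64.

        #include <stdio.h>

        /* TeX's ^^X form for 7-bit input: the resulting code differs
           from X's code by 64, which is an XOR with 64. */
        static int caret_caret(int x) { return x ^ 64; }

        int main(void)
        {
            printf("^^@ -> %d\n", caret_caret('@'));  /* 0  (NUL)             */
            printf("^^M -> %d\n", caret_caret('M'));  /* 13 (carriage return) */
            printf("^^I -> %d\n", caret_caret('I'));  /* 9  (tab)             */
            return 0;
        }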



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Stefan Ram (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Thu Jul 17 00:54:01 2025
    On 7/15/25 20:59, Lawrence D'Oliveiro wrote:
    On Tue, 15 Jul 2025 07:21:12 -0700, Peter Flass wrote:

    On 7/14/25 22:59, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable solution.
    What would have been better?

    FORTRAN used .OR., .AND., etc.

    But C avoided using meaningful names for that kind of thing.

    Not meaningful with the dots.

    You think you can’t tell that “.OR.” came from “or”, and “.AND.” from
    “and”?

    Of course. What I meant was "not otherwise significant to the parser,"
    so not confusable with anything else.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Jul 17 11:50:35 2025
    On Wed, 16 Jul 2025 07:54:01 -0700, Peter Flass wrote:

    On 7/15/25 20:59, Lawrence D'Oliveiro wrote:

    On Tue, 15 Jul 2025 07:21:12 -0700, Peter Flass wrote:

    On 7/14/25 22:59, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 20:01:48 -0700, Peter Flass wrote:

    On 7/14/25 18:29, Lawrence D'Oliveiro wrote:

    On Mon, 14 Jul 2025 16:36:19 -0400, Dan Espen wrote:

    I can't fathom why trigraphs were considered an acceptable
    solution.

    What would have been better?

    FORTRAN used .OR., .AND., etc.

    But C avoided using meaningful names for that kind of thing.

    Not meaningful with the dots.

    You think you can’t tell that “.OR.” came from “or”, and “.AND.” from
    “and”?

    Of course. What I meant was "not otherwise significant to the parser,"
    so not confusable with anything else.

    Will cause trouble in C, because “.” already means something else.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Fri Jul 18 02:53:35 2025
    David Wade <g4ugm@dave.invalid> writes:

    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little too late to really have to butt
    heads with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"
    I got onto my first UNIX on mainframe project and all the developers
    had already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.
    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.
    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.

    Did the 3178 come with an APL feature?

    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Fri Jul 18 03:46:08 2025
    On 2025-07-17, Dan Espen <dan1espen@gmail.com> wrote:

    Did the 3178 come with an APL feature?

    In my university days, APL was done on a 2741 with a
    custom typeball.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Fri Jul 18 03:49:35 2025
    On 7/17/25 09:53, Dan Espen wrote:
    David Wade <g4ugm@dave.invalid> writes:

    On 14/07/2025 21:36, Dan Espen wrote:
    Peter Flass <Peter@Iron-Spring.com> writes:

    On 7/13/25 07:18, Niklas Karlsson wrote:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as " " on Swedish-adapted equipment, >>>>>>> making C code look absolutely ridiculous. Similar conventions applied >>>>>>> for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?
    Couldn't say. I came in a little too late to really have to butt
    heads with that issue.


    That's why C had trigraphs. PL/I(F) did the same thing with its
    "48-character set"
    I got onto my first UNIX on mainframe project and all the developers
    had already accepted TRIGRAPHS. I found that totally unacceptable. It took
    me a month or 2 to find a 3270 emulator that I could patch up to finally
    be able to see and type square brackets.
    To IBM's credit I used IBM's internally used 3270 emulator (MITE I
    believe) with some patches I came up with. I dumped the binary, found
    the translate table and fixed it.
    I can't fathom why trigraphs were considered an acceptable solution.

    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.

    Did the 3178 come with an APL feature?

    I don't think so. I looked up the 3278 first, and then went to the 3178.
    I suppose it may have been an RPQ.


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    FSVO "quickly" I started working with 3270s when they were relatively
    new - early 70s probably, and PPOE still had a few 3290s around when I
    left around 2010. (I was almost the last holdout, I refused to give up
    mine when most people went to emulators, although I also had a PC for up/downloads, etc.) A run of 40+ years in this business ain't bad.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From David Wade@3:633/280.2 to All on Fri Jul 18 07:31:23 2025

    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.

    Did the 3178 come with an APL feature?


    Not unless you paid a lot of money. In those times every mod was an
    expensive extra, even if it was a link of wire..


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    I think you were late on the scene. I started on 2260's which date from
    1964. The IBM PC wasn't released until 1981, some 17 years later. 3270 emulation didn't happen until I think a couple of years later, so almost
    20 years after the first terminals. Yes they quickly replaced terminals
    once they were available, but they were around for a long time...

    Dave

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Sat Jul 19 04:23:23 2025
    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    It is hard to say what the technical problems with ASCII were.
    BCD gear used properties of BCD, so rewiring it for ASCII
    could require some effort. But it does not look like a
    big effort. So they probably could have announced ASCII before
    the I/O equipment was fully ready (after all, they announced
    before they had working systems and did not ship some
    of what was announced).

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Sat Jul 19 04:42:16 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 14 Jul 2025 09:40:28 GMT, Charlie Gibbs wrote:

    In the mainframe world, lower case was generally held in low regard. The
    myth was that anything not in all caps didn't look appropriately
    computerish. This myth survived for decades afterwards.

    I read somewhere that, when AT&T engineers were designing the first teletypes, they had room to include either uppercase letters or lowercase, but not both. Executives decided that entire uppercase was preferable to entire lowercase, solely because “god” seemed like a less respectful way of writing the name (or was it occupation?) of their favourite deity than “GOD”.

    I have no idea if this story is credible or not ...

    Before computer equipment there was a long tradition of telegraphic
    equipment and punched-card machines using only upper case. Also,
    I think that the earliest typewriters were upper case only. So for
    early computer use upper case only was a no-brainer.

    Concerning upper case on the earliest equipment, most people would
    be upset seeing their names in lower case. Also, lower case
    letter shapes are more complicated, so upper case is more robust
    to low quality print (say due to wear of the printing mechanism,
    the ink used, etc).

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Sat Jul 19 05:02:59 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 7 Jul 2025 16:10:25 -0000 (UTC), Waldek Hebisch wrote:

    Endianness matters for character/digit addressable machines.

    I thought such machines always stored the digits in order of ascending significance, because it didn’t make sense to do it the other way.

    I think that bit/digit serial machines did arithmetic starting from the
    lowest digit. But early computer equipment needed to cooperate with
    punched card equipment, that is, accept a mixture of character and
    numeric data written in English writing order.

    Concerning sense, early equipment did various interesting things.
    The 1401 did arithmetic starting from the highest address digit and going
    to lower addresses (printing and I/O in general worked in natural
    order). Some machines loaded a variable length number from memory
    to registers before doing arithmetic.

    BTW: One of the early Polish machines used base -2, which meant that,
    say, 8-bit numbers would have a range from -170 to 85 (9-bit ones
    would have a range from -170 to 341).
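
    Those ranges follow from the digit weights: in base -2, digit i has
    weight (-2)^i, so for 8 bits the maximum is 1+4+16+64 = 85 and the
    minimum is -(2+8+32+128) = -170. A small C sketch of the evaluation
    (illustrative only; not any particular machine's format):

        #include <stdio.h>

        /* Value of an n-digit base -2 (negabinary) number: digit i
           contributes d_i * (-2)^i. */
        static long negabinary_value(unsigned bits, int n)
        {
            long value = 0, weight = 1;        /* weight = (-2)^i */
            for (int i = 0; i < n; i++) {
                if (bits & (1u << i))
                    value += weight;
                weight *= -2;
            }
            return value;
        }

        int main(void)
        {
            printf("%ld\n", negabinary_value(0x55, 8));   /* 01010101 -> 85   */
            printf("%ld\n", negabinary_value(0xAA, 8));   /* 10101010 -> -170 */
            printf("%ld\n", negabinary_value(0x155, 9));  /* 9 bits   -> 341  */
            return 0;
        }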

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Sat Jul 19 06:46:37 2025
    On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so lower
    than the corresponding binary numbers. And actual core had extra space
    for use by microcode. So it does not look like a big problem.

    I worked on a mainframe that supported both ASCII and EBCDIC. There was a
    mode bit which selected which it would use.

    The difference was conversion from decimal nibbles to normal bytes, in
    that different zone bits were used.


    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Rich Alderson@3:633/280.2 to All on Sat Jul 19 07:17:09 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    It is hard to say what the technical problems with ASCII were.
    BCD gear used properties of BCD, so rewiring it for ASCII
    could require some effort. But it does not look like a
    big effort. So they probably could have announced ASCII before
    the I/O equipment was fully ready (after all, they announced
    before they had working systems and did not ship some
    of what was announced).

    In addition to any technical problem, there was the political problem created by IBM's version of 8-bit ASCII vs. the rest of the industry's version.

    Instead of adding a high order bit to the 7-bit code, IBM wanted to put the extra bit in position 5 (counting from the right), thus splitting the defined and undefined characters into "stripes" in the table. I have no idea why they thought this was a good idea, but the rest of the industry said FOAD, and the rest, as is said, is history.
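
    As a sketch of what that would do to the code table (reading "position
    5 counting from the right" as bit 4 in 0-based terms; that reading is
    an assumption), splicing a zero bit in there leaves each run of 16
    consecutive 7-bit codes in place but pushes the next run 16 codes up,
    producing the alternating defined/undefined "stripes":

        #include <stdio.h>

        /* Splice a zero bit into bit position 4 of a 7-bit code. */
        static unsigned splice_bit(unsigned c7)
        {
            unsigned low  = c7 & 0x0F;          /* bits 0-3 stay put */
            unsigned high = (c7 & 0x70) << 1;   /* bits 4-6 move up  */
            return high | low;                  /* new bit 4 is 0    */
        }

        int main(void)
        {
            /* 0x0F maps to 0x0F, but 0x10 maps to 0x20: a gap opens. */
            for (unsigned c = 0x0E; c <= 0x12; c++)
                printf("0x%02X -> 0x%02X\n", c, splice_bit(c));
            return 0;
        }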

    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: PANIX Public Access Internet and UNIX, NYC (3:633/280.2@fidonet)
  • From John Ames@3:633/280.2 to All on Sat Jul 19 08:07:33 2025
    On 18 Jul 2025 17:17:09 -0400
    Rich Alderson <news@alderson.users.panix.com> wrote:

    Instead of adding a high order bit to the 7-bit code, IBM wanted to
    put the extra bit in position 5 (counting from the right), thus
    splitting the defined and undefined characters into "stripes" in the
    table. I have no idea why they thought this was a good idea, but the
    rest of the industry said FOAD, and the rest, as is said, is history.

    Good Lord, that hurts just to *think* about. The only potential
    justification that springs to mind is that ASCII is already more-or-less
    divided into 32-character blocks (control characters,
    numerals/punctuation, uppercase & lowercase letters) and they might've
    thought that they'd rather extend those blocks than tack on additional
    ones - but the division was *already* imperfect (English being
    considerably short of 32 letters, extra punctuation crept into the free
    spaces), and any scheme that'd involve *that* much breakage should've
    been binned right out of the gate.

    *Gah.* Add that to "Russia wins the Cold War" and "Biff Tannen becomes President" on the list of possible timelines we can all be extremely
    grateful we'll never have to deal with...


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Sat Jul 19 08:27:21 2025
    Reply-To: slp53@pacbell.net

    antispam@fricas.org (Waldek Hebisch) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 7 Jul 2025 16:10:25 -0000 (UTC), Waldek Hebisch wrote:

    Endianness matter for character/digit addresable machines.

    I thought such machines always stored the digits in order of ascending
    significance, because it didn’t make sense to do it the other way.

    I think that bit/digit serial machines did arithmetic starting from the
    lowest digit. But early computer equipment needed to cooperate with
    punched card equipment, that is, accept a mixture of character and
    numeric data written in English writing order.

    The Burroughs B3500 did arithmetic starting at the most significant
    digit, which allowed the detection of overflow before updating the
    receiving (sum) field in memory. The most significant digit would be
    at the lowest address. The least significant digit would be at
    address + fieldlen - 1, where the field length ranged from 1 to 100
    and was encoded into the instruction. The addend and augend could
    differ in length; the receiving field was the larger of the addend
    and augend operands. It could operate on 4-bit data, or on 8-bit data,
    automatically setting the zone digits to either 0x3 or 0xf depending
    on the processor state ASCII flag.

    The algorithm is described in 1025475_B2500_B3500_RefMan_Oct69.pdf on
    bitsavers, p. 51.


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Sat Jul 19 09:23:12 2025
    On 2025-07-18, Rich Alderson <news@alderson.users.panix.com> wrote:

    In addition to any technical problem, there was the political problem created by IBM's version of 8-bit ASCII vs. the rest of the industry's version.

    Instead of adding a high order bit to the 7-bit code, IBM wanted to put the extra bit in position 5 (counting from the right), thus splitting the defined and undefined characters into "stripes" in the table. I have no idea why they
    thought this was a good idea, but the rest of the industry said FOAD, and the rest, as is said, is history.

    If you look at an EBCDIC code chart, you can sort of see what they were thinking of. Special characters were in the range 0x40 through 0x7f, lower-case letters were in the range 0x81-0xa9, upper-case letters
    were in the range 0xc1-0xe9, and numerics were in the range 0xf0-0xf9.
    Bit 5 (known as bit 2 in IBM parlance) split these groups up nicely,
    so IBM probably figured they could muck with ASCII in the same way.
    Or perhaps they were trying to make it so cumbersome that nobody
    would bother trying to use it. To quote Ted Nelson's _Computer Lib_:

    ASCII and ye shall receive. -- the industry
    ASCII not, what your machine can do for you. -- IBM
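
    For a concrete sense of those ranges and the gaps inside them (EBCDIC
    letters sit at a-i = 0x81-0x89, j-r = 0x91-0x99, s-z = 0xA2-0xA9, per
    the standard code chart), a small C sketch of why an EBCDIC lower-case
    test takes three range checks where ASCII needs one:

        #include <stdio.h>

        /* Lower-case test for EBCDIC: three ranges because of the
           gaps after 'i' and 'r'. */
        static int is_ebcdic_lower(unsigned c)
        {
            return (c >= 0x81 && c <= 0x89) ||   /* a-i */
                   (c >= 0x91 && c <= 0x99) ||   /* j-r */
                   (c >= 0xA2 && c <= 0xA9);     /* s-z */
        }

        int main(void)
        {
            /* 0x81 is 'a'; 0x8B falls in the gap between 'i' and 'j'. */
            printf("%d %d\n", is_ebcdic_lower(0x81), is_ebcdic_lower(0x8B));
            return 0;
        }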

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Jason Howe@3:633/280.2 to All on Sat Jul 19 09:58:27 2025
    Reply-To: jason@smbfc.net

    On 10 Jul 2025 16:20:45 -0400, Rich Alderson <news@alderson.users.panix.com> wrote:
    scott@alfter.diespammersdie.us (Scott Alfter) writes:

    In article <md6n3pFgaflU8@mid.individual.net>,
    Bob Eager <news0009@eager.cx> wrote:

    Don't forget the ACT Sirius. A DOS machine that crammed more data onto a
    diskette by using a variable speed drive (5 speeds, I think).

    Apple used the same trick with its 3.5" floppy drives to fit 800K onto a
    disk that was only good for 720K elsewhere.

    And before the 800K floppy, there was the single-sided 400K floppy on the same
    controller.

    Aye, I really like the internal 400k floppy on my 128k Mac because you can hear the drive speeding up and slowing down depending on which region is being read.

    --Jason

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sat Jul 19 12:27:39 2025
    On Fri, 18 Jul 2025 18:42:16 -0000 (UTC), Waldek Hebisch wrote:

    Also, lower case letter shapes are more complicated, so upper case
    is more robust to low quality print ...

    Apparently we get more information from the upper parts of letters than
    from their lower parts. And lower-case letters have more variations in
    their upper parts. This makes them easier to distinguish, i.e. more
    readable.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sat Jul 19 12:29:13 2025
    On Fri, 18 Jul 2025 23:23:12 GMT, Charlie Gibbs wrote:

    ASCII not, what your machine can do for you. -- IBM

    .... “ASCII what you can do for your machine”.

    Sums up IBM equipment (and software) in a nutshell.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Sat Jul 19 12:30:36 2025
    On Fri, 18 Jul 2025 23:58:27 -0000 (UTC), Jason Howe wrote:

    Aye, I really like the internal 400k floppy on my 128k Mac because
    you can hear the drive speeding up and slowing down depending on
    which region is being read.

    Such a melodious sound ... soothing, even.

    What a pity it wasn’t around for long. Even videos on vintage channels of those particular machines seem to be rare.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Charlie Gibbs@3:633/280.2 to All on Sun Jul 20 03:49:03 2025
    On 2025-07-19, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 18 Jul 2025 23:23:12 GMT, Charlie Gibbs wrote:

    ASCII not, what your machine can do for you. -- IBM

    ... “ASCII what you can do for your machine”.

    Sums up IBM equipment (and software) in a nutshell.

    From the Personal Computer onward, perhaps.
    I think their mainframe systems (except Linux) still use EBCDIC.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sun Jul 20 05:16:03 2025
    David Wade <g4ugm@dave.invalid> writes:


    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.
    Did the 3178 come with an APL feature?


    Not unless you paid a lot of money. In those times every mod was an
    expensive extra, even if it was a link of wire..


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    I think you were late on the scene. I started on 2260's which date
    from 1964. The IBM PC wasn't released until 1981, some 17 years
    later. 3270 emulation didn't happen until I think a couple of years
    later, so almost 20 years after the first terminals. Yes they quickly replaced terminals once they were available, but they were around for
    a long time...

    Me, late on the scene?

    I started programming in 1964 on IBM 14xx in Autocoder.
    Did my first 2260 project using BTAM and assembler in 1968.

    Among my favorite 327xs were the 3279 color terminals. Great keyboards
    on those things. Looking back there was the punched card era, the 3270
    era, then the 327x emulator era. I think I put in more years in the
    emulator era than in the real terminal era.


    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sun Jul 20 05:27:12 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.
    ASCII had a different collating sequence. It's not a translation issue.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Sun Jul 20 05:28:53 2025
    Bob Eager <news0009@eager.cx> writes:

    On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    I worked on a mainframe that supported both ASCII and EBCDIC. There was a mode bit which selected which it would use.

    The difference was conversion from decimal nibbles to normal bytes, in
    that different zone bits were used.

    Every 360 had an ASCII bit. That bit took quite a while to disappear
    from the PSW. Never saw anyone attempt to turn it on.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Peter Flass@3:633/280.2 to All on Sun Jul 20 06:12:32 2025
    On 7/19/25 12:28, Dan Espen wrote:
    Bob Eager <news0009@eager.cx> writes:

    On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    I worked on a mainframe that supported both ASCII and EBCDIC. There was a
    mode bit which selected which it would use.

    The difference was conversion from decimal nibbles to normal bytes, in
    that different zone bits were used.

    Every 360 had an ASCII bit. That bit took quite a while to disappear
    from the PSW. Never saw anyone attempt to turn it on.


    It never did anything. Its only defined effect was to change the signs generated for packed-decimal data. I don't know what IBM was thinking.
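
    A tiny sketch of that one architected effect, with the nibble values
    as I recall them from the S/360 Principles of Operation (treat them as
    an assumption to check against the manual): in EBCDIC mode the decimal
    instructions generate C/D sign nibbles and F zones, while with the PSW
    ASCII bit set they generate A/B signs and 5 zones.

        #include <stdio.h>

        int main(void)
        {
            for (int ascii_mode = 0; ascii_mode <= 1; ascii_mode++) {
                unsigned plus  = ascii_mode ? 0xA : 0xC;  /* preferred + */
                unsigned minus = ascii_mode ? 0xB : 0xD;  /* preferred - */
                unsigned zone  = ascii_mode ? 0x5 : 0xF;  /* UNPK zone   */
                printf("mode %d: +%X -%X zone %X\n",
                       ascii_mode, plus, minus, zone);
            }
            return 0;
        }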

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Bob Eager@3:633/280.2 to All on Sun Jul 20 07:08:55 2025
    On Sat, 19 Jul 2025 15:28:53 -0400, Dan Espen wrote:

    Bob Eager <news0009@eager.cx> writes:

    On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    I worked on a mainframe that supported both ASCII and EBCDIC. There was
    a mode bit which selected which it would use.

    The difference was conversion from decimal nibbles to normal bytes, in
    that different zone bits were used.

    Every 360 had an ASCII bit. That bit took quite a while to disappear
    from the PSW. Never saw anyone attempt to turn it on.

    This was the ICL 2900 series. Also used the IBM hex floating point format.

    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Sun Jul 20 07:45:15 2025
    Reply-To: slp53@pacbell.net

    Dan Espen <dan1espen@gmail.com> writes:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.
    ASCII had a different collating sequence. It's not a translation issue.


    With ASCII, all the alphabetic characters are contiguous A-Z and a-z,
    so testing for a lower-case character can be a simple range
    comparison, while with EBCDIC there are gaps in the LC and UC sets.

    Converting from UC to LC in ASCII meant adding (or ORing) 0x20. In
    EBCDIC, one only needed to XOR with 0x40 to flip case, AND with 0xbf
    to switch to LC and OR with 0x40 to switch to UC.
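
    A short C sketch of those operations, using EBCDIC 'A' = 0xC1 and
    'a' = 0x81 from the standard code chart:

        #include <stdio.h>

        int main(void)
        {
            unsigned ascii_A  = 0x41;   /* ASCII  'A' */
            unsigned ebcdic_A = 0xC1;   /* EBCDIC 'A' */

            printf("ASCII  A -> a: 0x%02X\n", ascii_A | 0x20);   /* 0x61 */
            printf("EBCDIC flip:   0x%02X\n", ebcdic_A ^ 0x40);  /* 0x81 */
            printf("EBCDIC to LC:  0x%02X\n", ebcdic_A & 0xBF);  /* 0x81 */
            printf("EBCDIC to UC:  0x%02X\n", 0x81u | 0x40);     /* 0xC1 */
            return 0;
        }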

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Sun Jul 20 07:46:20 2025
    Reply-To: slp53@pacbell.net

    Peter Flass <Peter@Iron-Spring.com> writes:
    On 7/19/25 12:28, Dan Espen wrote:
    Bob Eager <news0009@eager.cx> writes:

    On Fri, 18 Jul 2025 18:23:23 +0000, Waldek Hebisch wrote:

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    I worked on a mainframe that supported both ASCII and EBCDIC. There was a
    mode bit which selected which it would use.

    The difference was conversion from decimal nibbles to normal bytes, in
    that different zone bits were used.

    Every 360 had an ASCII bit. That bit took quite a while to disappear
    from the PSW. Never saw anyone attempt to turn it on.


    It never did anything. Its only defined effect was to change the signs
    generated for packed-decimal data. I don't know what IBM was thinking.

    On the Burroughs B3500, the ASCII bit controlled the zone digit when
    doing arithmetic on alpha (8-bit) numbers.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Kerr-Mudd, John@3:633/280.2 to All on Sun Jul 20 18:37:43 2025
    On Sat, 19 Jul 2025 15:16:03 -0400
    Dan Espen <dan1espen@gmail.com> wrote:

    David Wade <g4ugm@dave.invalid> writes:


    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.
    Did the 3178 come with an APL feature?


    Not unless you paid a lot of money. In those times every mod was an expensive extra, even if it was a link of wire..


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    I think you were late on the scene. I started on 2260's which date
    from 1964. The IBM PC wasn't released until 1981, some 17 years
    later. 3270 emulation didn't happen until I think a couple of years
    later, so almost 20 years after the first terminals. Yes they quickly replaced terminals once they were available, but they were around for
    a long time...

    Me, late on the scene?

    I started programming in 1964 on IBM 14xx in Autocoder.
    Did my first 2260 project using BTAM and assembler in 1968.

    Among my favorite 327xs were the 3279 color terminals. Great keyboards
    on those things. Looking back there was the punched card era, the 3270
    era, then the 327x emulator era. I think I put in more years in the
    emulator era than in the real terminal era.


    Yeahbut I'd have to book the colour terminal way in advance - anyhow
    green on black is more restful to the eyes. I missed out on autocoder,
    being a mere stripling.


    --
    Dan Espen


    --
    Bah, and indeed Humbug.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Dis (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Mon Jul 21 03:10:15 2025
    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.

    Yes. And using ASCII in 360 OS-es has nothing to do with the
    above.

    ASCII had a different collating sequence. It's not a translation issue.

    Internally the emulator works in BCD. The only problem is to correctly
    emulate I/O when working with ASCII peripherals. That is solved
    by using a translation table (so that the BCD code from the emulator
    gives the correct glyph on the printer, etc).

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Mon Jul 21 03:38:53 2025
    "Kerr-Mudd, John" <admin@127.0.0.1> writes:

    On Sat, 19 Jul 2025 15:16:03 -0400
    Dan Espen <dan1espen@gmail.com> wrote:

    David Wade <g4ugm@dave.invalid> writes:


    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.
    Did the 3178 come with an APL feature?


    Not unless you paid a lot of money. In those times every mod was an
    expensive extra, even if it was a link of wire..


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    I think you were late on the scene. I started on 2260's which date
    from 1964. The IBM PC wasn't released until 1981, some 17 years
    later. 3270 emulation didn't happen until I think a couple of years
    later, so almost 20 years after the first terminals. Yes they quickly
    replaced terminals once they were available, but they were around for
    a long time...

    Me, late on the scene?

    I started programming in 1964 on IBM 14xx in Autocoder.
    Did my first 2260 project using BTAM and assembler in 1968.

    Among my favorite 327xs were the 3279 color terminals. Great keyboards
    on those things. Looking back there was the punched card era, the 3270
    era, then the 327x emulator era. I think I put in more years in the
    emulator era than in the real terminal era.


    Yeahbut I'd have to book the colour terminal way in advance - anyhow
    green on black is more restful to the eyes. I missed out on autocoder,
    being a mere stripling.

    One of my more favorite pastimes was redoing IBM's default 4-color
    scheme of their ISPF screens. A 3279 was a 7-color terminal with
    reverse image and underlining. It's amazing how much better you can make
    a screen look with a little artistic skill.

    At Bell Labs I had the 3279 on my desk for a year or so.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Mon Jul 21 03:49:27 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.

    Yes. And using ASCII in 360 OS-es has nothing to do with the
    above.

    ASCII had a different collating sequence. It's not a translation issue.

    Internally the emulator works in BCD. The only problem is to correctly
    emulate I/O when working with ASCII peripherals. That is solved
    by using a translation table (so that the BCD code from the emulator
    gives the correct glyph on the printer, etc).

    If printing is all your app does.

    Cards are Hollerith. A close cousin of BCD.
    The app would expect any card master file to be in BCD order.
    Tapes and disk have the same issue.

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Mon Jul 21 06:39:14 2025
    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.

    Yes. And using ASCII in 360 OS-es has nothing to do with the
    above.

    ASCII had a different collating sequence. It's not a translation issue.

    Internally the emulator works in BCD. The only problem is to correctly
    emulate I/O when working with ASCII peripherals. That is solved
    by using a translation table (so that the BCD code from the emulator
    gives the correct glyph on the printer, etc).

    If printing is all your app does.

    Cards are Hollerith. A close cousin of BCD.
    The app would expect any card master file to be in BCD order.

    Yes, the card reader and card punch also need translation tables.
    That is why I wrote etc above.

    Tapes and disk have the same issue.

    That is less clear: 1401 discs and tapes stored word marks which
    made them incompatible with the usual 360 formats. And discs were
    usually read on a system of the same type. So an extra translation
    program (needed anyway due to word marks) could also handle the change
    of character codes when transferring data between systems.

    Clearly 1401 compatibility did not prevent the introduction of CKD
    discs. And CKD means a different on-disk format than the 1401 disc.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From Kerr-Mudd, John@3:633/280.2 to All on Mon Jul 21 18:26:43 2025
    On Sun, 20 Jul 2025 13:38:53 -0400
    Dan Espen <dan1espen@gmail.com> wrote:

    "Kerr-Mudd, John" <admin@127.0.0.1> writes:

    On Sat, 19 Jul 2025 15:16:03 -0400
    Dan Espen <dan1espen@gmail.com> wrote:

    David Wade <g4ugm@dave.invalid> writes:


    On a real 3178 there are no [] characters so you either lose some
    other characters, or use tri-graphs.
    Did the 3178 come with an APL feature?


    Not unless you paid a lot of money. In those times every mod was an
    expensive extra, even if it was a link of wire..


    Real terminals went away pretty quickly.
    The project I was on was using emulators except for some of us with
    3290s.


    I think you were late on the scene. I started on 2260's which date
    from 1964. The IBM PC wasn't released until 1981, some 17 years
    later. 3270 emulation didn't happen until I think a couple of years
    later, so almost 20 years after the first terminals. Yes they quickly
    replaced terminals once they were available, but they were around for
    a long time...

    Me, late on the scene?

    I started programming in 1964 on IBM 14xx in Autocoder.
    Did my first 2260 project using BTAM and assembler in 1968.

    Among my favorite 327xs were the 3279 color terminals. Great keyboards
    on those things. Looking back there was the punched card era, the 3270
    era, then the 327x emulator era. I think I put in more years in the
    emulator era than in the real terminal era.


    Yeahbut I'd have to book the colour terminal way in advance - anyhow
    green on black is more restful to the eyes. I missed out on autocoder, being a mere stripling.

    One of my more favorite pastimes was redoing IBM's default 4-color
    scheme of their ISPF screens. A 3279 was a 7-color terminal with
    reverse image and underlining. It's amazing how much better you can make
    a screen look with a little artistic skill.

    A short-term works colleague who was planning on doing-up^wrebuilding a
    cottage in mid-Wales for the quiet country life translated the ISPF panels
    into Welsh.


    At Bell Labs I had the 3279 on my desk for a year or so.



    --
    Bah, and indeed Humbug.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Dis (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Tue Jul 22 08:39:36 2025
    On Mon, 21 Jul 2025 09:26:43 +0100, Kerr-Mudd, John wrote:

    A short-term works colleague who was planning on doing-up^wrebuilding a cottage in mid-Wales for the quiet country life translated the ISPF
    panels into Welsh.

    For some reason, former Linux kernel developer Alan Cox immediately came
    to mind ...

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Mon Jul 14 01:34:49 2025
    Reply-To: slp53@pacbell.net

    Niklas Karlsson <nikke.karlsson@gmail.com> writes:
    On 2025-07-13, Nuno Silva <nunojsilva@invalid.invalid> wrote:
    On 2025-07-13, Niklas Karlsson wrote:

    Not EBCDIC, but your mention of square brackets reminded me of the
    modified 7-bit ASCII that was used to write Swedish before ISO 8859-1
    and later Unicode made it big.

    "} { | ] [ \" were shown as "å ä ö Å Ä Ö" on Swedish-adapted equipment,
    making C code look absolutely ridiculous. Similar conventions applied
    for the other Nordic languages and German.

    I played with ISO-646-FI/SE once in a Televideo terminal, but not for
    long enough to figure out how to handle day-to-day usage of a UNIX-like
    system without these characters.

    I (barely) know C has (had?) syntax and also iso646.h for such cases,
    but how would e.g. shell scripting be handled?

    Couldn't say. I came in a little too late to really have to butt heads
    with that issue.

    Shell scripting wouldn't have been an issue in the EBCDIC systems, which
    didn't have shells per se. On the Burroughs side, the closest was WFL (Work Flow
    Language), which was compiled into an executable. As square brackets
    weren't part of the mainframe lexicon, they weren't used in WFL scripts.

    IBM had JCL, which was excessively (even ridiculously) verbose and widely disliked, but there too, square brackets were not used or useful.

    Burroughs did have a standard for the conversion.

    "Burroughs EBCDIC/ASCII Code Translation", document 1284 9097

    I don't have a copy anymore.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Dan Espen@3:633/280.2 to All on Wed Jul 23 05:10:59 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:

    Dan Espen <dan1espen@gmail.com> wrote:
    Lynn Wheeler <lynn@garlic.com> writes:

    other trivia: account about biggest computer "goof" ever, 360s
    originally were going to be ASCII machines, but the ASCII unit record
    gear weren't ready ... so were going to start shipping with old BCD gear
    (with EBCDIC) and move later
    https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

    I don't know what dreams they were having within IBM but those machines
    were never going to be ASCII. It would be pretty hard to do 14xx
    emulation with ASCII and IBM NEVER EVER did a competent ASCII - EBCDIC
    translate table.

    Emulation would work without any change; CPU and almost all microcode
    would be the same. IIUC what would differ would be translation tables
    on output and input. This could require extra space in the case of
    ASCII peripherals. But normal 1401 memory sizes were decimal, so
    lower than the corresponding binary numbers. And actual core had extra
    space for use by microcode. So it does not look like a big problem.

    Can't make much sense of the above.
    14xx programs in emulation, by definition had to use BCD.

    Yes. And using ASCII in 360 OS-es has nothing to do with the
    above.

    ASCII had a different collating sequence. It's not a translation issue.
    Internally the emulator works in BCD. The only problem is to correctly
    emulate I/O when working with ASCII peripherals. That is solved
    by using a translation table (so that the BCD code from the emulator
    gives the correct glyph on the printer, etc).

    If printing is all your app does.

    Cards are Hollerith. A close cousin of BCD.
    The app would expect any card master file to be in BCD order.

    Yes, the card reader and card punch also need translation tables.
    That is why I wrote etc above.

    Tapes and disk have the same issue.

    That is less clear: 1401 discs and tapes stored word marks which
    made them incompatible with the usual 360 formats.

    True, there were op codes to write word marks to tape; I NEVER saw them
    used. The word marks were placed in storage according to the format of
    the data being read.

    And discs were
    usually read on a system of the same type. So an extra translation
    program (needed anyway due to word marks) could also handle the change
    of character codes when transferring data between systems.

    I think you are missing the collating sequence difference.

    Clearly 1401 compatibility did not prevent the introduction of CKD
    discs. And CKD means a different on-disk format than the 1401 disc.

    Really, you couldn't write 100-character data blocks to a CKD disk?

    --
    Dan Espen

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Kerr-Mudd, John@3:633/280.2 to All on Fri Jul 25 02:50:32 2025
    On Mon, 21 Jul 2025 22:39:36 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 21 Jul 2025 09:26:43 +0100, Kerr-Mudd, John wrote:

    A short-term works colleague who was planning on doing-up^wrebuilding a cottage in mid-Wales for the quiet country life translated the ISPF
    panels into Welsh.

    For some reason, former Linux kernel developer Alan Cox immediately came
    to mind ...

    Nah, that wasn't his name.

    --
    Bah, and indeed Humbug.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Dis (3:633/280.2@fidonet)