• Re: Isn't that beauty ? (no it's not)

    From Tim Rentsch@3:633/10 to All on Mon Mar 23 21:23:47 2026
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    DFS <nospam@dfs.com> writes:
    [...]
    Why are you allowed to malloc size 0?

    "If the size of the space requested is 0, the behavior is
    implementation-defined: either a null pointer shall be returned,
    or the behavior shall be as if the size were some non-zero value,
    except that the behavior is undefined if the returned pointer is
    used to access an object."

    https://pubs.opengroup.org/onlinepubs/9799919799/functions/malloc.html

    IIRC, this caveat was added due to differences in the malloc(3)
    implementations for System V Unix and BSD Unix.

    That sounds right, except I might say "put in" rather than "added"
    since as best I can determine that allowance was present in the
    earliest drafts of the C standard and POSIX discussions.

    Note that malloc() is not mentioned in K&R, and apparently was
    added to AT&T Unix in Unix V7. The timing on that was about the
    same time as the first BSD Unix.

    Right. K&R 1st Ed. defined a 'morecore' function based on sbrk.

    Ahh, I missed that. So it does.

    K&R 2nd Ed. (ANSI) added malloc.

    Presumably malloc() was included in the early standard efforts
    well before K&R 2, which came out in 1988 IIRC. The early
    standardization work started in 1982 or 1983 IIANM, and malloc()
    had already been available since 1979, as both AT&T Unix V7 and
    BSD Unix were available at that time.

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tristan Wibberley@3:633/10 to All on Wed Mar 25 00:45:43 2026
    On 18/03/2026 01:40, Janis Papanagnou wrote:

    If there are simple better algorithms there's no reason but
    ignorance to not use them in the first place!

    Sure there is: bubble-sort follows a typical process a person might use
    for sorting words on bits of paper. When you automate someone's process,
    the redundant labourer can review the automated system's actions for
    correctness even when the labourer is not analytically minded.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Thu Mar 26 20:08:31 2026
    DFS <nospam@dfs.com> writes:

    On 3/21/2026 2:07 AM, Tim Rentsch wrote:

    DFS <nospam@dfs.com> writes:

    On 3/12/2026 2:24 AM, Bonita Montero wrote:

    There was a programming-"contest" on comp.lang.c and I wanted to show
    the simpler code in C, here it is:

    [C++ code]

    C++ really rocks since you have to deal with far fewer details than in C.
    Is your strategy to just ignore reality, and keep making bogus claims
    that - for this challenge at least - you can't support?

    Please don't encourage people who insist on using comp.lang.c
    for other languages. It's better to just ignore them.

    It doesn't bother me.

    I'm sorry you place so little value on my participation.

    It's no more off-topic than a discussion of sorting algorithms.

    That's wrong. There have been lots of posts in the discussion of
    sorting that are relevant to C. The problem with encouraging
    discussion of C++, or Python, or other such languages, is they
    tend to produce lots of postings that have nothing to do with C,
    and rarely produce postings that have anything to do with C.

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From DFS@3:633/10 to All on Fri Mar 27 00:35:39 2026
    On 3/26/2026 11:08 PM, Tim Rentsch wrote:
    DFS <nospam@dfs.com> writes:

    On 3/21/2026 2:07 AM, Tim Rentsch wrote:

    DFS <nospam@dfs.com> writes:

    On 3/12/2026 2:24 AM, Bonita Montero wrote:

    There was a programming-"contest" on comp.lang.c and I wanted to show
    the simpler code in C, here it is:

    [C++ code]

    C++ really rocks since you have to deal with far fewer details than in C.
    Is your strategy to just ignore reality, and keep making bogus claims
    that - for this challenge at least - you can't support?

    Please don't encourage people who insist on using comp.lang.c
    for other languages. It's better to just ignore them.

    It doesn't bother me.

    I'm sorry you place so little value on my participation.


    Not true at all [1]. But clc is a "No Kings" newsgroup, and nobody can
    dictate who or what kind of posts are made. If Montero or I annoy you,
    killfile us.



    It's no more off-topic than a discussion of sorting algorithms.

    That's wrong. There have been lots of posts in the discussion of
    sorting that are relevant to C. The problem with encouraging
    discussion of C++, or Python, or other such languages, is they
    tend to produce lots of postings that have nothing to do with C,
    and rarely produce postings that have anything to do with C.

    As a percentage, I don't see lots of postings that have nothing to do with C.

    This group is still very clean, as opposed to the rec.sport.golf group I
    used to participate in; non-golfing interlopers completely wrecked it
    years ago.




    [1] You da man. You deserve another kudos with your performance in the
    'sort of trivial code' challenge. I was looking at that code again, and
    it's kind of mind-blowing. Is that what you do to challenge yourself?
    How long have you been writing C programs? Do you do it for a living?


    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Keith Thompson@3:633/10 to All on Thu Mar 26 21:53:28 2026
    DFS <nospam@dfs.com> writes:
    [...]
    Not true at all [1]. But clc is a "No Kings" newsgroup, and nobody
    can dictate who or what kind of posts are made. If Montero or I annoy
    you, killfile us.
    [...]

    There is no authority that can enforce topicality rules in
    comp.lang.c. But come on, the name of the programming language is
    right there in the group's name.

    A lot of other languages have their own newsgroups, some of which
    I actually read. Posts about language Foo, with no C content,
    are better posted to comp.lang.foo, or perhaps to comp.lang.misc
    if comp.lang.foo doesn't exist, or to comp.programming if the post
    is more about algorithms than about the language, or ....

    I dislike posts here that have no C content at all, and there have
    been too many such posts lately.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Kenny McCormack@3:633/10 to All on Fri Mar 27 17:03:16 2026
    In article <10q51er$3ec1o$3@dont-email.me>, DFS <nospam@dfs.com> wrote:
    ...
    (Tim wrote)
    I'm sorry you place so little value on my participation.

    Not true at all [1]. But clc is a "No Kings" newsgroup, and nobody can
    dictate who or what kind of posts are made. If Montero or I annoy you,
    killfile us.

    I think you get this week's "Totally Missing the Point" award.

    --
    I've learned that people will forget what you said, people will forget
    what you did, but people will never forget how you made them feel.

    - Maya Angelou -

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Mon Apr 6 12:32:48 2026
    Bart <bc@freeuk.com> writes:

    On 21/03/2026 02:35, DFS wrote:

    On 3/20/2026 9:53 PM, Janis Papanagnou wrote:

    On 2026-03-20 22:10, DFS wrote:

    Sometimes listening to you clc guys is like listening to a room
    full of CS professors (which I suspect some of you are or were).

    CS professors sitting in their ivory tower and without practical
    experiences, and programmers without a substantial CS background;
    both of these extreme characters can be problematic (if fanatic).

    It was meant as a compliment. Plenty of CS professors have good
    practical and industry experience, too, which you'll see on their
    bios and cv's.

    It's got to be tough to find good CS teachers that stick around,
    given they can probably make much more money in private industry.


    Many folks here seem to have a good mix of necessary practical
    IT, Project, and CS knowledge and proficiencies, as I'd value it.
    And a clear mind to discuss topics. Sometimes it gets heated and
    personal, though; certainly nothing to foster.

    I've definitely seen Bart take some heat for continuing to post code
    in his scripting language, rather than C

    People post bits of code in all sorts of languages when it suits
    them. It's happened in this thread (Bash, AWK, Python as well as C++),
    and in many others.

    But I get a lot of heat because I use my personal languages,

    You get heat because you ignore the usual rules of newsgroup
    etiquette, which most other posters observe, and because you
    insist on focusing on your self-centered interests, without
    regard or consideration for other readers or the chartered
    purpose of the group. Besides being better for the group,
    it would be better for you if you could learn to be less
    self-focused, and more considerate of others.

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Mon Apr 6 13:23:02 2026
    antispam@fricas.org (Waldek Hebisch) writes:

    [...]

    Heapsort is only marginally more complex than bubble sort [...]

    I think this claim is ridiculous. Code for heapsort is more than
    twice as long as code for bubble sort, and is both harder to write
    and harder to understand. Heapsort really needs two functions
    rather than one. All of the simple sorts (bubble sort, selection
    sort, insertion sort, merge sort) are IME easily coded without
    needing any significant thought. Not so for heapsort. Even
    quicksort, which isn't trivial, is easier than heapsort.

    On average heapsort is not as fast as quicksort (the main factor
    seems to be much worse cache behaviour),

    More to say on that matter; saving for another reply and hoping to
    get to that soon.

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Tue Apr 7 02:12:49 2026
    Michael S <already5chosen@yahoo.com> writes:

    [..lots left out..]

    On Wed, 18 Mar 2026 11:20:03 +0100
    David Brown <david.brown@hesbynett.no> wrote:
    [...]
    I don't think pure bubblesort has much serious use outside
    educational purposes, but it is easy to understand, easy to
    implement correctly in pretty much any language, and can do a
    perfectly good job if you don't need efficient sorting of large
    datasets (for small enough datasets, a hand-written bubblesort
    in C will be faster than calling qsort).

    IMHO, Bubble Sort is harder to understand than Straight Select
    Sort.

    After reading through the several discussions regarding sorting,
    I have a variety of comments to make about sorting algorithms,
    not just the remarks above but to the discussions in general.
    The posting above seems a reasonable departure point for those
    comments, although much of what I have to say is prompted by
    comments in other postings.

    For starters, I contend that a completely simple bubble sort is
    the easiest sorting algorithm to show to novice programmers. By
    novice here I mean people who barely even know what is meant by
    the word algorithm. To be specific, what I mean by simple bubble
    sort is like this:

    void
    simple_minded_bubble_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = 0; i < n; i++ ){
            for( j = 1; j < n; j++ ){
                if( follows( j-1, j, a ) ) swap( j-1, j, a );
            }
        }
    }

    Incidentally all the algorithms I will show here are written in
    terms of the two functions follows() and swap(), whose meanings
    are (I hope) very obvious. The array to be sorted has values of
    type Element, which doesn't play any direct role in the
    discussion.

    I think it goes without saying that this is a crummy algorithm.
    But it's a very simple algorithm, which makes it a good first
    choice for novice programmers.

    We can reasonably ask how bad this algorithm is, and ask in a way
    that novices can understand. It seems obvious that a good
    sorting algorithm should compare any two particular elements at
    most once. If we try this algorithm on an array of randomly
    chosen input values, we might get statistics like this:

    simple bubble 151277700 37391291 200.000

    Here the columns are name, number of compares, number of swaps,
    and number of compares shown as a percentage of the "maximal"
    number of compares. The crumminess is evident: this algorithm
    compares each pair of elements not once but twice. Awful!

    There is a second and more subtle problem, which is that even
    though the algorithm is simple it isn't obvious that it works or
    why. However it isn't hard to see that the inner loop carries
    the largest value to the upper end of the array, and similarly
    the second iteration of the inner loop will carry the second
    largest value to the last place before that, and so on. This
    observation leads to a second algorithm that is both faster and
    has a motivating argument for why it works:

    void
    bubble_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = n; i > 1; i-- ){
            for( j = 1; j < i; j++ ){
                if( follows( j-1, j, a ) ) swap( j-1, j, a );
            }
        }
    }

    Now the inner loop works over smaller and smaller subsections of
    the whole array, each time carrying the largest remaining value
    to its appropriate location in the to-be-sorted array. Using the
    same test data as before, we get these measurements:

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000

    Excellent! Besides being able to see why it works, we do better
    on the number of compares by a factor of two. The number of
    swaps is the same (as indeed it cannot be improved for any
    algorithm that swaps only adjacent elements), but cutting down on
    the number of compares is clearly a step in the right direction.

    Now if we are inspired we can notice something. During one
    iteration of the inner loop, if we remember the last place a swap
    occurred, subsequent iterations of the inner loop don't need to go any
    farther than that. The reason is that we know all the values
    after the last swap must be larger than the element being
    carried along, so only smaller values were passed over, so we
    don't need to look past the location of the rightmost swap. It's
    kind of a subtle argument, but we can program it thusly:

    void
    knuth_bubble_sort( unsigned n, Element a[n] ){
        unsigned i, j, t;
        for( i = n; i > 1; i = t ){
            for( t = 0, j = 1; j < i; j++ ){
                if( follows( j-1, j, a ) ){
                    swap( j-1, j, a ), t = j;
                }
            }
        }
    }

    The variable 't' records the location past which no more compares
    need be done in the next iteration. By using 'i = t' in the
    outer loop control, we get to limit the range of what is looked
    at in the next iteration of the inner loop. Here is the data for
    the first three algorithms, again all run on the same input:

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964

    The number of compares done has indeed gone down, but not by
    much. There are a couple of dirty secrets about the "improved"
    version of bubble sort. One is that it doesn't help very much.
    The other is about the idea that it terminates early when the
    array values are already almost sorted. That is mostly a myth.
    It can happen that when the array values are already almost
    sorted the knuth_bubble_sort() will terminate early, but it
    almost never does. Even when there is only one array value out
    of place, it is relatively rare for this algorithm to get a big
    speedup.

    It's time to look at another algorithm. Here is one that wasn't
    around when I was first learning about sorting algorithms, called
    gnome sort:

    void
    gnome_sort( unsigned n, Element a[n] ){
        unsigned i = 0;
        while( ++i < n ){
            if( follows( i-1, i, a ) ){
                swap( i-1, i, a );
                i = i > 1 ? i-2 : 1;
            }
        }
    }

    The idea is simple. Imagine a gnome wandering among the array
    elements, starting at the left end. If the two elements being
    looked at are in order the gnome moves right; if they aren't
    then the gnome swaps them and moves left (of course never moving
    past the left end). So the gnome just wanders back and forth,
    swapping elements that are out of order and going either left or
    right, depending on whether it just did a swap or not. How does
    it do, measurement-wise? Here is the data:

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884

    We see the relatively simple gnome sort does better than the more
    complicated knuth bubble sort. Perhaps not a lot better, but
    still better.

    Even so, these numbers are not very inspiring. We can do better
    if we take the basic idea of (knuth) bubble sort and make it
    boustrophedonic, or what might be called a ping pong sort:

    void
    ping_pong_sort( unsigned n, Element a[n] ){
        unsigned i=1, j=-1, k=n, newj, newk;
        while( j+1 < k-1 ){
            for( i = j+2, newk = i; i < k; i++ ){
                if( follows( i-1, i, a ) ){
                    swap( i-1, i, a ), newk = i;
                }
            }
            k = newk;
            for( i = k-1, newj = i; i > j+1; i-- ){
                if( follows( i-1, i, a ) ){
                    swap( i-1, i, a ), newj = i-1;
                }
            }
            j = newj;
        }
    }

    Gosh, what a mess. But let's look at the metrics.

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    That is nice to see - a healthy performance improvement. But of
    course what everyone is waiting for are the well-known simple
    standard sorts:

    void
    selection_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = 0; i < n; i++ ){
            unsigned k = i;
            for( j = i+1; j < n; j++ ){
                if( follows( k, j, a ) ) k = j;
            }
            swap( i, k, a );
        }
    }

    void
    insertion_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = 1; i < n; i++ ){
            for( j = i; j > 0; j-- ){
                if( !follows( j-1, j, a ) ) break;
                swap( j-1, j, a );
            }
        }
    }

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    selection 75638850 12289 100.000
    insertion 37403579 37391291 49.450

    Clearly selection sort should be used only when the amount of
    movement is the overriding factor, and compares don't matter.
    Conversely, insertion sort is the flip side of that - the number
    of compares is much lower, the amount of movement is the same as
    all the others.

    For my own amusement I wrote a new sorting algorithm, which I
    whimsically call titanic sort:

    void
    titanic_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = n; i > 0; i-- ){
            for( j = i-1; j < n-1; j++ ){
                if( follows( j, j+1, a ) ) swap( j, j+1, a );
            }
        }
    }

    Titanic sort is just like an (ordinary) bubble sort, except it
    does ever larger iterations in the inner loop, rather than ever
    smaller iterations like an ordinary bubble sort. Here are the
    metrics:

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    selection 75638850 12289 100.000
    insertion 37403579 37391291 49.450

    titanic 75638850 37391291 100.000

    After writing the titanic sort, I had an idea. Because we are
    working on ever growing subintervals, we can stop the inner loop
    early whenever we find we don't need to swap. For this sort I
    once again chose a whimsical name, namely seven up sort:

    void
    sevenup_sort( unsigned n, Element a[n] ){
        unsigned i, j;
        for( i = n; i > 0; i-- ){
            for( j = i-1; j < n-1; j++ ){
                if( !follows( j, j+1, a ) ) break;
                swap( j, j+1, a );
            }
        }
    }

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    selection 75638850 12289 100.000
    insertion 37403579 37391291 49.450

    titanic 75638850 37391291 100.000
    seven up 37403579 37391291 49.450

    Oh of course; a seven up sort is just like an insertion sort,
    but inserting on the far end rather than the near end. Duh!

    After doing all these I wrote and measured three other sorts, for
    which I will give the metrics but the code for only one. Here
    are the metrics:

    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    selection 75638850 12289 100.000
    insertion 37403579 37391291 49.450

    titanic 75638850 37391291 100.000
    seven up 37403579 37391291 49.450

    heapsort 309459 168889 0.409
    quicksort 192033 117831 0.254
    binary insertion 150102 37391291 0.198

    The code for heapsort and for quicksort was not as clean as I
    would like so I'm not posting that. Here is the code for binary
    insertion sort:

    void
    binary_insertion_sort( unsigned n, Element a[n] ){
        unsigned i, j, k, m;
        for( i = 1; i < n; i++ ){
            j = -1, k = i;
            while( j+1 < k ){
                m = j+(k-j)/2;
                if( follows( m, i, a ) ) k = m;
                else j = m;
            }
            for( j = i; j > k; j-- ) swap( j-1, j, a );
        }
    }

    A comment about heapsort, especially as compared with quicksort.
    Using this test input heapsort did a lot more compares than
    quicksort did. That result is not an accident of the data. Of
    course sometimes quicksort misbehaves, but usually it doesn't,
    and in those cases heapsort is going to run slower than quicksort
    because it does more compares. That result is inherent in the
    heapsort algorithm. It's true that the worst case for heapsort
    is O( N * log N ), but the constant for heapsort is bigger than the
    (average-case) constant for quicksort. Also it's worth noting that binary
    insertion sort does the best of all in terms of number of
    compares done.

    My conclusions...

    The right place to start on sorting algorithms is with bubble
    sort, not because it's a good algorithm, but because it's the
    easiest one to show, and what is more important it's a good way
    to teach about how algorithms evolve.

    There is no reason to use the knuth version of bubble sort.
    Compared to ordinary bubble sort the upside is small and not
    worth the downside of needing more intricate code.

    If there is a good chance that input starts off being close to
    sorted, ping pong sort can be considered.

    In almost all cases prefer binary insertion sort to plain
    insertion sort, not because the average case is better but
    because the worst case is better. Moreover the extra code needed
    is small and easily understood.

    Writing a full-scale sorting algorithm, such as a replacement for
    qsort, is a non-trivial undertaking.

    Other considerations apply if the sort needs to be stable, but
    surely this posting is long enough already. :)

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Michael S@3:633/10 to All on Tue Apr 7 14:00:41 2026
    On Tue, 07 Apr 2026 02:12:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    <snip>

    My conclusions...

    The right place to start on sorting algorithms is with bubble
    sort, not because it's a good algorithm, but because it's the
    easiest one to show, and what is more important it's a good way
    to teach about how algorithms evolve.


    If you write it in terms of moves rather than in terms of swaps,
    bubble sort is not easier than insertion and selection.

    There is no reason to use the knuth version of bubble sort.
    Compared to ordinary bubble sort the upside is small and not
    worth the downside of needing more intricate code.


    The knuth version has at least one merit. You claim that in practice
    it's too rare for that merit to be exercised. Maybe that is true. Or
    maybe cases of an almost sorted array, with misplaced elements coming
    in pairs in close proximity, are not that uncommon. I don't know.
    But the merit exists.
    OTOH, 'ordinary bubble sort' has no merits at all relative to simple
    insertion sort.

    If there is a good chance that input starts off being close to
    sorted, ping pong sort can be considered.

    In almost all cases prefer binary insertion sort to plain
    insertion sort, not because the average case is better but
    because the worst case is better. Moreover the extra code needed
    is small and easily understood.


    The most important practical use case for insertion sorts is in sorting
    short sections created by higher level sorting algorithms. I'm
    reasonably certain that for this use case straight insertion sort is
    better than binary insertion sort.
    Esp. so on modern CPUs, where branch mispredictions have become
    relatively more expensive (about the same as 15-20 years ago in terms
    of cycles, but in each cycle a modern CPU can do 3-4 times more work),
    and with the modern gcc compiler, which [rightly] applies
    branch-avoidance techniques (if-conversion in the gcc docs) much more
    conservatively than older versions did.

    Writing a full-scale sorting algorithm, such as a replacement for
    qsort, is a non-trivial undertaking.

    Other considerations apply if the sort needs to be stable, but
    surely this posting is long enough already. :)

    Still quite a lot shorter than TAOCP vol. 3 :-)


    BTW, thank you for 'boustrophedon'. Not that there is a serious chance
    that I will remember the word tomorrow or even tonight, but hopefully
    I'll remember that such a word exists.





    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From DFS@3:633/10 to All on Tue Apr 7 16:39:03 2026
    On 4/7/2026 5:12 AM, Tim Rentsch wrote:

    <snip>
    simple bubble 151277700 37391291 200.000
    bubble 75638850 37391291 100.000
    knuth bubble 75611850 37391291 99.964
    gnome 74794859 37391291 98.884
    ping pong 49983149 37391291 66.081

    selection 75638850 12289 100.000
    insertion 37403579 37391291 49.450

    titanic 75638850 37391291 100.000
    seven up 37403579 37391291 49.450

    heapsort 309459 168889 0.409
    quicksort 192033 117831 0.254
    binary insertion 150102 37391291 0.198



    Nice post! About 15 minutes to write? :)


    A visualization with sound would be nice:

    https://www.youtube.com/watch?v=NVIjHj-lrT4








    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Sun Apr 12 11:16:50 2026
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    [...]

    After my previous posting I got curious about how different sorting
    algorithms behave for "nearly sorted" inputs. I decided to run some
    trials for some of the sorting functions given in my last posting.
    Here are the results:

    Average number of compares for sorting 100 elements, with one
    element out of place:

    insertion sort 132.647
    gnome sort 166.293
    ping pong sort 180.177
    knuth bubble sort 956.500

    Average number of compares for sorting 100 elements, with two
    elements out of place:

    insertion sort 166.126
    gnome sort 233.253
    ping pong sort 234.163
    knuth bubble sort 1566.487

    My conclusion, based on the above, is that bubble sort has nothing
    to recommend it as a practical sorting algorithm, even considering
    the refinement of early pruning (called the "knuth bubble sort" in
    the list above). The gnome sort is both simpler and better in terms
    of performance.

    Incidentally, some years ago I devised a variation of gnome sort,
    which I call "leaping gnome sort". A leaping gnome sort is like
    gnome sort when going right to left, that is, when the elements
    being looked at are out of order, swap them and move one place to
    the left. When no swap is needed, the gnome "leaps" to the last
    place a rightward shift was initiated. In terms of code, leaping
    gnome sort can be written as follows:

    void
    leaping_gnome_sort( unsigned n, Element a[n] ){
        for(
            unsigned i = 1, j = 2;
            i < n;
            i = follows(i-1,i,a) ? (swap(i-1,i,a), i>1 ? i-1 : j++) : j++
        ){}
    }

    The "leaping" behavior is evident whenever the current location 'i'
    gets the value 'j++' rather than 'i-1'.

    Of course this algorithm could be written using a while() loop
    rather than a for() loop:

    void
    leaping_gnome_sort( unsigned n, Element a[n] ){
        unsigned i = 1, j = 2;
        while( i < n ){
            if( !follows(i-1,i,a) ) i = j++;
            else {
                swap(i-1,i,a);
                i = i > 1 ? i-1 : j++;
            }
        }
    }

    In either case it's nice having only a single loop structure rather
    than an outer loop and a second, nested loop.

    --- PyGate Linux v1.5.13
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)
  • From Tim Rentsch@3:633/10 to All on Thu Apr 16 10:23:23 2026
    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 07 Apr 2026 02:12:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    <snip>

    My conclusions...

    The right place to start on sorting algorithms is with bubble
    sort, not because it's a good algorithm, but because it's the
    easiest one to show, and what is more important it's a good way
    to teach about how algorithms evolve.

    If you write it in terms of moves rather than in terms of swaps,
    bubble sort is not easier than insertion and selection.

    A reason to use swap() is that swap is easier to understand
    than moves. Each swap() operation, all by itself, preserves
    the invariant that the array contains the same elements after
    the swap() is done that were there before. With code written
    using moves, several statements together need to be looked at
    to verify that they are doing the right thing. Furthermore,
    when used in conjunction with follows(), whenever we see the
    two together written like this

    if( follows(x,y,array) ) swap(x,y,array);

    we know that both (1) the array has the same elements as it did
    before, and (2) the "sortedness" of array elements has increased,
    or at least has remained unchanged. Compare that to an insertion
    sort written using moves, where a whole loop body (including a
    'break;' no less), plus a statement after, needs to be examined
    to verify that it is doing something sensible. Clearly less
    mental effort is needed to see that the one line written above is
    working than what is needed for the eight lines seen in the
    insertion sort shown in your (much) earlier posting.

    There is no reason to use the knuth version of bubble sort.
    Compared to ordinary bubble sort the upside is small and not
    worth the downside of needing more intricate code.

    The knuth version has at least one merit. You claim that in practice
    it's too rare for that merit to be exercised. Maybe that is true. Or
    maybe cases of an almost sorted array, with misplaced elements coming
    in pairs in close proximity, are not that uncommon. I don't know.
    But the merit exists.

    I posted a performance comparison of several sorts including knuth
    bubble sort. It's true that knuth bubble sort is better than plain
    bubble sort. But it's still awful compared to the even simpler
    gnome sort.

    OTOH, 'ordinary bubble sort' has no merits at all relative to simple
    insertion sort.

    Simple bubble sort is easier to understand than insertion sort.

    For those who don't like bubble sort, there is gnome sort, which
    is even simpler than bubble sort, and actually has better
    performance. And, if people will forgive me, it's only a short
    leap to get to leaping gnome sort, which has better performance
    still.

    BTW, thank you for 'boustrophedon'. Not that there is a
    serious chance that I will remember the word tomorrow or even
    tonight, but hopefully I'll remember that such a word exists.

    It comes from Greek, meaning roughly "ox plow turning". Maybe
    that will help you remember; I think it does for me.

    --- PyGate Linux v1.5.14
    * Origin: Dragon's Lair, PyGate NNTP<>Fido Gate (3:633/10)