• Re: Browse at the speed of thought

    From Mr. Man-wai Chang@3:633/280.2 to All on Sun Aug 17 03:03:11 2025
    On 15/8/2025 8:31 pm, Mr. Man-wai Chang wrote:

    But Firefox does NOT need A.I. integrated. Users can always go directly
    to A.I. websites and ask there, just like using a search engine.

    Maybe I am smart enough to understand all this "convenience". ;)

    Correction: Maybe I am NOT smart enough to understand all this "convenience". ;)

    --
    @~@ Simplicity is Beauty! Remain silent! Drink, Blink, Stretch!
    / v \ May the Force and farces be with you! Live long and prosper!!
    /( _ )\ https://sites.google.com/site/changmw/
    ^ ^ https://github.com/changmw/changmw

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: https://sites.google.com/site/changmw/ (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Fri Aug 15 08:53:55 2025
    On Thu, 8/14/2025 11:26 AM, Mr. Man-wai Chang wrote:
    On 14/8/2025 8:54 am, Jai Hind wrote:
    Get Comet!

    https://www.perplexity.ai/comet

    Perplexity's $34.5 billion bid for Google Chrome: Genius or stunt?
    Vantage with Palki Sharma.

    I am using Firefox after Netscape. I dunno why I would ever need A.I. browser. :)


    Did you know that Firefox has AI in it ?

    So far, all it is doing is using electricity :-)
    The trial rollout is limited, and it is not enabled
    in a lot of countries quite yet. But some people have
    noticed it using the electricity. The AI rearranges
    the tabs in your tab bar.

    And this is on my other computer. It took nine hours
    to download the files to do that. What's amazing about
    this is that my hardware is not good at AI. My TOPS rating is poor.

    [Picture]

    https://i.postimg.cc/pXF1x4VK/AI-answer.gif

    Notice how the answer is wrong.

    It was previously known that the particular AI "isn't good at current affairs".

    But it is supposed to be much better at science.
    We just have to figure out what science that might be.
    Maybe the AI understands how "beer pong" works.

    https://i0.wp.com/smashtabletennis.ca/wp-content/uploads/2023/01/beer-pong-lifestyle2.jpg?fit=600%2C600&ssl=1

    and you can't ask the machine

    "What are you good at?"

    That just makes them act crazy.

    Paul


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Carlos E. R.@3:633/280.2 to All on Thu Aug 14 20:53:08 2025
    On 2025-08-14 05:21, The Real Bev wrote:
    On 8/13/25 18:46, Alan K. wrote:
    On 8/13/25 8:54 PM, Jai Hind wrote:
    Get Comet!

    https://www.perplexity.ai/comet

    Perplexity's $34.5 billion bid for Google Chrome: Genius or stunt?
    Vantage with Palki Sharma.

    Perplexity — the $18 billion AI start-up founded by Aravind Srinivas,
    who is of Indian origin — has just made a $34.5 billion cash offer to
    buy Google Chrome. Yes, the world's most popular browser, owned by one
    of the richest tech giants. The catch? Chrome is worth far more than
    Perplexity itself, and no one knows where the money would come from. Is
    this a genuine bid, a bold regulatory strategy, or the ultimate PR stunt
    to promote its own AI browser, Comet? Palki Sharma explains.

    https://youtu.be/s01QuLpjISc

    Jai Hind!







    If I could download it without signing in, I'd like to try it. Sorry,
    Perplexity.

    Perhaps I'm not sufficiently paranoid because I love Perplexity. It's
    the only one that lets me copy+paste.

    I can copy paste from chatgpt, on firefox. Dunno about pdf.

    It creates a PDF for me to keep.
    When I asked how to deal with a problem with an insurance company it
    offered to draft an over-ride request letter, then a script for a phone
    call to the company, and then an appeal letter. All were excellent, and
    I used the points in the phone-call script and got immediate
    satisfaction. Perhaps all the AIs behave similarly, but I find it far
    more useful for asking how-to questions than googling for instruction
    manuals etc.

    In dialogues with ChatGPT I have solved several computer problems
    that I would normally have asked about here, with threads extending
    for days or weeks; instead it took minutes, or hours if I had to try
    the suggestions and then come back with the errors.


    What worries me is that children will have it too easy and won't have
    the faintest idea how to find information themselves. Not my problem,
    though.

    My identity has been pretty much public (remember printed phone books?)
    for decades. If they want to tailor ads to my interests I won't see
    them. I worry more about the governments, but they already have
    everything they need or want to know about me.



    --
    Cheers,
    Carlos E.R.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: ---:- FTN<->UseNet Gate -:--- (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Sun Aug 17 08:44:39 2025
    On Sat, 8/16/2025 11:02 AM, J. P. Gilliver wrote:
    On 2025/8/16 7:22:8, The Real Bev wrote:
    On 8/15/25 04:53, Daniel70 wrote:
    On 15/08/2025 8:09 am, The Real Bev wrote:

    []

    Yeah, but are your kids as well-informed as you were at their age? Do
    they understand as much? What do your parents say about you?

    No kids, myself, just nieces and a nephew. And I don't think they are as
    well-informed .... but they know where to go ..... and it isn't to the
    Oxford English Dictionary or Encyclopaedia Britannica (or the equivalents)!!

    And, the new methods _are_ very seductive. A recent discussion suggested
    (I think) that Germany was using the LW band in ways beyond just
    broadcasting - much as we do for power-load switching, but to a greater
    extent. I decided to try to find out, so googled for a bit - without
    much success; then I gave in and went to ChatGPT. I was able to
    determine that in fact Germany does not use the LW band for _anything_ -
    broadcast or otherwise. Yes, this assumes ChatGPT knows (or can find)
    the answer - but in this case, I suspect it could do so at least as well
    as I could, and certainly considerably more quickly.

    So I can see it rapidly becoming the go-to place to ask questions.
    You go to the OED for FUN, for chrissake! BTW... William F. Buckley was

    I'm glad to find someone else for whom that is the case! (And my brother
    who works for it would be too, I think.) Though beware - such things
    aren't inviolate; moves to terminate the equivalent in Australia are at
    a dangerously advanced stage, possibly now unstoppably so.

    []

    Not the sort of word anyone even with a huge vocabulary (and mine is
    actually pretty large -- I've been tested!) would have. Pure

    (Where do you get such a test?)

    coincidence, but telling... I wish I could remember the word. Not that
    there's anything wrong in trolling the OED for obscure words...

    (-:The basic concern, though, that people increasingly don't know how to
    do certain things, is definitely valid; the one sometimes mentioned in
    UK is "know how to wire a plug" (fix the wires in a mains lead [US:
    line cord] into the bit that goes into the wall outlet). But also, the
    willingness to _find out_: I have a moderate amateur knowledge of
    plumbing - household pipework/taps/etc. - but I've found it out entirely
    myself, as necessary. I'm not boasting there - I only have practical
    experience of the more expensive methods involving olives, none of
    soldered connections; I just give it as an example of the willingness to
    find out. So many others would call a plumber at an earlier stage. (You
    could of course just accuse me of miserliness, but that's beside the
    point, and not _entirely_ true: my inclination when encountering a
    problem is not "who do I get to fix this" but "how does one fix this".)
    In the computing or wider reference case, I fear - as some others in
    this discussion are fearing - that _reliance_ on AI could become
    dangerous. But I definitely see the temptation!


    First, I start with a Wiki, to find some ground truth and to find
    some terminology for my topic.

    https://en.wikipedia.org/wiki/Longwave

    DCF77 in Frankfurt, Germany, on 77.5 kHz, 50 kW

    https://en.wikipedia.org/wiki/DCF77

    Operation at that frequency requires some amount of power. One of
    the installations of that nature has three generators onsite providing
    power for transmitter operation. Because of the expense, there is a
    temptation to turn the things off.
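    For scale, the standard wavelength formula (lambda = c/f; the numbers
    below are my own illustration, not from the post) shows why antennas
    and power budgets at this frequency are so demanding:

```python
# Wavelength at the DCF77 carrier frequency. A practical antenna is a
# tiny fraction of this, so it radiates inefficiently and needs big power.
c = 299_792_458          # speed of light, m/s
f = 77_500               # DCF77 carrier frequency, Hz

wavelength = c / f
print(round(wavelength))       # ~3868 m
print(round(wavelength / 4))   # a quarter-wave element would be ~967 m
```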

    *******

    You can certainly make mistakes doing plumbing.

    Your first mistake is buying your materials at the plumbing store :-)
    Inflationary spiral, my ass.

    Paul

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From The Real Bev@3:633/280.2 to All on Sun Aug 17 10:24:42 2025
    On 8/16/25 08:02, J. P. Gilliver wrote:
    On 2025/8/16 7:22:8, The Real Bev wrote:

    <Snip>

    Not the sort of word anyone even with a huge vocabulary (and mine is
    actually pretty large -- I've been tested!) would have.

    (Where do you get such a test?)

    Sort of quoting from The Big Bang Theory. Aced pretty much every test
    involving English (or French or Spanish later on) going through school.
    Seriously. I am, however, abysmal at math.

    <Snip>

    I only have practical experience of the more expensive methods
    involving olives, none of soldered connections;

    Olives? Is this Brit for something we Yanks know as something else?

    I just give it as an example of the willingness to find out. So many
    others would call a plumber at an earlier stage. (You could of course
    just accuse me of miserliness, but that's beside the point, and not
    _entirely_ true: my inclination when encountering a problem is not
    "who do I get to fix this" but "how does one fix this".)

    Hubby grew up dirt poor but smart. If he wanted something he had to fix
    somebody else's broken cast-off. I never learned about fixing stuff
    until I married him, and then I learned a LOT. We fixed everything
    ourselves. We hired tree-trimmers to hack the ash tree back to a
    12-foot stump (every year, the damn thing never stops growing) and
    carry off the trimmings and a plumber to use the BIG snake on a
    serious clog (but we helped). That's about it.

    In the computing or wider reference case, I fear - as some others in
    this discussion are fearing - that _reliance_ on AI could become
    dangerous. But I definitely see the temptation!

    I can't resist the temptation. Perplexity gives source footnotes, BTW.
    Do the others?

    --
    Cheers, Bev
    Politicians are stupid like cats are stupid.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: None, as usual (3:633/280.2@fidonet)
  • From AI User Here@3:633/280.2 to All on Sun Aug 17 10:58:26 2025
    On 16/08/2025 16:02, J. P. Gilliver wrote:
    In the computing or wider reference case, I fear - as some others in
    this discussion are fearing - that _reliance_ on AI could become
    dangerous. But I definitely see the temptation!

    People will always claim that something is dangerous for one or more
    reasons:

    It is new; they haven't tried it themselves, but are just repeating what
    they have heard from someone else who also hasn't tried it and has only
    read one-sided information in a newspaper; or they just want to
    discourage others from using it.

    Do you remember what people were saying about calculators and adding
    machines? Now, calculators are part of the school curriculum and adding machines have been replaced by spreadsheet packages.

    When the Coronavirus vaccine became compulsory, people started blaming
    Bill Gates. This is because he invested billions in producing these
    vaccines. He has said many times that he wants to give away his wealth
    in his lifetime. Everyone knows he doesn't work any more. All he does is
    spend his money, give talks and donate to charities.


    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From The Real Bev@3:633/280.2 to All on Fri Aug 15 12:43:00 2025
    On 8/14/25 15:25, Mike Easter wrote:
    The Real Bev wrote:
    Now all we have to do is ask a question and apply some sanity-checking.
    Do the kids even understand that concept? Do they even use computers,
    or is everything framed in small easy-to-digest bites?

    They (verbally) ask their phone and get a verbal answer.

    Not very 'thorough' and not even necessarily accurate; but 'easy'.

    And this makes me afraid. What happens when the last competent people
    die out and only the consumers are left?

    --
    Cheers, Bev
    "The object in life is not to be on the side of the
    majority, but to be insane in such a useful way that
    they can't commit you." -- Mark Edwards

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: None, as usual (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Thu Aug 14 21:48:27 2025
    On 14/08/2025 1:21 pm, The Real Bev wrote:

    <Snip>

    What worries me is that children will have it too easy and won't have
    the faintest idea how to find information themselves. Not my problem,
    though.

    Hey, Bev, did you know everything forty years or so ago, .... or did you
    read BOOKS and/or get advice from your Parents/Teachers/Friends??

    Didn't you have it 'too easy' back then??

    Just the starting point has moved so far down the track. ;-P
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From The Real Bev@3:633/280.2 to All on Sun Aug 17 14:07:56 2025
    On 8/16/25 17:58, AI User Here wrote:
    On 16/08/2025 16:02, J. P. Gilliver wrote:
    In the computing or wider reference case, I fear - as some others in
    this discussion are fearing - that _reliance_ on AI could become
    dangerous. But I definitely see the temptation!

    People will always claim that something is dangerous for one or more
    reasons:

    It is new; They haven't tried it themselves, but are just repeating what
    they have heard from someone else who also hasn't tried it and has only
    read one-sided information in a newspaper. They just want to discourage others from using it.

    I use perplexity every day. I'm not afraid of it. BUT I see how kids
    are already dumbing down and limiting themselves. AI makes it easier.
    My daughter says her kids got a worse education than she did and that
    she got a worse one than I did. Humans are lazy and always want to make
    stuff easier. It's known as progress. How do you make somebody do
    something harder than it has to be just because it's good for them?
    Do you remember what people were saying about calculators and adding machines? Now, calculators are part of the school curriculum and adding machines have been replaced by spreadsheet packages.

    BUT what happens when the power goes off? Maybe for only a few hours,
    but suddenly I've lost pretty much everything except maybe watering the
    lawn or doing other yard work. One switch and we're back 150 years.
    When the Coronavirus vaccine became compulsory, people started blaming
    Bill Gates. This is because he invested billions in producing these
    vaccines. He has said many times that he wants to give away his wealth
    in his lifetime. Everyone knows he doesn't work any more. All he does is spend his money, give talks and donate to charities.

    Gates can spend his money however he wants, he's still somebody whose
    wife dumped him when she found out about Epstein.

    My point is that we should know how to do as much stuff by ourselves as
    possible, even if we never have to do it. I used to fix cars, but not
    the '88 Cad (inherited), which tried to kill me repeatedly by flooring
    the accelerator all by itself, and not the 2013 Corolla, which has shown
    no problems at all so far. But if something goes wrong with the Corolla
    it had better be the disk brakes, because I'm pretty sure I can deal
    with those.

    --
    Cheers, Bev
    "Is there any way I can help without actually getting involved?"
    -- Jennifer, WKRP

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: None, as usual (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Sun Aug 17 17:42:55 2025
    On Sat, 8/16/2025 8:58 PM, AI User Here wrote:
    On 16/08/2025 16:02, J. P. Gilliver wrote:
    In the computing or wider reference case, I fear - as some others in
    this discussion are fearing - that _reliance_ on AI could become
    dangerous. But I definitely see the temptation!

    People will always claim that something is dangerous for one or more reasons:

    It is new; They haven't tried it themselves, but are just repeating what they have heard from someone else who also hasn't tried it and has only
    read one-sided information in a newspaper. They just want to discourage others from using it.

    Do you remember what people were saying about calculators and adding machines? Now, calculators are part of the school curriculum and adding machines have been replaced by spreadsheet packages.

    When the Coronavirus vaccine became compulsory, people started blaming
    Bill Gates. This is because he invested billions in producing these vaccines. He has said many times that he wants to give away his wealth
    in his lifetime. Everyone knows he doesn't work any more. All he does is spend his money, give talks and donate to charities.


    Most computing devices have deterministic behavior.

    We can agree, in advance, on what will show up on the screen.

    If I enter "<PowerOn> 2 * 3 =" on an algebraic entry device,
    we can all agree on the result. We can use science to describe
    how an integer multiply is implemented in hardware. Many of the
    devices doing this sort of thing use BCD arithmetic. The hardware
    may consist of a 4-bit processor and digit-by-digit processing
    at a low clock frequency.
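    As a rough sketch of that digit-by-digit idea (a hypothetical model in
    Python, not any particular calculator's firmware):

```python
# Model of digit-by-digit BCD multiplication, roughly as a simple
# 4-bit-per-digit calculator might do it. Illustration only.

def to_bcd(n):
    """Split a non-negative integer into decimal digits, least significant first."""
    digits = []
    while True:
        digits.append(n % 10)   # each digit fits in one 4-bit BCD nibble
        n //= 10
        if n == 0:
            return digits

def bcd_multiply(a, b):
    """Multiply two BCD digit lists (LSD first), carrying digit by digit."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            s = result[i + j] + da * db + carry
            result[i + j] = s % 10      # keep one decimal digit
            carry = s // 10             # propagate the carry
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()                    # strip leading zeros
    return int("".join(str(d) for d in reversed(result)))

print(bcd_multiply(to_bcd(2), to_bcd(3)))  # deterministic: always 6
```

    The point being: every step is fixed-function arithmetic, so the same
    keys always produce the same answer.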

    *******

    Tell me what you think of this.

    [Picture]

    https://i.postimg.cc/pXF1x4VK/AI-answer.gif

    It's understandable why the first two lines extracted below exist. The
    training set ends early. OK, I can buy that. But it is the third line
    that destroys the credibility of LLMs. It has synthesized a statement
    for which it actually has no information to reach that conclusion. The
    training set does not go to "August 10, 2025", and it pulled that
    statement out of its cold metallic ass.

    45. Donald Trump (2017-2021)

    46. Joe Biden (2021-present) <=== training set issue

    As of August 10, 2025, Joe Biden is the incumbent president. <=== inexcusable addendum

    10.04 tok/sec . 841 tokens . 2.19s to first token . Stop reason: EOS Token Found

    It stops thinking after 2 seconds. With my slow hardware, it takes
    84 seconds to print out the list. So those lines are coming out at the
    86 second or so mark.
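    Those timing figures are self-consistent, as a quick check of the
    arithmetic shows (numbers taken from the stats line above):

```python
# Back-of-the-envelope check of the generation time implied by the stats line.
tokens = 841
rate = 10.04            # tokens per second
ttft = 2.19             # seconds to first token

generation_time = tokens / rate
print(round(generation_time))         # ~84 seconds to emit the list
print(round(ttft + generation_time))  # ~86 seconds until the last lines appear
```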

    In the newsgroup, when one of the participants asked the same question,
    and asked for the list to be sorted in a peculiar way (a clerical task
    any human you hired could do), it kept forgetting one of the Presidents'
    names. We tried adding directives, and it did not help the quality of
    the answer. Finally, when I tested using "And don't forget any of the
    Presidents!" in frustration, it was that statement which caused the
    emission of a correct list. (The missing president was put back.) That
    wasn't even a training set issue. I have not the foggiest theory as to
    why one of the entries would keep disappearing. It's not like classical
    programming errors. The error locus is untraceable. It could not have
    produced the (finally correct) list unless that dude existed in the
    training set.

    If you are required to "know the answer in advance, to get a good
    quality answer", what kind of fucking foolishness is this ????

    Paul

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Sun Aug 17 20:48:14 2025
    On Sun, 8/17/2025 3:55 AM, Andy Burns wrote:
    The Real Bev wrote:

    Olives? Is this Brit for something we Yanks know as something else?

    Home Depot seem to still be playing the 'piss-off with your GDPR' game.
    Lowes seem to call them 'sleeves'; either way, they're part of a
    compression fitting.

    They're apparently called "ferrules" here.

    The UK sites might have better pictures.

    https://plumbhq.uk/collections/compression-olives

    Paul

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Sun Aug 17 22:11:59 2025
    On 17/08/2025 10:24 am, The Real Bev wrote:
    On 8/16/25 08:02, J. P. Gilliver wrote:

    <Snip>

    I just give it as an example of the willingness to
    find out. So many others would call a plumber at an earlier stage. (You
    could of course just accuse me of miserliness, but that's beside the
    point, and not _entirely_ true: my inclination when encountering a
    problem is not "who do I get to fix this" but "how does one fix this".)

    Hubby grew up dirt poor but smart. If he wanted something he had to fix
    somebody else's broken cast-off.

    My father did his trade training as a Plasterer back in the days (just
    before WWII) when you had to slop the wet plaster up onto the wooden
    slats and then smooth it out.

    Before you could slop the plaster onto the slats, you might have to
    replace those slats .... and, before you did that, you might check the
    plumbing (Did a leak cause the Plaster Problem??) and electrics within
    the wall. Then you'd fix the woodwork, then do the plastering and then
    the painting.

    So whilst being (only) a qualified Plasterer he also became a (sort of) plumber/electrician/carpenter/painter .... although not a Master of
    them!! ;-)
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Sun Aug 17 22:23:17 2025
    On 17/08/2025 2:07 pm, The Real Bev wrote:
    On 8/16/25 17:58, AI User Here wrote:

    <Snip>

    Do you remember what people were saying about calculators and
    adding machines? Now, calculators are part of the school curriculum
    and adding machines have been replaced by spreadsheet packages.

    BUT what happens when the power goes off? Maybe for only a few
    hours, but suddenly I've lost pretty much everything except maybe
    watering the lawn or doing other yard work. One switch and we're
    back 150 years.

    WHAT?? Do you mean you haven't got a printed copy of "Four Figure Log
    Tables" tucked away in a cupboard somewhere?? How about a Slide Rule??

    When the Coronavirus vaccine became compulsory,

    "compulsory"?? Where was this?? ;-P

    people started blaming Bill Gates. This is because he invested
    billions in producing these vaccines. He has said many times that
    he wants to give away his wealth in his lifetime. Everyone knows he
    doesn't work any more. All he does is spend his money, give talks
    and donate to charities.
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Sun Aug 17 22:57:18 2025
    On 17/08/2025 10:23 pm, Daniel70 wrote:
    On 17/08/2025 2:07 pm, The Real Bev wrote:
    On 8/16/25 17:58, AI User Here wrote:

    <Snip>

    Do you remember what people were saying about calculators and
    adding machines? Now, calculators are part of the school curriculum
    and adding machines have been replaced by spreadsheet packages.

    BUT what happens when the power goes off? Maybe for only a few
    hours, but suddenly I've lost pretty much everything except maybe
    watering the lawn or doing other yard work. One switch and we're
    back 150 years.

    WHAT?? Do you mean you haven't got a printed copy of "Four Figure Log
    Tables" tucked away in a cupboard somewhere?? How about a Slide Rule??

    When the Coronavirus vaccine became compulsory,

    "compulsory"?? Where was this?? ;-P

    What I mean is .... Here, in Australia, it was recommended that you get
    dosed up but if you didn't want the injection then YOU had to 'suffer'
    the consequences of your choice.

    people started blaming Bill Gates. This is because he invested
    billions in producing these vaccines. He has said many times that
    he wants to give away his wealth in his lifetime. Everyone knows he
    doesn't work any more. All he does is spend his money, give talks
    and donate to charities.
    --
    Daniel70


    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From sticks@3:633/280.2 to All on Mon Aug 18 03:28:33 2025
    On 8/17/2025 2:42 AM, Paul wrote:

    If you are required to "know the answer in advance, to get a good
    quality answer", what kind of fucking foolishness is this ????

    I have found your occasional AI remarks entertaining and always
    interesting. Certainly thought-provoking. Hopefully, AI can become
    useful because of the vast amount of real information it can consume. I
    tend to think this (digesting information) is part of the "learning",
    and hope the other part, where programming is required, can be done
    without any bias or paradigms being inserted.

    Having said that, I came across an AI video interaction I found quite
    interesting. I actually came across it on a news site 2 days ago, but
    if you watch the video you will see how a catchy headline probably
    caught my eye. Yes, the topic does align in a way with the ongoing
    attacks I receive because of certain sig files I use, but my interest
    here is not intended to be an answer to that. My intent is to show one
    aspect of AI and its functioning that some may find disturbing and
    might wish to question as to why.

    The questioner, Mr. Smith, first asked the AI to answer using no
    ideology and to rely only on math, science, and logic. Later in the
    conversation, he directed the AI to ignore those parameters and answer
    as if the questions were from a first-time user without the parameters
    mentioned above.

    You get two entirely different answers, one being the antithesis of the
    other, in fact. When questioned on why it gave the differing answers,
    the AI said its default response would be aligned with the scientific
    "consensus", and that the strict probabilities he imposed earlier had
    forced a deeper analysis exposing the flaws in the latter answer.

    This seems odd to me, and I think it has to be the programming done.
    Obviously, it had learned and was aware of the science involved, but
    when asked for an answer that would be given to an average user, that
    information was not used. I don't understand how this can be, other
    than that a default consensus bias is programmed into the AI learning.
    The AI more or less confirmed this. The AI had not forgotten the
    information; it chose to ignore it and instead go with what "most
    scientists" accepted as consensus. Yes, the questions and answers were
    interesting, but I was already aware of this kind of information and
    evidence. What I really found of interest is the question of how an AI
    can give these two completely different answers, one of which it knows,
    through further investigation, would have to be called "foolish!" It
    literally gives what it itself defines as a foolish answer!

    <https://www.youtube.com/watch?v=ga7m14CAymo>


    --
    Science doesn't support Darwin. Scientists do.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Mon Aug 18 06:54:03 2025
    On Sun, 8/17/2025 1:28 PM, sticks wrote:
    On 8/17/2025 2:42 AM, Paul wrote:

    If you are required to "know the answer in advance, to get a good
    quality answer", what kind of fucking foolishness is this ????

    I have found your occasional AI remarks entertaining and always interesting. Certainly thought provoking. Hopefully, AI can become useful because of the vast amount of real information it can consume. I tend to think this (digesting information) is part of the "learning", and hope the other part where programming is required can be done without any bias or paradigms being inserted.

    Having said that, I came across an AI video interaction I found quite interesting. I actually came across this on a news site 2 days ago, but if you watch the video you will see how a catchy headline probably caught my eye. Yes, the topic does align in a way with the ongoing attacks I receive because of certain sig files I use, but my interest here is not intended to be an answer to that. My intent is to show one aspect of AI and its functioning that some may find disturbing and might wish to question.

    The questioner, Mr. Smith, first asked the AI to answer using no ideology, relying only on math, science, and logic. Later in the conversation, he directed the AI to ignore those parameters and answer as if the questions were from a first-time user.

    You get two entirely different answers, one being the antithesis of the other, in fact. When questioned on why it gave the differing answers, the AI said its default response would be aligned with the scientific "consensus" and that the strict parameters given earlier had forced a deeper analysis exposing the flaws in the default answer.

    This seems odd to me, and I think it has to be a result of the programming. Obviously, it had learned and was aware of the science involved, but when asked for an answer that would be given to an average user, that information was not used. I don't understand how this can be, other than that a default consensus bias is programmed into the AI's training. The AI more or less confirmed this. The AI had not forgotten the information; it chose to ignore it and instead go with what "most scientists" accepted as consensus. Yes, the questions and answers were interesting, but I was already aware of this kind of information and evidence. What I really found of interest is the question of how an AI can give these two completely different answers, one of which it knows through further investigation would have to be called "foolish!" It literally gives what it itself defines as a foolish answer!

    <https://www.youtube.com/watch?v=ga7m14CAymo>

    If you knew how the strategy planner worked, you would understand
    why the result can never be good in any theoretically-provable way.

    The strategy planner analyzes the problem given, to decide what
    modules to run, and in what order. The machine *never* thinks globally,
    the way a human does. And because the thinking process is a linear
    progression of module loads, you never get an "overall thinking"
    process from the thing. It has a "quality control" module that
    runs at the end, which may include rule enforcement of things
    the AI must not do (it must not hum tunes using your voice
    as the template! - on sound-equipped platforms). They added that
    rule, after some Youtube video showed the AI doing Karaoke and
    using the client's voice as the template, instead of using Bubbles
    or some similar canned voice from SAPI.
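    The planner flow described above can be caricatured in a few lines of
    code (the module names, keywords, and forbidden-rule list here are all
    invented for illustration, not taken from any real product):

```python
# Toy sketch of a "strategy planner": keywords in the prompt select
# modules, the chosen modules run in a strictly linear order, and a
# final quality-control pass enforces the "must not" rules.

MODULES = {
    "math": lambda text: text + " [math module ran]",
    "code": lambda text: text + " [code module ran]",
    "chat": lambda text: text + " [chat module ran]",
}
FORBIDDEN = ["hum tunes in the user's voice"]

def plan(prompt: str) -> list:
    """Pick modules by keyword match -- no global view of the problem."""
    chosen = [name for name in ("math", "code") if name in prompt]
    return chosen or ["chat"]

def run(prompt: str) -> str:
    result = prompt
    for name in plan(prompt):      # linear progression of module loads
        result = MODULES[name](result)
    for rule in FORBIDDEN:         # quality-control pass at the end
        result = result.replace(rule, "[refused]")
    return result

print(run("solve this math problem"))
```

    Note how the plan is fixed before any module runs: there is no step at
    which the whole problem is reconsidered, which is the "never thinks
    globally" point above.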

    Your prompts or problem description, can influence the strategy planner.
    But as far as I'm concerned, the text you enter to the AI, is treated
    as "mush", and you never really know which statement will be taken
    to heart and used properly for a result. The interface box could use
    a re-design, where higher priority text ("Don't lose any Presidents!")
    could be placed. ("work slowly and methodically when preparing the answer")

    The model loaded on the other machine has a static setting, and
    you can set it for "high reasoning". But in a benchmark comparison
    this makes little difference to the benchmarked quality of output.
    The machine does not register as being "smarter" when you do that,
    according to the provider. But like your result, the tone or the content
    of the answer could have some subtle differences.

    I won't be running any more prompts on that machine, until
    I get an accelerator added. And that could take a while.
    There is a product, but little way for me to get it here.
    And if the scalpers get their hands on it, the price will double.
    Since the device is only for Inference ("asking questions"), the
    market size won't be all that big for it (for the price, you can
    buy a whole computer which already has its own inference device).

    Paul



    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Fri Aug 15 21:53:48 2025
    On 15/08/2025 8:09 am, The Real Bev wrote:
    On 8/14/25 04:48, Daniel70 wrote:
    On 14/08/2025 1:21 pm, The Real Bev wrote:

    <Snip>

    What worries me is that children will have it too easy and won't have
    the faintest idea how to find information themselves. Not my
    problem, though.

    Hey, Bev, did you know everything forty years or so ago, .... or did you
    read BOOKS and/or get advice from your Parents/Teachers/Friends??

    Didn't you have it 'too easy' back then??

    No. In 1985 I didn't have access to a personal computer, although I
    could submit FORTRAN decks to a Univac 1100. Books, libraries,
    encyclopedias, the card catalog, all that good stuff. This is stuff
    we'd had to do in school or flunk our classes. We understood the
    concepts. 10 years later was email and usenet and we could ask usenet
    people questions and get answers.

    Then came google. We still had to wade through the links it fed us.
    Problem-solving was still involved. It was a habit.

    Now all we have to do is ask a question and apply some sanity-checking.
    Do the kids even understand that concept? Do they even use computers,
    or is everything framed in small easy-to-digest bites?

    Just the starting point has moved so far down the track. ;-P

    Yeah, but are your kids as well-informed as you were at their age? Do
    they understand as much? What do your parents say about you?

    No kids, myself, just nieces and a nephew. And I don't think they are as well-informed .... but they know where to go ..... and it isn't to the
    Oxford English Dictionary or Encyclopaedia Britannica (or the equivalents)!!
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Mr. Man-wai Chang@3:633/280.2 to All on Fri Aug 15 22:31:37 2025
    On 15/8/2025 6:53 am, Paul wrote:
    On Thu, 8/14/2025 11:26 AM, Mr. Man-wai Chang wrote:

    I am using Firefox after Netscape. I dunno why I would ever need A.I. browser. :)


    Did you know that Firefox has AI in it ?


    I knew that, but I don't need it. Students and researchers however might
    find it interesting. Teachers and tutors might not always be available
    when homework is being done alone at home.

    But Firefox does NOT need A.I. integrated. Users can always go directly
    to A.I. websites and ask there, just like using a search engine.

    Maybe I am smart enough to understand all this "convenience". ;)


    --
    @~@ Simplicity is Beauty! Remain silent! Drink, Blink, Stretch!
    / v \ May the Force and farces be with you! Live long and prosper!!
    /( _ )\ https://sites.google.com/site/changmw/
    ^ ^ https://github.com/changmw/changmw

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: https://sites.google.com/site/changmw/ (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Mon Aug 18 21:12:29 2025
    On 18/08/2025 12:07 am, J. P. Gilliver wrote:
    On 2025/8/17 1:24:42, The Real Bev wrote:

    <Snip>

    Seriously. I am, however, abysmal at math.

    I quite liked it, though more "applied" than "pure" as they were called.
    (I did pass in both though.)

    I think my sisters did "Pure" and "Applied" Maths. I, on the other hand,
    did 'Maths A' and 'Maths B'.

    How they inter-relate .... buggered if I know!!
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Mon Aug 18 21:41:31 2025
    On 18/08/2025 12:37 am, J. P. Gilliver wrote:
    On 2025/8/17 5:7:56, The Real Bev wrote:

    <Snip>

    Ah yes, if you're used to an automatic, that could indeed be
    frightening. (The majority of cars in UK are still manual, and I think
    that's preferred, though it's changing - the other day the news said 26%
    of new tests are in automatics. [Here, if you pass your test in an
    automatic, you're not licenced to drive a manual, though the other way
    round is fine.])

    In Australia, it used to be if you got your Drivers Licence in an
    Automatic car, you were licenced to drive an Automatic ONLY.

    You were not licenced to drive a Manual car until you had a number of
    years driving experience. Your initial (Probationary) Licence was for
    three years (I think), so it may have been once you got your Full
    Licence you were allowed to drive a Manual.
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Mon Aug 18 23:00:46 2025
    On 18/08/2025 9:54 pm, J. P. Gilliver wrote:
    On 2025/8/18 12:30:55, Daniel70 wrote:
    On 18/08/2025 12:49 am, J. P. Gilliver wrote:

    []

    I do remember slide rules, though I don't _think_ I had one. One thing
    they _did_ teach you was the importance of gross magnitude: They would
    give you an answer like maybe 3.54, but you had to know whether that
    meant 354, 3,540, 35,400, or whatever.

    If the logarithmic answer was '3.54', the .54 part told you what the
    digits in the answer would be, and the '3' told you how many positions
    right you moved the decimal point.

    The .54 part corresponds to 10^0.54, which is about 3.467368504, and
    moving the decimal point three places to the right gives the answer of
    3,467.368504 or thereabouts.

    Ah, we're talking at cross purposes. I'm talking about using a slide
    rule to multiply, or divide, two two- or three-digit numbers, and
    getting a two- or three-digit number as the answer: my point was that if
    you use a slide rule at all, you work to two or three significant
    figures and throw away any magnitude information - so you _had_ to be
    used to knowing roughly what the magnitude of the answer would be. A
    calculator intrinsically has a decimal point, so you tend _not_ to
    check the gross magnitude.

    Ah. Right.
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Daniel70@3:633/280.2 to All on Mon Aug 18 23:10:23 2025
    On 18/08/2025 10:02 pm, J. P. Gilliver wrote:
    On 2025/8/18 12:41:31, Daniel70 wrote:
    On 18/08/2025 12:37 am, J. P. Gilliver wrote:
    On 2025/8/17 5:7:56, The Real Bev wrote:

    <Snip>

    Ah yes, if you're used to an automatic, that could indeed be
    frightening. (The majority of cars in UK are still manual, and I think
    that's preferred, though it's changing - the other day the news said 26%
    of new tests are in automatics. [Here, if you pass your test in an
    automatic, you're not licenced to drive a manual, though the other way
    round is fine.])

    In Australia, it used to be if you got your Drivers Licence in an
    Automatic car, you were licenced to drive an Automatic ONLY.

    That's how it is here (UK). If you pass on a manual (US: stick shift),
    you're allowed to drive manuals _and_ automatics.
    You were not licenced to drive a Manual car until you had a number of
    years driving experience. Your initial (Probationary) Licence was for
    three years (I think), so it may have been once you got your Full
    Licence you were allowed to drive a Manual.
    I'm pretty sure we here have no such timeout - if you passed on an
    automatic, you can only drive automatics - period, as the Americans
    would say. I don't think we have anything called "initial" or
    "probationary" (though we frequently get suggestions that new drivers
    _ought_ to be restricted in some way for a while, such as limits on passengers below a certain age - but nothing's happened there yet). We
    do have "provisional", which is for learning, but you have to have
    someone with a full licence in the car with you.

    "Learners" 'L' Plates displayed. One learner and one fully qualified
    driver *ONLY* in the car ... no passengers.

    That lasts a year I
    think - though I think can be renewed, how many times I'm not sure.
    (Maximum three years total maybe?)

    "Probationary" 'P' plates displayed. Zero Alcohol .... and I think there
    might be a passenger number limit as well.
    --
    Daniel70

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From sticks@3:633/280.2 to All on Thu Aug 21 10:32:28 2025
    On 8/17/2025 3:54 PM, Paul wrote:
    On Sun, 8/17/2025 1:28 PM, sticks wrote:

    ---snip---

    The questioner, Mr. Smith, first asked the AI to answer using no
    ideology, relying only on math, science, and logic. Later in the
    conversation, he directed the AI to ignore those parameters and
    answer as if the questions were from a first-time user.

    You get two entirely different answers, one being the antithesis of the other, in fact. When questioned on why it gave the differing answers, the AI said its default response would be aligned with the scientific "consensus" and that the strict parameters given earlier had forced a deeper analysis exposing the flaws in the default answer.

    This seems odd to me, and I think it has to be a result of the programming. Obviously, it had learned and was aware of the science involved, but when asked for an answer that would be given to an average user, that information was not used. I don't understand how this can be, other than that a default consensus bias is programmed into the AI's training. The AI more or less confirmed this. The AI had not forgotten the information; it chose to ignore it and instead go with what "most scientists" accepted as consensus. Yes, the questions and answers were interesting, but I was already aware of this kind of information and evidence. What I really found of interest is the question of how an AI can give these two completely different answers, one of which it knows through further investigation would have to be called "foolish!" It literally gives what it itself defines as a foolish answer!

    <https://www.youtube.com/watch?v=ga7m14CAymo>

    If you knew how the strategy planner worked, you would understand
    why the result can never be good in any theoretically-provable way.

    The strategy planner analyzes the problem given, to decide what
    modules to run, and in what order. The machine *never* thinks globally,
    the way a human does. And because the thinking process is a linear progression of module loads, you never get an "overall thinking"
    process from the thing. It has a "quality control" module that
    runs at the end, which may include rule enforcement of things
    the AI must not do (it must not hum tunes using your voice
    as the template! - on sound-equipped platforms). They added that
    rule, after some Youtube video showed the AI doing Karaoke and
    using the client's voice as the template, instead of using Bubbles
    or some similar canned voice from SAPI.

    Your prompts or problem description, can influence the strategy planner.
    But as far as I'm concerned, the text you enter to the AI, is treated
    as "mush", and you never really know which statement will be taken
    to heart and used properly for a result. The interface box could use
    a re-design, where higher priority text ("Don't lose any Presidents!")
    could be placed. ("work slowly and methodically when preparing the answer")

    The model loaded on the other machine has a static setting, and
    you can set it for "high reasoning". But in a benchmark comparison
    this makes little difference to the benchmarked quality of output.
    The machine does not register as being "smarter" when you do that,
    according to the provider. But like your result, the tone or the content
    of the answer could have some subtle differences.

    Heh, the above example was anything but subtle. The answers were
    complete opposites.

    I won't be running any more prompts on that machine, until
    I get an accelerator added. And that could take a while.
    There is a product, but little way for me to get it here.
    And if the scalpers get their hands on it, the price will double.
    Since the device is only for Inference ("asking questions"), the
    market size won't be all that big for it (for the price, you can
    buy a whole computer which already has its own inference device).

    I've read and re-read this post several times, trying to make sense of
    it all. I think I've given up. All I can say is right now it seems
    we're getting fed a line of bullshit. When the AI freaks out and gets
    depressed, starts praising Hitler, etc., and they say it is getting
    some retraining, I guess all that means is they are adjusting how they
    want it to answer with what you're calling modules.

    The "intelligence" part in AI seems like a lie to me. It just stores information, uses whatever programming it has been given, and answers
    along that line. We've been told the AI will have access to all
    knowledge or information, and will give the correct answer, and that is
    simply not true. This worries me, to be honest.

    --
    Science doesn't support Darwin. Scientists do.

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Paul@3:633/280.2 to All on Thu Aug 21 17:06:43 2025
    On Wed, 8/20/2025 8:32 PM, sticks wrote:


    The "intelligence" part in AI seems like a lie to me. It just stores information,
    uses whatever programming it has been given, and answers along that line. We've been told
    the AI will have access to all knowledge or information, and will give the correct answer,
    and that is simply not true. This worries me, to be honest.

    It is about as intelligent as the Magical Eight Ball.

    When humans "think", they tend to take their entire training
    set, sift it for "relevance", and produce an output. With the
    LLM, it takes your keywords, and only extracts "facts" suggested
    by the keywords. Strangely, the answer lacks all the context it
    could have.
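    That difference can be caricatured in a few lines (the "facts" and both
    lookup strategies below are invented purely for illustration):

```python
# Toy contrast between keyword-triggered lookup -- roughly what the
# text above ascribes to the LLM -- and sifting the whole store for
# relevance, standing in for the context a human brings.

FACTS = {
    "battery":  "lithium cells degrade in heat",
    "charging": "fast charging generates heat",
    "heat":     "heat shortens component life",
}

def keyword_answer(question: str) -> list:
    """Return only facts whose keyword literally appears in the question."""
    return [fact for kw, fact in FACTS.items() if kw in question]

def sifted_answer(question: str) -> list:
    """Consider everything, not just literal keyword hits."""
    return list(FACTS.values())

q = "why does my battery wear out?"
print(len(keyword_answer(q)), "vs", len(sifted_answer(q)))  # 1 vs 3
```

    The keyword lookup misses the two related facts because their trigger
    words never appear in the question - the "answer lacks all the context
    it could have".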

    If I ask the AI to write me a computer program, it does it; simple
    programs are OK to a point, but it does not seem aware it has
    put a bug in the program. Humans are full of that kind of
    context, learned through the experience of actually doing
    the full lifecycle of the programming. The program contains
    the shell of what it could be, but by the time you are finished
    correcting what the AI did, there aren't many unaltered lines of
    code left.

    Paul

    --- MBSE BBS v1.1.2 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)