Maria Sophia wrote:
It appears to me that Occam's Razor tells us that yt-dlp is fine.
I just need a modern FFMPEG (such as ffmpeg-git-full or ffmpeg-release-full).
<https://www.gyan.dev/ffmpeg/builds/>
On 17.01.2026 04:08, Maria Sophia wrote:
Maria Sophia wrote:
It appears to me that Occam's Razor tells us that yt-dlp is fine.
I just need a modern FFMPEG (such as ffmpeg-git-full or ffmpeg-release-full).
<https://www.gyan.dev/ffmpeg/builds/>
cool! I thought yt-dlp update is all you need!
ciao...
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I've also tried using VPN and the same result. It says error 403
forbidden or something like that.
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
On 17/01/2026 1:21 pm, Maria Sophia wrote:
Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I've also tried using VPN and the same result. It says error 403
forbidden or something like that.
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1
(1/10)...
While I haven't downloaded YouTube videos on Windows for a while, since I
use the Android NewPipe YouTube client for that, to help you out I first
used the Aloha VPN browser to visit this adult video page (which in the
past many browsers asked for a login):
<https://www.youtube.com/watch?v=BY82T7Q8hiw>
While this might not play in other browsers without a Google Account,
it plays just fine in the default Aloha VPN browser with VPN turned on.
Clicking your link played just fine in my SeaMonkey Suite without a
Google Account .... after I clicked 'Disallow' on a Pop-Up.
Andy Burns wrote:
what's this weird font:
It's a result of his 'random' headers; it makes Thunderbird (and maybe other newsreaders too) think the text is using Chinese characters.
Content-Type: text/plain; charset=Big5
it has been pointed out that avoiding charset=big5 would be wise.
Hi Andy,
Thanks for taking a stab at why sometimes the charset header is borked.
It's off topic for this conversation, but I'll try to explain what I know.
It is absolutely not caused by the randomized headers for privacy, as they are static, although I understand why it might look related since we talked about that earlier this year. The only header causing it is the charset.
Which I don't overtly touch.
Nor do I randomize.
The charset problem only happens when I copy and paste text from outside my text editor into the editor that feeds the posting scripts, and when I
forget to (or don't think about) converting to ASCII with the Notepad++ conversion script we've discussed, which converts Unicode to ASCII.
If I type everything by hand using the keyboard's 95 characters, the charset never switches to Big5, so it must be happening in my charset recognition binary.
The strange part is that the pasted text often looks fine in my edits.
In this case, it's simply the filespec copied over from a Windows CLI.
But most of the time the issue shows up when the source is a web page, usually in a Chrome-based web browser which, in my case, is the Aloha privacy browser variant, which apparently tries to prettify text. That
means it inserts things like curly quotes, em dashes, non-breaking hyphens, zero-width spaces, and other characters that are not part of plain ASCII.
Once even one non-ASCII character slips in, the script that guesses the encoding gets confused and decides the whole post must be Big5 or whatever.
I don't set it. The binary I picked up off the Internet is doing that. Thunderbird, in your case, I think, then assumes the text is Chinese.
I added some code to inspect the characters in the message so I can track down which one is triggering the mis-detection, but I am still working on that. So I apologize, as this is a quirk of writing your own newsreader.
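For anyone curious, the cleanup being described can be sketched like this. The translation table is illustrative only; it is my own guess at the kind of substitutions involved, not the actual Notepad++ macro discussed in the thread:

```python
# Illustrative Unicode-to-ASCII cleanup, similar in spirit to the
# Notepad++ conversion discussed above (this mapping is a guess,
# not the actual macro).
PRETTIFIED = {
    "\u201c": '"',   # left curly quote
    "\u201d": '"',   # right curly quote
    "\u2018": "'",   # left single quote
    "\u2019": "'",   # right single quote
    "\u2014": "--",  # em dash
    "\u2013": "-",   # en dash
    "\u2026": "...", # ellipsis
    "\u00a0": " ",   # non-breaking space
    "\u200b": "",    # zero-width space
    "\u200c": "",    # zero-width non-joiner
}

def to_ascii(text: str) -> str:
    for ch, repl in PRETTIFIED.items():
        text = text.replace(ch, repl)
    # Drop anything still outside the 7-bit ASCII range.
    return "".join(c for c in text if c.isascii())

print(to_ascii("\u201chello\u201d\u2014world\u2026"))  # "hello"--world...
```

Running text through something like this before posting would stop the "prettified" characters from ever reaching the charset-guessing binary.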
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Schugo wrote:
cool! I thought yt-dlp update is all you need!
ciao...
BTW.. what's this weird font:
Content-Type: text/plain; charset=Big5
I have no idea. It gets inserted somehow. I don't specify the font.
It happens when I copy/paste from the Windows command line for some reason.
On 1/17/2026 11:45 AM, Stan Brown wrote:
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Recent versions of yt-dlp are not compatible with Windows 7, either x32
or x64. ...
On 17.01.2026 22:40, David E. Ross wrote:
On 1/17/2026 11:45 AM, Stan Brown wrote:
On Sat, 17 Jan 2026 00:20:42 +0000, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
I downloaded a couple of them yesterday without issues. I hadn't used
yt-dlp in a while, so I first ran "yt-dlp -U" to get the latest.
Do you by chance still have an older version?
Recent versions of yt-dlp are not compatible with Windows 7, either x32
or x64. ...
huh?
https://github.com/yt-dlp/yt-dlp/releases
yt-dlp 2025.12.08 (Latest)
C:\Users\Admin\_work>yt-dlp --version
2025.12.08
Win7 Ultimate 64
Maria Sophia wrote:
The flow would be from gVIM (which only has the 95 keyboard ASCII
characters when I type, but has funky stuff when I copy/paste) to the
charset filter, then via telnet to the Internet NNTP server, and back to you.
Thanks to Carlos, Andy & Adam Kerman (anonymizer?), I made a change just
now so this is simply a test of whether the change makes any difference.
<https://i.postimg.cc/vHP4gjSC/charset01.jpg>
1. In my custom newsreader, I just now wiped out all headers with these prefixes:
Mime-Version:
Content-Type: (this is where the charset seems to be embedded?????)
Content-Transfer-Encoding:
And, I removed the binary charset filter that I had originally installed
(which I picked up back in the 1990s or so from somewhere on the net).
2. So that we KNOW that change was made, I cloned Adam Kerman's Newsreader:
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
3. The resulting header line for a pure ASCII text became (magically?)
Date: Sat, 17 Jan 2026 23:47:47 -0500
Message-ID: <10kholk$2ej4$1@nnrp.usenet.blueworldhosting.com>
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
Previously, in the post just prior to that post, the header was:
Date: Sat, 17 Jan 2026 23:34:36 -0500
Message-ID: <10khnss$7e6$1@nnrp.usenet.blueworldhosting.com>
... Plus these, which came from my binary charset filter from the 1990s.
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Which I didn't overtly set as it was set by my binary filter that I
added about a month ago when Carlos/Andy first brought this issue up.
So, if the content is purely the 95 characters of my QWERTY USA keyboard, then it seems there is no header line for the character set, which is fine.
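For what it's worth, the "pure ASCII means no charset header needed" logic above is easy to check in code. This is a Python sketch, not the actual posting script:

```python
def needs_charset_header(text: str) -> bool:
    # str.isascii() is True only when every character falls in the
    # 7-bit ASCII range (0-127); in that case us-ascii is the MIME
    # default and no charset declaration is strictly needed.
    return not text.isascii()

print(needs_charset_header("plain keyboard text"))       # False
print(needs_charset_header("curly \u201cquotes\u201d"))  # True
```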
But now for THIS post, I will add, on purpose, Unicode and Chinese text.
a. This line includes a Chinese character: 汉
b. This line includes a Japanese character: あ
c. Funky German characters: ä, ö, ü, ß, Ä, Ö, Ü
d. Non-ASCII currencies: € (euro), £ (pound), ¥ (yen), ₩ (won), ₹ (rupee)
e. Curly quotes: “curly opening” and “curly closing”
f. Dashes: em dash —, en dash –, figure dash ‒
g. Ellipses: …
h. Non-breaking space between these words: hello world
i. Zero-width non-joiner between these letters: a‌b
j. Zero-width space between these letters: x​y
k. A few extra oddballs: ©, ®, †, ‡, ¶, §
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
Thanks for that. I read using the same newsreader that I write with, both of which use gVim as the medium, so I don't think I see what you see.
But may I ask if I statically set the headers to what you suggest, would
the problem disappear when I pasted characters that don't fit UTF-8?
Schugo wrote:
Maria Sophia wrote:
But now for THIS post, I will add, on purpose unicode and chinese text.
a. This line includes a Chinese character: 汉
b. This line includes a Japanese character: あ
c. Funky German characters: ä, ö, ü, ß, Ä, Ö, Ü
d. Non-ASCII currencies: € (euro), £ (pound), ¥ (yen), ₩ (won), ₹ (rupee)
e. Curly quotes: “curly opening” and “curly closing”
f. Dashes: em dash —, en dash –, figure dash ‒
g. Ellipses: …
h. Non-breaking space between these words: hello world
i. Zero-width non-joiner between these letters: a‌b
j. Zero-width space between these letters: x​y
k. A few extra oddballs: ©, ®, †, ‡, ¶, §
By avoiding setting content type and encoding, I suppose you'll get
varying results depending on which newsreader is interpreting it, for me
it doesn't look bad, so thunderbird is presumably guessing at UTF-8
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
That option (for the user to override the encoding of received messages)
was removed a long time ago.
Carlos E.R. wrote:
I have no idea. It gets inserted somehow. I don't specify the font.
Yes you do. We discussed this and told you what to do about a month ago.
It happens when I copy/paste from the Windows command line for some reason.
You told us you used random headers, including Content-Type.
Hi Carlos,
Thanks for remembering what it used to be because it used to be that way.
But it's not that way now.
So wipe that idea out as it's not how it works in my setup.
Having said that, I will say, as I always have said, that I really do not understand how charset headers are set. I just don't understand it.
If I understood it, I'm sure this problem would never occur.
But it does. But rarely. Very rarely. Once in every 100 posts or so.
And, it's no big deal, right?
So a few characters look funny, right?
Note: I don't see what you see because I see only what gVim shows me.
We discussed this, where I have always said a few things about this
problem, one of which is I don't do anything overt in the headers.
As you (and Andy) have astutely noted, I used to use STATIC RANDOM headers. But both of you are confusing STATIC RANDOM headers with RANDOM headers. They're not the same thing. They're not even close to the same thing.
Having explained that random headers is not the same thing as static random headers, after we discussed this, I *removed* the static random headers.
In fact, I removed the entire charset header line from my dictionaries.
So I don't set the charset header at all. It gets set by a binary I filter through, which works fine most of the time, as you can tell from my posts.
I now run the entire post through a filter I picked up somewhere; I
don't even remember where the binary came from, but it generally works.
That's why, likely, you see my last thousand posts (or so) have a header
that seems to fit the characters, which are almost always pure ASCII.
But sometimes things other than ASCII creep in, which I can't possibly
type, so they must be coming from the places I copy & paste from.
Most of the time when I'm heavily merging sources, I run the entire text through the well-discussed Notepad++ XML macro and that cleans it up.
But sometimes I forget.
When I forget, 99 out of 100 times the binary takes care of it.
But 1 out of 100 times it doesn't.
I don't know why.
Loop back to my statement that I really don't know how charset is set.
But let's get back to the topic at hand.
I think, after a few hours of testing, that I have proven the problem that the OP originally asked about isn't really with yt-dlp but with YouTube changing how they do things (whack-a-mole as someone described it).
We're all familiar with the whack-a-mole that companies like Google and
Apple play, where our job is to find a way around the next mole to whack.
I think having the latest yt-dlp is key (but it's almost a year old), but having a JavaScript runtime available is also of critical importance nowadays.
Also having the latest ffmpeg seems to also be critical if we want, in the end, to be able to convert the Youtube video to an MP4 of decent quality.
I'm hoping someone will copy and paste the commands that I put into this thread so that the OP could test it out, so we can confirm that hypothesis.
Have you tried it on Windows or Linux yet?
C:\> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
.....
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
The whole battle between YouTube and downloader developers makes me think
of Whack-a-Mole.
Andy Burns wrote:
Thanks for taking a stab at why sometimes the charset header is borked.
It's not so much a stab, as the definite cause; every message of yours
which includes "charset=big5" indicates to newsreaders that it's a
chinese message, and thunderbird (maybe others) has a different font
setting for chinese, which is why your messages look odd to us (the
microsoft SimSun and YaHei fonts render poorly).
Last time I mentioned it you claimed the message in question did not
have big5 in it, but it did. The message this time is yours with a
timestamp of
Fri, 16 Jan 2026 22:08:20 -0500
about 7 levels up from this one. You've described how your script picks
random headers from a "pool" of possible values, all it would take is
for you to exclude any messages with big5 from that source pool ...
Hi Andy,
I forgot that after our last discussion (oh, maybe a month or so ago?)
I had completely wiped out the dictionary of static random charset headers.
That header is no longer static nor random (I misspoke in a previous post).
I repeat for effect:
I removed all charset headers (every single one!) from my dictionaries.
So I don't physically set *any* charset header when I post from gVim.
The header runs through an old filter I picked up from the 1990s though.
That old binary is what is doing the big5 garbage in 1 out of 500 posts.
I think.
The reason I say "I think" is because not only do I not set the charset header myself, nor do I even see it, but I don't really understand it.
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
b. Find a better (more modern) filter that knows how to set it better
On 17/1/2026 8:20 am, Simon wrote:
It looks like YouTube has blocked video downloads using yt-dlp. Has
anybody noticed this?
.....
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1
(1/10)...
Did you download the latest yt-dlp?
You have to get the most recent version because of changes made by YouTube.
(How _legal_ such downloading is is open to question of course, but the
same applies to any such you may do from YouTube itself.)
On 18.01.2026 09:26, Maria Sophia wrote:
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR Autodetect->Japanese.
Thanks for that as I read using the same newsreader that I write, both of which use gVim as the media so I don't see what you see, I don't think.
But may I ask if I statically set the headers to what you suggest, would the problem disappear when I pasted characters that don't fit UTF-8?
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255
to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
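A small sketch of the conversion Schugo is suggesting: bytes 0-127 are unchanged between the encodings, while a Latin-1 byte in the 128-255 range becomes a two-byte sequence in UTF-8:

```python
# Bytes 0-127 are identical in ASCII, Latin-1 and UTF-8; bytes in the
# 128-255 range are single-byte in Latin-1 but two bytes in UTF-8.
latin1_bytes = "f\u00fcr".encode("latin-1")  # b'f\xfcr' (3 bytes)
utf8_bytes = latin1_bytes.decode("latin-1").encode("utf-8")
print(utf8_bytes)  # b'f\xc3\xbcr' -- the u-umlaut is now two bytes
```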
Andy Burns wrote:
By avoiding setting content type and encoding, I suppose you'll get
varying results depending on which newsreader is interpreting it, for me
it doesn't look bad, so thunderbird is presumably guessing at UTF-8
Hi Andy,
Thanks for confirming that removing all character-set encoding from "my" outgoing headers allowed "your" newsreader to guess at "UTF-8".
In my recent response to Carlos, I learned from his kind help that gVIM may have been creating mojibake (UTF-8 interpreted as Latin-1).
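That kind of mojibake is easy to reproduce. This is a Python illustration of the effect described, not gVim's actual behavior:

```python
# Mojibake: UTF-8 bytes mistakenly decoded as Latin-1.
original = "\u00e4"               # the character a-umlaut
utf8 = original.encode("utf-8")   # b'\xc3\xa4' -- two bytes
garbled = utf8.decode("latin-1")  # each byte read as its own character
print(garbled)  # the two characters U+00C3 U+00A4 instead of one a-umlaut
```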
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Thanks for that suggestion where this post will have the following set:
Content-Type: text/plain; charset=UTF-8; format=flowed
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR
Autodetect->Japanese.
That option (for the user to override the encoding of received messages)
was removed a long time ago.
It's great that you're helping Schugo. To let you know that I've inserted the charset headers above, I've cloned Schugo's newsreader line:
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.22
If you see *that* line in my headers, it means I'm automatically
inserting the UTF-8 charset (instead of not inserting any charset header).
Let's repeat the test where this is generated by an LLM for me:
a. This line includes a Chinese character: 汉 (U+6C49)
b. This line includes a Japanese character: あ (U+3042)
c. Funky German characters: ä (U+00E4), ö (U+00F6), ü (U+00FC), ß (U+00DF),
Ä (U+00C4), Ö (U+00D6), Ü (U+00DC)
d. Non-ASCII currencies: € (U+20AC), £ (U+00A3), ¥ (U+00A5), ₩ (U+20A9), ₹ (U+20B9)
e. Curly quotes: “ (U+201C) and ” (U+201D)
f. Dashes: — (U+2014), – (U+2013), ‒ (U+2012)
g. Ellipses: … (U+2026)
h. Non-breaking space between these words: hello world (NBSP U+00A0)
i. Zero-width non-joiner between these letters: a‌b (ZWNJ U+200C)
j. Zero-width space between these letters: x​y (ZWSP U+200B)
k. A few extra oddballs: © (U+00A9), ® (U+00AE), † (U+2020), ‡ (U+2021), ¶ (U+00B6), § (U+00A7)
Mr. Man-wai Chang wrote:
(How _legal_ such downloading is is open to question of course, but the
same applies to any such you may do from YouTube itself.)
As long as you don't make money out of it nor re-publish it nor claim
ownership.
You are merely recording TV shows using a (software) VCR recorder. :)
With yt-dlp, we're also not agreeing to anything on Google's YouTube site!
If other folks can test this for the OP, we can figure out all the issues. I tested it on Windows and found that a few tweaks fixed everything. ...
Windows:
yt-dlp --js-runtimes node ^
-f "bv*+ba/b" ^
--merge-output-format mp4 ^
https://www.youtube.com/watch?v=BY82T7Q8hiw
Well that depends; in the early 80s we had to deal with printers that didn't want to print GBP currency symbols but could be configured to substitute a pound symbol for the hash symbol "#" instead.
Schugo <schugo@schugo.de> wrote:
On 18.01.2026 09:26, Maria Sophia wrote:
Schugo wrote:
The correct headers would be:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without, the unicode chars look broken.
I have to manually switch Decoding->Western(default) to Decoding->Unicode OR
Autodetect->Japanese.
Thanks for that as I read using the same newsreader that I write, both of which use gVim as the media so I don't see what you see, I don't think.
But may I ask if I statically set the headers to what you suggest, would the problem disappear when I pasted characters that don't fit UTF-8?
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255 to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
128-255 are in the 8-bit range and are not ASCII. There are any number
of different 8-bit character sets in use. These must be declared. The
user must tell the system which one was used as there's no way to parse.
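A quick illustration of that point: the same byte decodes to different characters under different 8-bit charsets, so a receiver cannot deduce the set from the bytes alone (Python sketch):

```python
# The same byte means different things under different 8-bit charsets,
# so a parser cannot tell which set was used; the sender must declare it.
raw = bytes([0xA4])
print(raw.decode("latin-1"))      # currency sign in ISO-8859-1
print(raw.decode("iso-8859-15"))  # euro sign in ISO-8859-15
print(raw.decode("cp1252"))       # currency sign again in Windows-1252
```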
On 2026-01-18 05:34, Maria Sophia wrote: [...]
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
And then the usenet server will either refuse your post or add one.
Adam H. Kerman wrote:
Andy Burns <usenet@andyburns.uk> wrote:
Well that depends, in the early 80's we had to deal with printers that didn't want to print GBP currency symbols but could be configured to substitute a hash symbol "#" to a pound symbol instead.
I thought teletypewriters performed "L" backspace overstrike "B".
I recognise L = from DEC compose sequences, not heard of L B
Traditionally, the cent sign wasn't on a keyboard either.
Surely it was a dot matrix? You declared your code page, then got the needed symbol.
yes dot matrix, these were connected to CP/M type systems, they didn't define codepages like DOS
Some of these code pages became ISO-8859-X character
sets.
Carlos E.R. <robin_listas@es.invalid> wrote:
On 2026-01-18 05:34, Maria Sophia wrote: [...]
Specifically, I don't understand why it is required in the first place.
I think I only have two choices:
a. Remove the charset altogether in every case, or,
And then the usenet server will either refuse your post or add one.
If the text is pure ASCII and no charset=... is declared, a news server should not set a charset=..., nor should it refuse the post, because if undeclared, us-ascii is the default character set.
Note that nearly all my posts - and probably/hopefully this one -
don't have any MIME headers (Mime-Version:, Content-Type:, Content-Transfer-Encoding:, etc.), because they're not needed.
[...]
Carlos E.R. wrote:
Loop back to my statement that I really don't know how charset is set.
But let's get back to the topic at hand.
Your vi editor will use some Windows charset, always the same one. Some Unicode. You just have to tell the script to use that same encoding.
And stop playing around; stop doing conversions with Notepad.
Hi Carlos,
A lot of people on Usenet use only their keyboard to enter their articles.
If all I ever did was type using a keyboard, there will only be about 94 unique characters (letters, numbers, symbols, punctuation, etc.).
As for gVim, I have no idea what the characterset is so I just looked:
:set encoding? ==> this reported "encoding=latin1"
latin1 means: single-byte Western European encoding
:set fileencoding? ==> this reported nothing
nothing means the current file is using the same (latin1) encoding
:set fileencodings? ==> this reported "fileencodings=ucs-bom"
ucs-bom means UTF-16/UTF-32 with BOM (Byte Order Mark)
Apparently a BOM is a small marker placed at the beginning of a text file
that tells software which Unicode encoding the file uses.
So I found my vimrc and changed it to make it a more modern setup.
:echo $MYVIMRC --> C:\users\username\_vimrc
C:\> gvim C:\users\username\_vimrc
set encoding=utf-8
set fileencoding=utf-8
set fileencodings=utf-8,ucs-bom,latin1
Now ":set encoding" reports utf-8 by default, so Latin1 fallback will happen only when gVim thinks it's necessary. Let's see if that helps or not.
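One way to sanity-check a file saved from gVim after that change is to test whether its raw bytes decode cleanly as UTF-8. This is an illustrative Python snippet, not part of the posting scripts:

```python
import os
import tempfile

def is_valid_utf8(path: str) -> bool:
    # A file is valid UTF-8 iff its raw bytes decode without error.
    try:
        with open(path, "rb") as f:
            f.read().decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Demo: a Latin-1 encoded u-umlaut (the single byte 0xFC) is not valid UTF-8.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as tmp:
    tmp.write("f\u00fcr".encode("latin-1"))
print(is_valid_utf8(tmp.name))  # False
os.unlink(tmp.name)
```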
I think, after a few hours of testing, that I have proven the problem that the OP originally asked about isn't really with yt-dlp but with YouTube
changing how they do things (whack-a-mole as someone described it).
We're all familiar with the whack-a-mole that companies like Google and
Apple play, where our job is to find a way around the next mole to whack.
I think having the latest yt-dlp is key (but it's almost a year old), but
also having to run java runtimes is of critical importance nowadays.
Also having the latest ffmpeg seems to also be critical if we want, in the
end, to be able to convert the Youtube video to an MP4 of decent quality.
I'm hoping someone will copy and paste the commands that I put into this
thread so that the OP could test it out, so we can confirm that hypothesis.
Have you tried it on Windows or Linux yet?
C:\> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
cer@Telcontar:~/Videos/tmp> yt-dlp -t mp4 https://www.youtube.com/watch?v=BY82T7Q8hiw
[youtube] Extracting URL: https://www.youtube.com/watch?v=BY82T7Q8hiw
[youtube] BY82T7Q8hiw: Downloading webpage
WARNING: [youtube] No supported JavaScript runtime could be found. Only deno is enabled by default; to use another runtime add --js-runtimes RUNTIME[:PATH] to your command/config. YouTube extraction without a JS runtime has been deprecated, and some formats may be missing. See https://github.com/yt-dlp/yt-dlp/wiki/EJS for details on installing one
[youtube] BY82T7Q8hiw: Downloading android sdkless player API JSON
[youtube] BY82T7Q8hiw: Downloading web safari player API JSON
WARNING: [youtube] BY82T7Q8hiw: Some web_safari client https formats have been skipped as they are missing a url. YouTube is forcing SABR streaming for this client. See https://github.com/yt-dlp/yt-dlp/issues/12482 for more details
[youtube] BY82T7Q8hiw: Downloading m3u8 information
WARNING: [youtube] BY82T7Q8hiw: Some web client https formats have been skipped as they are missing a url. YouTube is forcing SABR streaming for this client. See https://github.com/yt-dlp/yt-dlp/issues/12482 for more details
[info] BY82T7Q8hiw: Downloading 1 format(s): 301
[download] Sleeping 7.00 seconds as required by the site...
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 399
[download] Destination: Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (1/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (2/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 1 (3/10)...
... ... ... ... ...
-rw-r--r-- 1 cer users 0 Jan 18 13:39 Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4.part
-rw-r--r-- 1 cer users 50 Jan 18 13:39 Denys Davydov-@DenysDavydov-Update from Ukraine ? Great! Ruzzian Army Failed Again, USA captured one more Rus Tanker-BY82T7Q8hiw.mp4.ytdl
Thanks for running that test for the OP. It's very illustrative.
1. No supported JavaScript runtime could be found
yt-dlp can't get normal URLs so it has been falling back to HLS.
2. "Some formats have been skipped... missing a url."
YouTube is forcing SABR streaming
3. "YouTube is forcing SABR streaming for this client."
YouTube no longer provides direct MP4/WEBM URLs for many clients.
Only HLS or DASH fragments are available.
Those fragments require JS-executed signature logic.
Without JS, yt-dlp can't compute the signatures -> fragments 403
It seems maybe that you're dealing with the current YouTube "SABR-only" (Segmented Adaptive Bitrate) rollout colliding with an older yt-dlp build and the absence of a JavaScript runtime.
Apparently YouTube is increasingly refusing to serve classic MP4/WEBM URLs and instead forcing HLS fragments that require JS-based decryption logic. Without a JS runtime, yt-dlp can't execute that logic, so it falls back to HLS... and then the fragments 403.
From what I've determined in my hours-long testing sequence, yt-dlp now requires a JS runtime (Node, Bun, or Java) for many YouTube formats.
Without it, yt-dlp can't run the decryption code that YouTube now embeds in JS. When I inserted Node, I avoided that specific error on Windows.
You only have deno, which, apparently, is not fully supported for YouTube's current EJS extraction.
If you want to fix it, you'll need to do what I did, which was install a
real JS runtime such as Node.js, Java (OpenJDK), Bun, or GraalJS.
Worse, apparently Google is changing things daily, so we need to update to the nightly yt-dlp builds to get the latest daily SABR-related fixes.
I also needed to update ffmpeg, which is required for the muxing/merging.
For linux, I think this command will be better than the Windows command:
$ yt-dlp --js-runtimes node -f "bv*+ba/b" https://www.youtube.com/watch?v=BY82T7Q8hiw
What we need, if we all want to keep using yt-dlp moving forward, is for
other kind, helpful people like you to test this for the OP.
On 18.01.2026 18:06, Adam H. Kerman wrote:
Schugo <schugo@schugo.de> wrote:
. . .
ASCII 7bit (0-127) are identical. Maybe you have to convert ASCII 128-255 to their UTF8 equivalent. Never seen UTF16 or 32 in mail or web.
128-255 are in the 8-bit range and are not ASCII. There are any number
of different 8-bit character sets in use. These must be declared. The
user must tell the system which one was used as there's no way to parse.
It's Extended ASCII. Most likely ISO 8859-1 ("ISO Latin 1"), except when there are characters in the control-codes range; then you can assume
Windows-1252. Cyrillic, Greek or Eastern languages are today all encoded
in UTF-8.
ciao..
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Andy Burns <usenet@andyburns.uk> wrote:
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Correct. Schugo was wrong. ISO-8859-1 should not be assumed. The user
needs to declare it.
On 19.01.2026 00:13, Adam H. Kerman wrote:
Andy Burns <usenet@andyburns.uk> wrote:
Adam H. Kerman wrote:
There were any number of 8-bit character sets.
And a parser can't tell which set, since bytes are just bytes.
Correct. Schugo was wrong. ISO-8859-1 should not be assumed. The user
needs to declare it.
and when it's not declared? better assume that than display nothing at all...
On Sun, 18 Jan 2026 14:08:32 +0000, J. P. Gilliver wrote:
(How _legal_ such downloading is is open to question of course ...
I don't see how it could be declared illegal, just based on the
website's decree. After all, your web browser has to do exactly the
same kind of accesses in order to let you view the video. Why
should access by one piece of software be "legal" and another not?
....
Where --js-runtimes node tells yt-dlp to use Node.js
so YouTube's JS signature code can run;
-f "bv*+ba/b" selects best video + best audio,
or the best single file if needed;
--merge-output-format mp4 forces the final output to be MP4
after merging.
Did I mention I never understood this character encoding stuff yet?
MIME rules say that if you don't specify a transfer encoding, the default
is 7 bit, so I'm gonna *add* a second line now to my static headers.
Content-Transfer-Encoding: 8bit
But I think someone said that the charset encoding header isn't even required, so, since mine is "static" currently, why set it at all?
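For comparison, Python's stdlib email module shows which MIME headers a conforming generator attaches by default. This is just a sketch of the standard behavior, not the poster's actual setup; note the 8bit transfer encoding is requested explicitly in the second example:

```python
# Sketch using Python's stdlib email module to see which MIME headers
# a conforming generator attaches; not the poster's actual scripts.
from email.message import EmailMessage

ascii_msg = EmailMessage()
ascii_msg.set_content("plain ascii body")
print(ascii_msg["Content-Type"])               # text/plain; charset="utf-8"
print(ascii_msg["Content-Transfer-Encoding"])  # 7bit (ASCII-only body)

unicode_msg = EmailMessage()
unicode_msg.set_content("non-ascii: \u00e4", cte="8bit")
print(unicode_msg["Content-Transfer-Encoding"])  # 8bit (requested explicitly)
```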
. . .
Carlos E.R. wrote:
latin1 means: single-byte Western European encoding
:set fileencoding? ==> this reported nothing
nothing means the current file is using the same (latin1) encoding
:set fileencodings? ==> this reported "fileencodings=ucs-bom"
ucs-bom means UTF-16/UTF-32 with BOM (Byte Order Mark)
Apparently a BOM is a small marker placed at the beginning of a
text file
that tells software which Unicode encoding the file uses.
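That BOM-sniffing idea can be shown with a stdlib sketch. The helper name below is mine, not a standard API, and real editors do more than this.

```python
# Sketch: how software can detect which Unicode encoding a file uses
# from its BOM. The helper name is illustrative, not a standard API.
import codecs
from typing import Optional

def sniff_bom(data: bytes) -> Optional[str]:
    """Return a codec name if data starts with a known BOM, else None."""
    # UTF-32-LE's BOM begins with UTF-16-LE's bytes, so test UTF-32 first.
    candidates = [
        (codecs.BOM_UTF32_BE, "utf-32-be"),
        (codecs.BOM_UTF32_LE, "utf-32-le"),
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF16_BE, "utf-16-be"),
        (codecs.BOM_UTF16_LE, "utf-16-le"),
    ]
    for bom, name in candidates:
        if data.startswith(bom):
            return name
    return None

print(sniff_bom("hi".encode("utf-8-sig")))  # utf-8-sig
print(sniff_bom(b"no bom here"))            # None
```

A file with no BOM (the common case for UTF-8) gives None, which is why fallback lists like vim's fileencodings exist.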
So I found my vimrc and changed it to make it a more modern setup.
:echo $MYVIMRC --> C:\users\username\_vimrc
C:\> gvim C:\users\username\_vimrc
set encoding=utf-8
set fileencoding=utf-8
set fileencodings=utf-8,ucs-bom,latin1
Now ":set encoding" reports utf-8 by default, so the Latin1 fallback will
happen only when gVim thinks it's necessary. Let's see if that helps or not.
With that, you should not need to use your notepad trick for conversion.
Thanks to you and Andy and others like Schugo and Adam Kerman I think
we've made great progress, in that both gVim and the charset encoding
headers have been fixed as of today. Now, when I paste the prior list
of funky stuff,
the gVim session reports utf-8 (so it will only get to latin-1 if needed).
Let's see how it goes with these explicit static utf-7 charset headers:
Content-Type: text/plain; charset=UTF-8; format=flowed
As far as I can tell, you installed Node but yt-dlp still had problems.
a. YouTube is not giving direct MP4/WEBM URLs for that situation
b. So yt-dlp is being forced into a SABR/HLS-only client profile
c. Where format 399 is an HLS/DASH segmented stream
d. Which requires signature decryption
e. But SABR extractor code changed in late 2024 & again in early 2025
f. Therefore, even with Node installed, your yt-dlp build is too old
to handle the new SABR logic
How can you fix this?
A. You must upgrade to the yt-dlp nightly build
<https://github.com/yt-dlp/yt-dlp-nightly-builds/releases>
B. Because the stable release is almost a year old & cannot handle SABR
C. Then run this command again.
yt-dlp --js-runtime node \
-f "bv*+ba/b" \
--merge-output-format mp4 \
https://www.youtube.com/watch?v=BY82T7Q8hiw
This command forces the Android client, which often avoids SABR altogether:
yt-dlp \
--js-runtime node \
--extractor-args "youtube:player_client=android" \
-f "bv*+ba/b" \
--merge-output-format mp4 \
https://www.youtube.com/watch?v=BY82T7Q8hiw
This prints all available formats:
yt-dlp \
--js-runtime node \
-F \
https://www.youtube.com/watch?v=BY82T7Q8hiw
Andy Burns wrote:
Let's see how it goes with these explicit static utf-7 charset headers:
I presume that's a typo? utf-7 did/does exist, but it's a real basket
case ...
Hi Andy,
Thanks for catching that. Yes, that's a typo. But I looked it up based
on your query just now, and it appears that since I'm not specifying the
transfer encoding, the default becomes 7-bit.
Not 8-bit. Sigh.
Did I mention I never understood this character encoding stuff yet?
MIME rules say that if you don't specify a transfer encoding, the
default is 7 bit, so I'm gonna *add* a second line now to my static
headers.
Content-Transfer-Encoding: 8bit
But I think someone said that the charset encoding header isn't even required, so, since mine is "static" currently, why set it at all?
Looking up "why set it at all": apparently if it's missing, the default
is US-ASCII per the MIME and Usenet RFCs, since Usenet articles follow
the same MIME rules as email (RFC 2045-2047). (Note that format=flowed
is defined in RFC 3676, which handles wrapping and quoting.)
Without a charset, the default is US-ASCII (per RFC 1521 and RFC 2046)
but I could be wrong on that as others have seen that it's UTF-8 in
their chosen newsreaders. I want to avoid mojibake if/when at all possible.
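Those defaults are easy to check with Python's stdlib email parser. A sketch (the header values are the ones proposed in this thread): parse an article carrying the proposed headers, then one that declares nothing, and see what a MIME-aware reader extracts.

```python
# Sketch: what a MIME-aware reader extracts from the proposed headers,
# versus an article that declares nothing. Stdlib only.
from email import message_from_string

article = (
    "Content-Type: text/plain; charset=UTF-8; format=flowed\n"
    "Content-Transfer-Encoding: 8bit\n"
    "\n"
    "body text\n"
)
msg = message_from_string(article)
print(msg.get_content_type())                 # text/plain
print(msg.get_content_charset())              # utf-8
print(msg.get("Content-Transfer-Encoding"))   # 8bit

bare = message_from_string("Subject: hi\n\nbody\n")
print(bare.get_content_type())     # text/plain (MIME's default)
print(bare.get_content_charset())  # None: readers then fall back, per
                                   # RFC 2046 that fallback is US-ASCII
```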
To identify headers with the changes, I'll clone Paul's newsreader:
User-Agent: Ratcatcher/2.0.0.25 (Windows/20130802)
So if you see that in my header, I'll have statically added these 2 lines:
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
As for the original problem, I think the solution is simple for all of us.
a. Google keeps changing the YouTube streaming API, so the old
extraction methods broke on yt-dlp in predictable ways.
b. Because of this, the official yt-dlp release is now too old.
Everyone has to use "a" relatively recent nightly build to keep up.
c. YouTube has moved more of its URL and signature logic into
JavaScript, so yt-dlp also now requires a JS runtime such as
Node.js, Java, Bun, or similar.
If others test this out for the OP, let me know if that's correct.
Carlos E.R. wrote:
But I think someone said that the charset encoding header isn't even
required, so, since mine is "static" currently, why set it at all?
If you don't set it up, clients make assumptions, which may or not be
correct.
One other line I could add, if it matters, is the MIME version line:
Mime-Version: 1.0
But does it even matter?
Has anyone seen any other mime version used in Usenet headers than 1.0?
a. Google keeps changing the YouTube streaming API, so the old
extraction methods broke on yt-dlp in predictable ways.
b. Because of this, the official yt-dlp release is now too old.
Everyone has to use "a" relatively recent nightly build to keep up.
c. YouTube has moved more of its URL and signature logic into
JavaScript, so yt-dlp also now requires a JS runtime such as
Node.js, Java, Bun, or similar.
If others test this out for the OP, let me know if that's correct.
I can not test nightly.
I apologize for not being clear.
You don't need to "test nightly" nor update nightly the yt-dlp binary.
You just need to grab "a" nightly build. Any one of them will work.
That's because the release build from March 2025 doesn't have what is
needed but *all* the recent nightly yt-dlp builds have what is needed.
The good news is we've "solved" the OP's problem with this information.
Thanks for testing yt-dlp on Linux for the team.
Can't. I'm using a distribution with "veteran" versions of things.
yt-dlp can not be updated directly, I have to wait till the
distribution updates it, solving the problems with Python.
Did I mention I never understood this character encoding stuff yet?
I recognise L = from DEC compose sequences ...
On Sun, 1/18/2026 11:35 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Jan 2026 14:08:32 +0000, J. P. Gilliver wrote:
(How _legal_ such downloading is is open to question of course ...
I don't see how it could be declared illegal, just based on the
website's decree. After all, your web browser has to do exactly the
same kind of accesses in order to let you view the video. Why
should access by one piece of software be "legal" and the other not?
You would not think DMCA was a law, until you're brought up on
charges that is.
On Mon, 19 Jan 2026 04:01:37 -0500, Paul wrote:
On Sun, 1/18/2026 11:35 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Jan 2026 14:08:32 +0000, J. P. Gilliver wrote:
(How _legal_ such downloading is is open to question of course ...
I don't see how it could be declared illegal, just based on the
website's decree. After all, your web browser has to do exactly the
same kind of accesses in order to let you view the video. Why
should access by one piece of software be legal and the other not?
You would not think DMCA was a law, until you're brought up on
charges that is.
You seem to be saying that the illegality need not be in getting the
data from the site, but what software I run on my own computer to do
so.
Carlos E.R. wrote:
How can you fix this?
A. You must upgrade to the yt-dlp nightly build
<https://github.com/yt-dlp/yt-dlp-nightly-builds/releases>
Can't. I'm using a distribution with "veteran" versions of things. yt-dlp can not be updated directly, I have to wait till the distribution updates it, solving the problems with Python.
I would have to install the Tumbleweed distro, an upstream rolling-release distribution, in a virtual machine, and then I would get the most recent yt-dlp too.
Your distribution ships an older Python stack and an older yt-dlp.
Because of that, you cannot install a current yt-dlp without breaking
system Python or bypassing the package manager.
YouTube has changed its streaming API (SABR, new signatures, new JS
logic), and the old yt-dlp in your distro cannot handle these changes.
The only versions that work reliably today are the nightly builds.
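One common workaround for "can't update without touching system Python" is an isolated virtual environment built with the stdlib venv module. This is a sketch under the assumption that a Python 3 interpreter is available at all; the pip step at the end is indicative and is not run here, and it would only help where the base interpreter itself is new enough for yt-dlp.

```python
# Sketch: use the stdlib venv module to build an isolated environment,
# so a current yt-dlp could be installed without touching the distro's
# system Python packages. The pip install step is shown but not run.
import sys
import tempfile
import venv
from pathlib import Path

def make_isolated_env(target: Path, with_pip: bool = False) -> Path:
    """Create a venv at `target` and return the path to its python."""
    venv.EnvBuilder(with_pip=with_pip).create(target)
    if sys.platform == "win32":
        return target / "Scripts" / "python.exe"
    return target / "bin" / "python"

if __name__ == "__main__":
    env_dir = Path(tempfile.mkdtemp()) / "ytdlp-env"
    py = make_isolated_env(env_dir)
    print(py.exists())
    # With with_pip=True and network access, one would then run:
    #   <env>/bin/python -m pip install --pre yt-dlp
```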
On 20.01.2026 00:14, Maria Sophia wrote:
Carlos E.R. wrote:
How can you fix this?
A. You must upgrade to the yt-dlp nightly build
<https://github.com/yt-dlp/yt-dlp-nightly-builds/releases>
Can't. I'm using a distribution with "veteran" versions of things. yt-dlp can not be updated directly, I have to wait till the distribution updates it, solving the problems with Python.
I would have to install the Tumbleweed distro, an upstream rolling-release distribution, in a virtual machine, and then I would get the most recent yt-dlp too.
Your distribution ships an older Python stack and an older yt-dlp.
Because of that, you cannot install a current yt-dlp without breaking
system Python or bypassing the package manager.
YouTube has changed its streaming API (SABR, new signatures, new JS
logic), and the old yt-dlp in your distro cannot handle these changes.
The only versions that work reliably today are the nightly builds.
but then I wonder why it works fine with my stable 2025.12.08 on Win?
mp4 with audio and no 403 errors.
On Mon, 19 Jan 2026 04:01:37 -0500, Paul wrote:
On Sun, 1/18/2026 11:35 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Jan 2026 14:08:32 +0000, J. P. Gilliver wrote:
(How _legal_ such downloading is is open to question of course ...
I don't see how it could be declared illegal, just based on the
website's decree. After all, your web browser has to do exactly the
same kind of accesses in order to let you view the video. Why
should access by one piece of software be "legal" and the other not?
You would not think DMCA was a law, until you're brought up on
charges that is.
You seem to be saying that the illegality need not be in getting the
data from the site, but what software I run on my own computer to do
so.
On 2026/1/20 6:21:0, Lawrence D'Oliveiro wrote:
On Mon, 19 Jan 2026 04:01:37 -0500, Paul wrote:
On Sun, 1/18/2026 11:35 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Jan 2026 14:08:32 +0000, J. P. Gilliver wrote:
(How _legal_ such downloading is is open to question of course ...
I don't see how it could be declared illegal, just based on the
website's decree. After all, your web browser has to do exactly the
same kind of accesses in order to let you view the video. Why
should access by one piece of software be legal and the other not?
You would not think DMCA was a law, until you're brought up on
charges that is.
You seem to be saying that the illegality need not be in getting the
data from the site, but what software I run on my own computer to do
so.
It's not the getting, it's the keeping. If you do it their way, you only
view it as you download it (give or take buffering delays). [With ad.s
too, probably.]
J. P. Gilliver <G6JPG@255soft.uk> wrote:
It's not the getting, it's the keeping. If you do it their way, you only
view it as you download it (give or take buffering delays). [With ad.s
too, probably.]
It's the transferring of the downloaded copy to a third party that has
no right to receive the copy. You may download it for your later viewing pleasure.
You've obtained a copy legally. You may copy it without infringing upon copyright for your own use, but to transfer it legally, you transfer the
copy you obtained legally and then cannot retain any of the copies you
made for your own convenience.
a. YouTube can dislike downloading, but a ToS violation is not a crime
So the picture looks like this from the perspective of the user.
a. yt-dlp is not bypassing a TPM
b. DMCA does not apply without a TPM
c. Personal time shifting has long been treated as fair use
copyrighted works, not at someone saving a Ukraine video before it gets
On 2026/1/20 18:59:24, Adam H. Kerman wrote:
J. P. Gilliver <G6JPG@255soft.uk> wrote:
[]
It's not the getting, it's the keeping. If you do it their way, you only
view it as you download it (give or take buffering delays). [With ad.s
too, probably.]
It's the transferring of the downloaded copy to a third party that has
no right to receive the copy. You may download it for your later viewing
pleasure.
You've obtained a copy legally. You may copy it without infringing upon
copyright for your own use, but to transfer it legally, you transfer the
copy you obtained legally and then cannot retain any of the copies you
made for your own convenience.
Don't some such websites have something in their terms of use
specifically prohibiting downloading, other than that intrinsically
involved in immediate or near-immediate viewing-at-time-of-downloading?
On 2026/1/20 18:28:31, Maria Sophia wrote:
[]
a. YouTube can dislike downloading, but a ToS violation is not a crime
Crime, as in criminal act, no. Probably doesn't stop them suing you for
breach of contract, though - probably at your expense (I haven't looked
at their ToS lately, but I'd be surprised if it doesn't include the
usual blank cheque for [their] lawyers).
[]
So the picture looks like this from the perspective of the user.
a. yt-dlp is not bypassing a TPM
b. DMCA does not apply without a TPM
c. Personal time shifting has long been treated as fair use
Time shifting, for the purpose of watching once at a more convenient
time, on the whole, no. Downloading to keep - and watch as many times as
you like - may be seen as depriving the copyright holder of revenue;
this extends back to the days of video recorders, and even before that,
recording audio.
. . .
Don't some such websites have something in their terms of use
specifically prohibiting downloading, other than that intrinsically
involved in immediate or near-immediate
viewing-at-time-of-downloading?
On Mon, 19 Jan 2026 23:16:08 +0100, Carlos E.R. wrote:
Can't. I'm using a distribution with "veteran" versions of things.
yt-dlp can not be updated directly, I have to wait till the
distribution updates it, solving the problems with Python.
I don't bother running the distro package (Debian, in my case). I just
make a local copy of the Git source repo, and run it straight out of
there. No special installation needed, and keeping it up-to-date is
easy.
J. P. Gilliver <G6JPG@255soft.uk> wrote:
On 2026/1/20 18:59:24, Adam H. Kerman wrote:
J. P. Gilliver <G6JPG@255soft.uk> wrote:
[]
It's not the getting, it's the keeping. If you do it their way, you only
view it as you download it (give or take buffering delays). [With ad.s
too, probably.]
It's the transferring of the downloaded copy to a third party that has
no right to receive the copy. You may download it for your later viewing
pleasure.
You've obtained a copy legally. You may copy it without infringing upon
copyright for your own use, but to transfer it legally, you transfer the
copy you obtained legally and then cannot retain any of the copies you
made for your own convenience.
Don't some such websites have something in their terms of use
specifically prohibiting downloading, other than that intrinsically
involved in immediate or near-immediate viewing-at-time-of-downloading?
That's called an adhesion contract. You didn't negotiate nor agree to
its terms.
Despite what Big Content wants you to believe, your obligation is not to infringe upon copyright as that's the law. It is not your obligation to
abide by their marketing nor business model.
On 2026/1/20 21:19:34, Adam H. Kerman wrote:
J. P. Gilliver <G6JPG@255soft.uk> wrote:
On 2026/1/20 18:59:24, Adam H. Kerman wrote:
J. P. Gilliver <G6JPG@255soft.uk> wrote:
[]
It's not the getting, it's the keeping. If you do it their way, you only
view it as you download it (give or take buffering delays). [With ad.s
too, probably.]
It's the transferring of the downloaded copy to a third party that has
no right to receive the copy. You may download it for your later viewing
pleasure.
You've obtained a copy legally. You may copy it without infringing upon
copyright for your own use, but to transfer it legally, you transfer the
copy you obtained legally and then cannot retain any of the copies you
made for your own convenience.
Don't some such websites have something in their terms of use
specifically prohibiting downloading, other than that intrinsically
involved in immediate or near-immediate viewing-at-time-of-downloading?
That's called an adhesion contract. You didn't negotiate nor agree to
its terms.
Despite what Big Content wants you to believe, your obligation is not to
infringe upon copyright as that's the law. It is not your obligation to
abide by their marketing nor business model.
There's copyright law, and contract law.
IANAL; my dim understanding is
it's only (in this context) copyright infringement that's a criminal
act, but civil cases can cost you as much or more - and by continuing to
use some websites, you in theory have accepted the contract terms. Which
may be that you only watch the content once (or again by visiting the
site again).
In practice, it's generally accepted that they don't go after
individuals, for reasons already discussed, but they _may_ be able to
and sometimes _do_ - and those with the deepest pockets to pay lawyers
will win. :-(
J. P. Gilliver <G6JPG@255soft.uk> wrote:
[]
There's copyright law, and contract law.
You didn't negotiate that contract. Despite "click here to accept terms;
we now own all your body parts", it's not a contract without a meeting
of minds.
On 2026/1/21 5:46:5, Adam H. Kerman wrote:
J. P. Gilliver <G6JPG@255soft.uk> wrote:
There's copyright law, and contract law.
You didn't negotiate that contract. Despite "click here to accept terms;
we now own all your body parts", it's not a contract without a meeting
of minds.
If you didn't read the terms before clicking, more fool you.
. . .
The video is almost always copyrighted & we don't have permission, unless
I. The video is in the public domain, or,
II. The uploader explicitly licensed it for download, or,
III. We have a statutory exception (e.g., fair use)
Downloading for personal, non-commercial use can sometimes fall under fair
use, especially for:
a. Commentary
b. Criticism
c. Research
d. Archival purposes
e. Time-shifting (watching later)
But fair use is a defense, not a guaranteed right. It's case-by-case.
Uploading, redistributing, or re-posting the downloaded video is almost
always infringement unless we have permission.
. . .
Adam H. Kerman wrote:
Maria Sophia <pusvul@getTjewytR4so+mqe2.invalid> wrote:
The video is almost always copyrighted & we don't have permission, unless >>> I. The video is in the public domain, or,
II. The uploader explicitly licensed it for download, or,
III. We have a statutory exception (e.g., fair use)
Copyright exists as soon as something is in fixed form. Even if the viewer
has permission, it's still copyrighted.
Fair use is another issue, about the right to incorporate it into
something else. Writing criticism or review or analysis or history may
require using a portion of the material that's copyrighted for context.
Downloading for personal, non-commercial use can sometimes fall under fair
use, especially for:
a. Commentary
b. Criticism
c. Research
d. Archival purposes
e. Time-shifting (watching later)
This applies to downloading. I don't agree it falls under fair use.
But fair use is a defense, not a guaranteed right. It's case-by-case.
Uploading, redistributing, or re-posting the downloaded video is almost
always infringement unless we have permission.
It depends on whether a legally-owned copy has been transferred. The
owner can transfer copies he owns as long as he doesn't retain his own
copy. He is prohibited from copying to transfer then retaining a copy.
At point of transfer, he no longer owns it as personal property.
Thanks for that useful perspective. I think we may be approaching the same
set of questions from slightly different angles, so let me separate the
issues more clearly than I did in my prior technical post.
1. Copyright exists the moment a work is fixed in a tangible medium.
We agree on that. But the tangent in this thread is not whether the video
is copyrighted. It is whether downloading a copy without permission is
automatically infringement. That is a different question.
2. Permission and copyright are not the same thing.
A work can be copyrighted and still be lawfully copied under a statutory
exception. That is the point of fair use. Fair use is not limited to
incorporating material into a new work. Courts have held that personal
time shifting, research, and archival copying can qualify depending on the
facts. It is not guaranteed but it is not excluded either.
4. Whether downloading a YouTube video is fair use depends on the purpose
and the circumstances.
It is not automatically fair use and it is not automatically infringement.
It is a fact-specific analysis. That is why I said it can sometimes fall
under fair use, not that it always does.
5. The first sale doctrine does not apply to digital copies you download.
First sale applies only to the distribution right and only to lawfully
made copies that you own as physical property. Downloading a file from
YouTube does not create a lawfully owned copy under first sale. Courts
have rejected attempts to apply first sale to digital transfers because
the transfer necessarily creates a new copy.
. . .
On 2026-01-20 05:45, Lawrence D'Oliveiro wrote:
On Mon, 19 Jan 2026 23:16:08 +0100, Carlos E.R. wrote:
Can't. I'm using a distribution with "veteran" versions of things.
yt-dlp can not be updated directly, I have to wait till the
distribution updates it, solving the problems with Python.
I don't bother running the distro package (Debian, in my case). I just
make a local copy of the Git source repo, and run it straight out of
there. No special installation needed, and keeping it up-to-date is
easy.
But it assumes a Python version I don't and can't have.
Carlos E.R. wrote:
I don't bother running the distro package (Debian, in my case). I just
make a local copy of the Git source repo, and run it straight out of
there. No special installation needed, and keeping it up-to-date is
easy.
But it assumes a Python version I don't and can't have.
You added Node.js and you ran the tests for the OP, so you did all you
could, since you'd need to change your OS to go any further, which isn't
reasonable to ask of you.
I, for one, appreciate the efforts you invested to help the OP, where I
think we've 100% gotten to the bottom of what the problem & solution is.
Thanks for pitching in to help others!