Mirror of https://github.com/yt-dlp/yt-dlp, synced 2025-12-17 06:35:42 +07:00

Compare commits

796 Commits

Author SHA1 Message Date
github-actions[bot]
1a6ac547ea Release 2024.07.08
Created by: bashonly

:ci skip all :ci run dl
2024-07-08 22:19:18 +00:00
bashonly
4b50b292cc [ie/soundcloud] Fix rate-limit handling (#10389)
Authored by: bashonly
2024-07-08 22:09:08 +00:00
bashonly
297b0a3792 [ie/youtube] Fix JS n function name extraction (#10390)
Fixes nsig decoding for player b22ef6e7

Closes #10391
Authored by: bashonly, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-07-08 22:04:48 +00:00
Simon Sawicki
6c056ea7ae [jsinterp] Implement Function.prototype resolving for call and apply (#10392)
Authored by: Grub4K
2024-07-08 23:46:26 +02:00
github-actions[bot]
39bc699d2e Release 2024.07.07
Created by: bashonly

:ci skip all :ci run dl
2024-07-07 21:35:02 +00:00
bashonly
b337d2989c [cleanup] Misc (#10383)
Authored by: bashonly
2024-07-07 21:23:40 +00:00
Hardik Bhimani
f0f867f008 [ie/jiosaavn:playlist] Support featured playlists (#10382)
Closes #10369
Authored by: harbhim
2024-07-07 21:08:25 +00:00
DinhHuy2010
987a1f94c2 [ie/vtv] Add extractors (#10173)
Authored by: DinhHuy2010
2024-07-07 21:59:42 +02:00
sepro
4cdc976bd8 [ie/yle_areena] Fix metadata extraction (#10380)
Authored by: seproDev
2024-07-07 21:57:18 +02:00
Simon Sawicki
0d174e8bed [ie/yle_areena] Fix subtitle extraction (#10379)
Authored by: Grub4K
2024-07-07 21:21:00 +02:00
Dong Heon Hee
4862a29854 [ie/chzzk] Extract with API v3 (#10363)
Authored by: hui1601
2024-07-06 03:32:08 +00:00
bashonly
2469119490 [core] Address gaps in allowed extensions (#10362)
Adds some extensions missing in 5ce582448e

Closes #10360, Closes #10365
Authored by: bashonly
2024-07-05 23:17:47 +00:00
Sean Ellingham
00766ece0c [ie/vidyard] Add extractor (#10155)
Closes #4618
Authored by: exterrestris
2024-07-05 23:02:35 +00:00
middlingphys
2a1a1b8e67 [ie/abematv] Extract availability (#10348)
Authored by: middlingphys
2024-07-05 22:31:16 +00:00
bashonly
c1c9bb4adb [ie/vimeo] Fix password-protected video extraction (#10341)
Closes #6603
Authored by: bashonly
2024-07-05 18:32:53 +00:00
Thomas Gerbet
6075a029db [ie/douyutv] Do not use dangerous javascript source/URL (#10347)
Ref: https://sansec.io/research/polyfill-supply-chain-attack

Authored by: LeSuisse
2024-07-03 22:35:24 +00:00
bashonly
cc767e9490 [core] Fix --ignore-no-formats-error (#10345)
Fixes regression in 5ce582448e

Closes #10344
Authored by: Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2024-07-03 16:46:01 +00:00
github-actions[bot]
d28aa87e21 Release 2024.07.02
Created by: bashonly

:ci skip all :ci run dl
2024-07-02 23:13:48 +00:00
bashonly
93d33cb29a [cleanup] Misc (#10330)
Authored by: bashonly
2024-07-02 23:03:08 +00:00
Mozi
7799e51895 [ie/zaiko] Support JWT video URLs (#10130)
Closes #9798
Authored by: pzhlkj6612
2024-07-02 22:22:52 +00:00
Patryk Miś
7509791385 [ie/banbye] Fix extractor (#10332)
Closes #8584
Authored by: PatrykMis, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-07-02 21:51:07 +00:00
DrakoCpp
6403530e2d [ie/murrtube] Fix extractor (#9249)
Closes #7500
Authored by: DrakoCpp
2024-07-02 21:49:09 +00:00
bashonly
d502f4c6d9 [pp/embedthumbnail] Fix embedding with mutagen (#10337)
Fixes regression in f2a4ea1794

Closes #10335
Authored by: bashonly
2024-07-02 21:24:17 +00:00
bashonly
773bbb1815 [core] Fix --compat-opt allow-unsafe-ext (#10336)
Fixes bug in 5ce582448e

Authored by: bashonly, rdamas

Co-authored-by: Robert Damas <robert.damas@byom.de>
2024-07-02 21:17:06 +00:00
github-actions[bot]
cd68258225 Release 2024.07.01
Created by: Grub4K

:ci skip all :ci run dl
2024-07-01 23:01:05 +00:00
Simon Sawicki
5ce582448e [core] Disallow unsafe extensions (CVE-2024-38519)
Ref: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-79w7-vh3h-8g4j

Authored by: Grub4K
2024-07-02 00:58:40 +02:00
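
A hypothetical sketch of the mitigation above — validating a download's file extension against an allowlist before anything is written to disk (the real allowlist and error type live in yt-dlp core):

```python
# Illustrative only: the extension set and error handling are assumptions,
# not yt-dlp's actual implementation of the CVE-2024-38519 fix.
SAFE_EXTENSIONS = {'mp4', 'mkv', 'webm', 'm4a', 'mp3', 'opus', 'srt', 'vtt', 'jpg', 'png'}

def check_extension(ext: str) -> str:
    # An attacker-controlled extension such as "bat" or "py" could otherwise
    # drop executable files into the download directory (GHSA-79w7-vh3h-8g4j)
    if ext.lower() not in SAFE_EXTENSIONS:
        raise ValueError(f'unsafe file extension: {ext!r}')
    return ext
```
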
bashonly
6aaf96a3d6 [cleanup] Misc (#10075)
Closes #10303
Authored by: bashonly, seproDev, jucor, c-basalt

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
Co-authored-by: Julien Cornebise <julien@cornebise.com>
Co-authored-by: c-basalt <117849907+c-basalt@users.noreply.github.com>
2024-07-01 22:51:27 +00:00
bashonly
d4b99a2333 [ie/vimeo] Support browser impersonation (#10327)
Closes #10325
Authored by: bashonly
2024-07-01 20:55:18 +00:00
c-basalt
1d6ab17d07 [ie/bilibili] Support legacy formats (#9117)
Adds extractor-arg `prefer_multi_flv`

Closes #6438, Closes #8525, Closes #8553, Closes #10243
Authored by: c-basalt, GD-Slime

Co-authored-by: GD-Slime <82302542+GD-Slime@users.noreply.github.com>
2024-07-01 20:22:49 +00:00
c-basalt
9200bc70c9 [ie/microsoftembed] Add extractors for dev materials (#9177)
Closes #7112
Authored by: c-basalt
2024-07-01 19:11:33 +02:00
DmitryScaletta
aefede2556 [ie/nuum] Fix formats extraction (#10316)
Pass referer header to m3u8 requests

Closes #10310
Authored by: DmitryScaletta
2024-07-01 17:01:51 +00:00
c-basalt
4f5d7be3c5 [ie/qqmusic] Fix extractors (#9768)
Closes #9336
Authored by: c-basalt
2024-07-01 16:54:15 +00:00
Thomas R
1d369b4096 [ie/graspop] Add extractor (#10268)
Authored by: Niluge-KiWi
2024-07-01 16:49:19 +00:00
bashonly
55e3e6fd21 Add playlist_channel and playlist_channel_id fields (#10266)
Authored by: bashonly
2024-07-01 16:48:11 +00:00
Alexander Pauls
36e8dd8325 [ie/pokergo] Make metadata extraction non-fatal (#10319)
Authored by: axpauls
2024-07-01 18:30:07 +02:00
sepro
e6a22834df [ie/orf:on] Allow downloading of video in segments (#10314)
Closes #10142
Authored by: seproDev
2024-07-01 12:43:52 +02:00
A. Sertaç Akkaya
b8da8a98f8 [ie/laracasts] Add extractors (#10055)
Authored by: ASertacAkkaya, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-07-01 12:14:44 +02:00
Marius Gedminas
24f3097ea9 [ie/youtube] Suppress "Unavailable videos are hidden" warning (#10159)
Authored by: mgedmin
2024-06-30 22:17:17 +00:00
Dong Heon Hee
054a3ba7d1 [ie/afreecatv:catchstory] Add extractor (#10235)
Closes #10112
Authored by: hui1601
2024-06-30 22:00:33 +00:00
Dong Heon Hee
e8352ad659 [ie/afreecatv] Support browser impersonation (#10174)
Closes #8187
Authored by: hui1601
2024-06-30 21:55:21 +00:00
tippfehlr
2a4f2e82db [ie/digitalconcerthall] Rework extractor (#10152)
Authored by: tippfehlr, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-06-30 22:48:54 +02:00
Varun Chopra
61714f4695 [ie/jiocinema:series] Fix extraction (#10139)
Authored by: varunchopra
2024-06-30 20:29:01 +00:00
bashonly
61edf57f8f [ie/mlbtv] Fix extraction (#10296)
Closes #10275
Authored by: bashonly
2024-06-29 15:43:55 +00:00
sepro
5b1a2aa978 [ie/bitchute] Fix extractors (#10301)
Closes #10293
Authored by: seproDev
2024-06-29 17:32:41 +02:00
sepro
7814c50948 [cleanup] Bump ruff to 0.5.x (#10282)
Authored by: seproDev
2024-06-29 17:30:57 +02:00
bashonly
54a63e80af [test:download] Raise on network errors (#10283)
Authored by: bashonly, seproDev
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-06-28 00:23:44 +00:00
hafeoz
7a03f88c40 [ie/neteasemusic] Extract more formats from new API (#10258)
Closes #9196, Closes #10239
Authored by: hafeoz
2024-06-27 16:17:32 +00:00
Simon Sawicki
f2a4ea1794 [pp/embedthumbnail] Fix postprocessor (#10248)
* [compat] Improve `imghdr.what` detection
* [pp/embedthumbnail] Improve imghdr fail message
* [pp/embedthumbnail] Fix AtomicParsley error handling

Authored by: Grub4K
2024-06-27 16:12:19 +02:00
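
The improved `imghdr.what` detection above comes down to sniffing magic bytes. A self-contained sketch of the idea (not yt-dlp's actual code):

```python
def image_type(data: bytes) -> str | None:
    # Identify common thumbnail formats by their leading magic bytes
    if data.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'png'
    if data.startswith(b'\xff\xd8\xff'):
        return 'jpeg'
    if data.startswith((b'GIF87a', b'GIF89a')):
        return 'gif'
    if data.startswith(b'RIFF') and data[8:12] == b'WEBP':
        return 'webp'
    return None
```
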
bashonly
0953209a85 [ie/mediasite] Fix extraction (#10273)
Fix regression in add96eb9f8

Closes #10270
Authored by: bashonly
2024-06-26 23:57:34 +00:00
Cæsim
b758877afa [ie/cloudycdn] Fix formats extraction (#10271)
Authored by: Caesim404
2024-06-26 23:56:44 +00:00
megumin
f3411af12e [ie/matchtv] Fix extractor (#10190)
Authored by: megumintyan
2024-06-25 00:49:09 +02:00
Peisen Wang
a8520244b8 [cookies] Fix --cookies-from-browser DE detection on Linux (#10237)
Align with chromium source by parsing every part of `XDG_CURRENT_DESKTOP`

Authored by: peisenwang
2024-06-22 23:25:16 +00:00
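
A minimal sketch of the detection idea above — `XDG_CURRENT_DESKTOP` may hold a colon-separated list (e.g. `ubuntu:GNOME`), so every part is checked rather than only the first; the function name and DE set here are illustrative:

```python
import os

def detect_desktop_environment() -> str | None:
    known = {'GNOME', 'KDE', 'XFCE', 'CINNAMON', 'LXQT'}  # illustrative subset
    # Chromium (and, per the fix above, yt-dlp) inspects every part
    for part in os.environ.get('XDG_CURRENT_DESKTOP', '').split(':'):
        part = part.strip().upper()
        if part in known:
            return part
    return None
```
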
bashonly
8ca1d57ed0 [ie/facebook:reel] Fix extraction (#10232)
Closes #10227
Authored by: bashonly
2024-06-21 23:21:45 +00:00
bashonly
800ec085cc [ie/youtube] Skip formats if nsig decoding fails (#10223)
Ref: https://github.com/ytdl-org/youtube-dl/issues/32815

Authored by: bashonly
2024-06-21 23:19:59 +00:00
bashonly
96472d72f2 [ie/tiktok] Fix API extraction (#10216)
Closes #10213
Authored by: bashonly
2024-06-21 22:57:29 +00:00
bashonly
7aa322c02c [ie/cloudflarestream] Fix _VALID_URL and embed extraction (#10215)
Authored by: bashonly
2024-06-20 22:05:25 +00:00
Haxy
9bd8501993 [ie/youtube] Extract all formats from multi-language m3u8s (#9875)
Authored by: clienthax, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-06-20 21:54:53 +00:00
bashonly
90c3721a32 [ie/brightcove] Upgrade requests to HTTPS (#10202)
Closes #10199
Authored by: bashonly
2024-06-17 16:37:12 +00:00
bashonly
d4b52ce3fc [ie/podbayfm] Fix extraction (#10195)
Authored by: bashonly, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-06-17 00:05:46 +00:00
bashonly
d6c2c2bc84 [ie/sproutvideo] Add extractors (#10098)
Closes #2933, Closes #8942
Authored by: bashonly, TheZ3ro

Co-authored-by: thezero <io@thezero.org>
2024-06-17 00:01:46 +00:00
bashonly
5dbac313ae [ie/generic] Add key_query extractor-arg
Authored by: bashonly
2024-06-15 18:38:02 -05:00
bashonly
ca8885edd9 [fd/hls] Apply extra_param_to_key_url from info dict
Authored by: bashonly
2024-06-15 18:38:02 -05:00
c-basalt
4093eb1fcc [ie/khanacademy] Fix extractors (#9136)
Closes #8775
Authored by: c-basalt
2024-06-15 21:51:27 +02:00
bashonly
a0d9967f68 [ie/youtube:tab] Fix channel metadata extraction (#10071)
Closes #9893, Closes #10090
Authored by: bashonly, shoxie007

Co-authored-by: shoxie007 <74592022+shoxie007@users.noreply.github.com>
2024-06-13 23:22:30 +00:00
bashonly
ea88129784 [ie/tiktok] Detect and raise when login is required (#10124)
Authored by: bashonly
2024-06-13 23:16:43 +00:00
garret1317
b8e2a5e0e1 [ie/NHKRadiru] Fix extractor (#10106)
Closes #10105
Authored by: garret1317
2024-06-13 23:08:40 +00:00
bashonly
e53e56b735 [ie/soundcloud] Fix download format extraction (#10125)
Authored by: bashonly
2024-06-13 23:01:19 +00:00
JSubelj
92a1c4abae [ie/rtvslo.si:show] Add extractor (#8418)
Authored by: JSubelj, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-06-14 00:51:12 +02:00
bashonly
3690c2f598 [ie/francetv] Detect and raise errors for DRM (#10165)
Closes #10163
Authored by: bashonly
2024-06-13 22:44:20 +00:00
bashonly
081708d607 [ie/francetv] Fix extractor (#10177)
Closes #10175
Authored by: bashonly
2024-06-13 22:31:13 +00:00
bashonly
d7d861811c [ie/tubitv:series] Fix extractor (#10116)
Closes #8563
Authored by: bashonly
2024-06-13 21:59:17 +00:00
bashonly
46c1b7cfec [build] Cache dependencies for macos job (#10088)
Authored by: bashonly
2024-06-13 21:13:08 +00:00
sepro
add96eb9f8 [cleanup] Add more ruff rules (#10149)
Authored by: seproDev

Reviewed-by: bashonly <88596187+bashonly@users.noreply.github.com>
Reviewed-by: Simon Sawicki <contact@grub4k.xyz>
2024-06-12 01:09:58 +02:00
bashonly
db50f19d76 [rh:requests] Bump minimum requests version to 2.32.2 (#10079)
Closes #10078
Authored by: bashonly
2024-06-01 18:57:23 +00:00
bashonly
2e5a47da40 [ie/PatreonCampaign] Fix campaign_id extraction (#10070)
Closes #10013
Authored by: bashonly
2024-05-30 23:04:27 +00:00
bashonly
5fdd13006a [build] Bump Pyinstaller to >=6.7.0 for all builds (#10069)
Ref: https://github.com/pyinstaller/pyinstaller/issues/8554

Authored by: bashonly, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-30 22:34:02 +00:00
bashonly
03334d639d [build] Use macos-12 image for yt-dlp_macos (#10063)
Ref: https://github.blog/changelog/2024-05-20-actions-upcoming-changes-to-github-hosted-macos-runners/

Authored by: bashonly
2024-05-30 18:53:37 +00:00
sepro
8b46ad4d8b [ie/orf:on] Support segmented episodes (#10053)
Closes #9930
Authored by: seproDev
2024-05-29 23:16:57 +02:00
Ben Galliart
bef9a9e536 [ie/TubiTv] Fix extractor (#9975)
Closes #9937
Authored by: chilinux
2024-05-29 04:25:05 +00:00
github-actions[bot]
111b61ddef Release 2024.05.27
Created by: bashonly

:ci skip all :ci run dl
2024-05-27 22:35:55 +00:00
trueauracoral
12b248ce60 [ie/peertube] Support livestreams (#10044)
Closes #2055
Authored by: trueauracoral, bashonly
2024-05-27 22:24:01 +00:00
bashonly
5e3e19c93c [cleanup] Misc (#10043)
Authored by: bashonly
2024-05-27 21:46:07 +00:00
bashonly
c53c2e40fd [ie/tiktok:user] Fix extraction loop (#10035)
Closes #10033
Authored by: bashonly
2024-05-27 04:22:46 +00:00
sepro
ae2194e1dd [ie/Piksel] Update domain (#9223)
Authored by: seproDev
2024-05-27 01:24:03 +02:00
sepro
26603d0b34 [ie] Fix parsing of base URL in SMIL manifest (#9225)
Authored by: seproDev
2024-05-27 00:06:34 +02:00
github-actions[bot]
ed274b60b1 Release 2024.05.26
Created by: bashonly

:ci skip all :ci run dl
2024-05-26 21:55:43 +00:00
bashonly
ae2af1104f [cleanup] Misc
Authored by: bashonly, seproDev, Grub4K
2024-05-26 16:52:42 -05:00
Simon Sawicki
5c019f6328 [misc] Cleanup (#9765)
Closes #9763
Authored by: bashonly, seproDev, Grub4K

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-26 21:37:49 +00:00
ocococococ
5a2eebc767 [ie/LCI] Fix extractor (#10025)
Authored by: ocococococ
2024-05-26 23:33:15 +02:00
imanoreotwe
119d41f270 [ie/tiktok:collection] Add extractor (#9986)
Closes #9984
Authored by: imanoreotwe, bashonly
2024-05-26 21:26:30 +00:00
bashonly
347f13dd9b [ie/tiktok:user] Fix extractor (#9661)
Closes #3776, Closes #4996
Authored by: bashonly
2024-05-26 21:16:36 +00:00
coletdjnz
96a134dea6 [ie/youtube] Extract upload timestamp if available (#9856)
Closes #4962, Closes #9829
Authored by: coletdjnz
2024-05-26 21:13:12 +00:00
Simon Sawicki
a4da9db87b Update to ytdl-commit-a08f2b7 (#10012)
[ie] Rework JWPlayer extraction
- f66372403f
[ie/gbnews] Add extractor
- 70f230f9cf
[ie/caffeinetv] Add extractor
- 40bd5c1815
[ie/youporn] Improve extraction
- 0b2ce3685e
[ie/youporn] Add playlist extractors
- 668332b973

Closes #9188, Closes #9523
Authored by: Grub4K, bashonly
2024-05-26 21:09:53 +00:00
Simon Sawicki
e897bd8292 [misc] Add hatch, ruff, pre-commit and improve dev docs (#7409)
Authored by: bashonly, seproDev, Grub4K

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-26 21:27:21 +02:00
HobbyistDev
a2e9031605 [ie/XiaoHongShu] Add extractor (#9646)
Closes #9529
Authored by: HobbyistDev
2024-05-26 01:54:17 +02:00
Finn R. Gärtner
3ba8de62d6 [ie/Piapro] Fix extractor (#9311)
Closes #9884
Authored by: FinnRG, seproDev
2024-05-26 01:40:35 +02:00
bashonly
0d067e77c3 [ie/dangalplay] Add extractors (#10021)
Closes #8258
Authored by: bashonly
2024-05-25 23:16:17 +00:00
bashonly
1463945ae5 [ie/jiocinema] Add extractors (#10026)
Closes #5563, Closes #7759, Closes #8679, Closes #9349
Authored by: bashonly
2024-05-25 23:03:05 +00:00
bashonly
c92e4e625e [ie/tele5] Overhaul extractor (#10024)
Closes #3051, Closes #7955, Closes #8501, Closes #9792
Authored by: bashonly
2024-05-25 23:00:33 +00:00
bashonly
90d2da311b [ie/DiscoveryPlus] Fix dmax.de and related extractors (#10020)
Closes #7530
Authored by: bashonly
2024-05-25 15:01:40 +00:00
sepro
3779f2a307 [ie/ORFTVthek] Remove extractor (#10011)
Authored by: seproDev
2024-05-23 22:18:20 +02:00
c-basalt
63b569bc5e [ie/taptap] Add extractors (#9776)
Closes #9643
Authored by: c-basalt
2024-05-23 20:15:56 +02:00
kclauhk
82f4f4444e [ie/reddit] Fix subtitles extraction (#10006)
Authored by: kclauhk
2024-05-23 16:26:24 +00:00
Mozi
eead3bbc01 [ie/brilliantpala] Fix login (#9788)
Closes #9771
Authored by: pzhlkj6612
2024-05-23 16:25:16 +00:00
BohwaZ
5bbfdb7c99 [ie/HearThisAt] Improve _VALID_URL (#9949)
Closes #9755
Authored by: bohwaz, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-23 06:30:21 +02:00
TuxCoder
0dd53faeca [ie/orf:on] Improve extraction (#9677)
Closes #9652
Authored by: TuxCoder
2024-05-23 06:25:16 +02:00
six
be7db1a5a8 [ie/NTSLive] Add extractor (#9641)
Closes #9640
Authored by: lostfictions
2024-05-23 06:13:00 +02:00
HobbyistDev
65e709d235 [ie/GodResource] Add extractor (#9629)
Closes #9551
Authored by: HobbyistDev
2024-05-23 06:09:21 +02:00
Amir Y. Perehodnik
06cb063839 [ie/Instagram] Support /reels/ URLs (#9539)
Closes #6689
Authored by: amir16yp
2024-05-23 06:07:20 +02:00
panatexxa
296df0da1d [ie/Moviepilot] Fix extractor (#9366)
Authored by: panatexxa
2024-05-23 06:03:55 +02:00
vtexier
7b5674949f [ie/ArteTV] Label forced subtitles (#9945)
Authored by: vtexier
2024-05-22 23:09:58 +00:00
bashonly
f2816634e3 [ie/crunchyroll] Fix stream extraction (#10005)
Closes #9994
Authored by: bashonly
2024-05-22 22:25:07 +00:00
bashonly
beaf832c7a [ie/soundcloud] Add formats extractor-arg (#10004)
Authored by: bashonly
2024-05-22 22:20:29 +00:00
bashonly
eef1e9f44f [ie/tiktok] Fix subtitles extraction (#9961)
Authored by: bashonly
2024-05-22 22:17:10 +00:00
bashonly
78c57cc0e0 [build] macos job requires setuptools<70 (#9993)
Authored by: bashonly
2024-05-22 14:30:25 +00:00
Simon Sawicki
3f7999533e [rh:requests] Patch support for requests 2.32.2+ (#9992)
Authored by: Grub4K
2024-05-22 16:22:25 +02:00
bashonly
4ccd73fea0 [ie/tiktok] Extract all web formats (#9960)
Closes #9506
Authored by: bashonly
2024-05-20 23:11:24 +00:00
bashonly
3584b8390b [ie/tiktok] Add device_id extractor-arg (#9951)
Authored by: bashonly
2024-05-20 23:09:28 +00:00
bashonly
6e36d17f40 [build] Exclude requests from py2exe (#9982)
Authored by: bashonly
2024-05-20 23:01:17 +00:00
coletdjnz
c36513f1be [rh:requests] Update to requests 2.32.0 (#9980)
Authored by: coletdjnz
2024-05-20 21:44:41 +00:00
bashonly
3e35aa32c7 [ie/twitter] Fix auth for x.com migration (#9952)
Authored by: bashonly
2024-05-18 18:33:30 +00:00
coletdjnz
53b4d44f55 [test] Fix connect timeout test (#9906)
Fixes https://github.com/yt-dlp/yt-dlp/issues/9659

Authored by: coletdjnz
2024-05-18 19:12:21 +12:00
bashonly
c999bac02c Bugfix for 61b17437dc
Authored by: bashonly
2024-05-17 23:44:11 -05:00
coletdjnz
12d8ea8246 [ie/youtube] Remove android from default clients (#9553)
Closes #9554
Authored by: coletdjnz, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-05-17 16:03:02 +00:00
Justin Keogh
8e15177b41 [ie/youtube] Fix comments extraction (#9775)
Closes #9358
Authored by: jakeogh, minamotorin, shoxie007, bbilly1

Co-authored-by: minamotorin <76122224+minamotorin@users.noreply.github.com>
Co-authored-by: shoxie007 <74592022+shoxie007@users.noreply.github.com>
Co-authored-by: Simon <35427372+bbilly1@users.noreply.github.com>
2024-05-17 14:37:30 +00:00
Roeniss Moon
dd9ad97b1f [cookies] Add --cookies-from-browser support for Whale (#9649)
Closes #9307
Authored by: roeniss
2024-05-17 14:33:12 +00:00
minamotorin
61b17437dc [ie] Add POST data hash to --write-pages filenames (#9879)
Closes #9773
Authored by: minamotorin
2024-05-17 14:28:36 +00:00
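
Hashing the POST body (the change in 61b17437dc) keeps `--write-pages` dump filenames unique when one URL is requested with different payloads. A hypothetical sketch; the helper name and hash choice are assumptions:

```python
import hashlib

def dump_filename(video_id: str, url_hash: str, data: bytes | None = None) -> str:
    # Requests to the same URL with different POST bodies would otherwise
    # overwrite each other's dump file; append a short body hash
    suffix = f'_{hashlib.md5(data).hexdigest()[:8]}' if data else ''
    return f'{video_id}_{url_hash}{suffix}.dump'
```
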
kylegustavo
7975ddf245 [ie/bbc] Fix and extend extraction (#9705)
Closes #9701
Authored by: kylegustavo, dirkf, pukkandan
2024-05-17 06:20:13 +00:00
Podiumnoche
6d8a53d870 [ie/cda] Fix age-gated web extraction (#9939)
Closes #5980, Closes #6638
Authored by: Podiumnoche, Szpachlarz, dirkf, emqi
2024-05-16 22:41:34 +00:00
bashonly
4813173e45 [ie/twitter] Support x.com URLs (#9926)
Closes #9923
Authored by: bashonly
2024-05-16 22:36:56 +00:00
bashonly
41ba4a808b [ie/tiktok] Extract via mobile API only if app_info is passed (#9938)
Partially addresses #9506
Authored by: bashonly
2024-05-16 22:27:09 +00:00
Mozi
351dc0bc33 [ie/eplus] Handle URLs without videos (#9855)
Authored by: pzhlkj6612
2024-05-13 23:21:11 +00:00
feederbox826
518c1afc15 [ie/pornhub] Fix login by email address (#9914)
Closes #9717
Authored by: feederbox826
2024-05-13 23:18:14 +00:00
WyohKnott
85ec2a337a [ie/googledrive] Fix formats extraction (#9908)
Closes #8281
Authored by: WyohKnott
2024-05-12 23:05:47 +00:00
Jake Finley
b207d26f83 [ie/xvideos:quickies] Fix extractor (#9834)
Closes #6356
Authored by: JakeFinley96
2024-05-12 20:42:33 +00:00
sepro
01395a3434 [cleanup] Remove questionable extractors (#9911)
Closes #6279, Closes #6799
Authored by: seproDev
2024-05-12 22:12:11 +02:00
Haxy
cf212d0a33 [ie/youtube] Add mediaconnect client (#9546)
Authored by: clienthax
2024-05-12 16:03:36 +00:00
alard
6db96268c5 [ie/TV5Monde] Fix extractor (#9143)
Closes #9118
Authored by: alard, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-11 23:58:15 +02:00
Eric Lam
800a43983e [ie/EuroParlWebstream] Support new URL format (#9647)
Authored by: voidful, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-05-11 23:50:59 +02:00
DaPotato69
7e4259dff0 Better warning when requested subs format not found (#9873)
Closes #9760
Authored by: DaPotato69
2024-05-11 21:11:40 +00:00
Stefan Lobbenmeier
f1f158976e [cookies] Get chrome session cookies with --cookies-from-browser (#9747)
Partially addresses #5534
Authored by: StefanLobbenmeier
2024-05-11 17:25:39 +00:00
llamasblade
31b417e1d1 [ie/hytale] Use CloudflareStreamIE explicitly (#9672)
Authored by: llamasblade
2024-05-11 17:01:56 +00:00
Hugo Azevedo
fc2879ecb0 [ie/alura] Fix extractor (#9658)
Authored by: hugohaa
2024-05-11 16:54:29 +00:00
rrgomes
0a1a8e3005 [ie/nfb] Fix extractors (#9650)
Authored by: rrgomes
2024-05-11 16:38:41 +00:00
c-basalt
4cc99d7b6c [ie/BilibiliSpaceVideo] Fix extraction (#9905)
Closes #9892
Authored by: c-basalt
2024-05-10 22:34:53 +00:00
coletdjnz
3c7a287e28 [test] Add HTTP proxy tests (#9578)
Also fixes HTTPS proxies for curl_cffi

Authored by: coletdjnz
2024-05-11 10:06:58 +12:00
sepro
98d71d8c5e [ie/commonmistakes] Raise error on blob URLs (#9897)
Authored by: seproDev
2024-05-10 19:20:55 +02:00
kclauhk
00a9f2e1f7 [ie/canalalpha] Fix extractor (#9675)
Authored by: kclauhk
2024-05-10 19:19:57 +02:00
Mozi
73f12119b5 [ie/netease:program] Improve --no-playlist message (#9488)
Authored by: pzhlkj6612
2024-05-10 19:13:35 +02:00
Alexandre Huot
6b54cccdcb [ie/Qub] Fix extractor (#7019)
Closes #4989
Authored by: alexhuot1, dirkf
2024-05-08 22:10:06 +00:00
src-tinkerer
c4b87dd885 [ie/ZenYandex] Fix extractor (#9813)
Closes #9803
Authored by: src-tinkerer
2024-05-08 21:27:30 +00:00
fireattack
2338827072 [ie/bilibili] Fix --geo-verification-proxy support (#9817)
Closes #9797
Authored by: fireattack
2024-05-08 21:24:44 +00:00
fireattack
06d52c8731 [ie/BilibiliSpaceVideo] Better error message (#9839)
Closes #9528
Authored by: fireattack
2024-05-08 21:09:38 +00:00
sepro
df5c9e733a [ie/vk] Improve format extraction (#9885)
Closes #5675
Authored by: seproDev
2024-05-08 23:02:22 +02:00
Mozi
b38018b781 [ie/mixch] Extract comments (#9860)
Authored by: pzhlkj6612
2024-05-08 20:51:16 +00:00
Rasmus Antons
145dc6f656 [ie/boosty] Add cookies support (#9522)
Closes #9401
Authored by: RasmusAntons
2024-05-08 20:16:32 +00:00
bashonly
5904853ae5 [ie/crunchyroll] Support browser impersonation (#9857)
Closes #7442
Authored by: bashonly
2024-05-05 23:15:32 +00:00
Chris Caruso
c8bf48f3a8 [ie/cbc.ca:player] Improve _VALID_URL (#9866)
Closes #9825
Authored by: carusocr
2024-05-05 23:02:24 +00:00
The-MAGI
351368cb9a [ie/youporn] Fix extractor (#8827)
Closes #7967
Authored by: The-MAGI
2024-05-05 22:57:38 +00:00
sepro
96da952504 [core] Warn if lack of ffmpeg alters format selection (#9805)
Authored by: seproDev, pukkandan
2024-05-05 00:44:08 +02:00
bashonly
bec9a59e8e [networking] Add extensions attribute to Response (#9756)
CurlCFFIRH now provides an `impersonate` field in its responses' extensions

Authored by: bashonly
2024-05-04 22:19:42 +00:00
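
A minimal sketch of consuming the new attribute, assuming only what the message above states (a dict-like `extensions` on responses, with the curl_cffi handler recording its target under `impersonate`):

```python
def impersonate_target_used(response):
    # `extensions` is a dict attached to networking responses (bec9a59e8e);
    # CurlCFFIRH stores the impersonation target it applied there
    return response.extensions.get('impersonate')
```
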
bashonly
036e0d92c6 [ie/patreon] Extract multiple embeds (#9850)
Closes #9848
Authored by: bashonly
2024-05-04 22:11:11 +00:00
bashonly
cb2fb4a643 [ie/crunchyroll] Always make metadata available (#9772)
Closes #9750
Authored by: bashonly
2024-05-04 16:15:44 +00:00
bashonly
231c2eacc4 [ie/soundcloud] Extract genres (#9821)
Authored by: bashonly
2024-05-04 16:14:36 +00:00
bashonly
c4853655cb [ie/wrestleuniverse] Avoid partial stream formats (#9800)
Authored by: bashonly
2024-05-04 16:07:15 +00:00
Simon Sawicki
ac817bc83e [build] Migrate linux_exe to static musl builds (#9811)
Authored by: Grub4K, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-04-28 22:19:25 +00:00
bashonly
1a366403d9 [build] Run macos_legacy job on macos-12 (#9804)
`macos-latest` has been bumped to `macos-14-arm64` which breaks the builds

Authored by: bashonly
2024-04-28 15:35:17 +00:00
Simon Sawicki
7e26bd53f9 [core/windows] Fix tests for sys.executable with spaces (Fix for 64766459e3)
Authored by: Grub4K
2024-04-28 15:47:55 +02:00
Simon Sawicki
64766459e3 [core/windows] Improve shell quoting and tests (#9802)
Authored by: Grub4K
2024-04-27 10:37:26 +02:00
bashonly
89f535e265 [ci] Fix curl-cffi installation (Bugfix for 02483bea1c)
Authored by: bashonly
2024-04-22 20:36:01 +00:00
bashonly
ff38a011d5 [ie/crunchyroll] Fix auth and remove cookies support (#9749)
Closes #9745
Authored by: bashonly
2024-04-21 22:41:40 +00:00
bashonly
8056a3026e [ie/theatercomplextown] Fix extractors (#9754)
Authored by: bashonly
2024-04-21 16:05:42 +00:00
Simon Sawicki
3ee1194288 [ie] Make _search_nextjs_data non fatal (#8937)
Authored by: Grub4K
2024-04-21 13:40:38 +02:00
bashonly
e3b42d8b1b [ie/facebook] Fix DASH formats extraction (#9734)
Closes #9720
Authored by: bashonly
2024-04-20 10:23:12 +00:00
bashonly
c9ce57d9bf [ie/patreon] Fix Vimeo embed extraction (#9712)
Fixes regression in 36b240f9a7

Closes #9709
Authored by: bashonly
2024-04-18 23:18:56 +00:00
bashonly
02483bea1c [build] Normalize curl_cffi group to curl-cffi (#9698)
Closes #9682
Authored by: bashonly
2024-04-18 23:11:12 +00:00
bashonly
315b354429 [ie/afreecatv:live] Add cdn extractor-arg (#9666)
Closes #6497
Authored by: bashonly
2024-04-13 16:40:53 +00:00
bashonly
0c21c53885 [ie/jiosaavn] Extract via API and fix playlists (#9656)
Closes #9648
Authored by: bashonly
2024-04-13 16:08:25 +00:00
github-actions[bot]
168e72dcd3 Release 2024.04.09
Created by: Grub4K

:ci skip all :ci run dl
2024-04-09 17:03:28 +00:00
Simon Sawicki
ff07792676 [core] Prevent RCE when using --exec with %q (CVE-2024-22423)
The shell escape function now properly escapes `%`, `\\` and `\n`. `utils.Popen` as well as `%q` output template expansion have been patched accordingly.

Prior to this fix using `--exec` together with `%q` when on Windows could cause remote code to execute. See https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-hjq6-52gw-2g7p for more details.

Authored by: Grub4K
2024-04-09 18:36:13 +02:00
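
A simplified illustration of the escaping rule described above — the three characters called out in the advisory are neutralized before interpolated values reach `cmd`; the exact replacement sequences in the patched `utils.Popen` differ:

```python
def cmd_escape(value: str) -> str:
    # Sketch only: on Windows, an unescaped '%' lets cmd expand environment
    # variables inside --exec arguments, enabling command injection, while
    # backslashes and newlines can break out of the surrounding quoting
    return (value.replace('\\', '\\\\')
                 .replace('%', '%%')
                 .replace('\n', ' '))
```
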
bashonly
216f6a3cb5 [cleanup] Misc (#9426)
Authored by: bashonly, pukkandan
2024-04-09 16:12:26 +00:00
bashonly
b19ae095fd [build] Do not include curl_cffi in macos_legacy (#9653)
Authored by: bashonly
2024-04-08 23:20:58 +00:00
Simon Sawicki
9590cc6b47 Add new option --progress-delta (#9082)
Authored by: Grub4K
2024-04-08 22:47:38 +02:00
luiso1979
79a451e576 [networking] Respect SSLKEYLOGFILE environment variable (#9543)
Authored by: luiso1979
2024-04-08 21:53:30 +02:00
Leo Heitmann Ruiz
df0e138fc0 [docs] Various manpage fixes
Authored by: leoheitmannruiz
2024-04-08 21:24:58 +02:00
bashonly
2e94602f24 [ie/jiosaavn] Support playlists (#9622)
Closes #9616
Authored by: bashonly
2024-04-07 20:55:46 +00:00
bashonly
4af9d5c2f6 [ie/nhk] Fix NHK World extractors (#9623)
Closes #9513
Authored by: bashonly
2024-04-07 16:59:38 +00:00
John Victor
36b240f9a7 [ie/patreon] Do not extract dead embed URLs (#9613)
Closes #8702
Authored by: johnvictorfs
2024-04-07 16:26:44 +00:00
bashonly
fc53ec13ff [ie/tiktok] Restore carrier_region API parameter (#9637)
Avoids some geo-blocks

Authored by: bashonly
2024-04-07 15:32:11 +00:00
Dmitry Meyer
2ab2651a4a [cookies] Add --cookies-from-browser support for Firefox Flatpak (#9619)
Authored by: un-def
2024-04-07 15:28:59 +00:00
bashonly
b15b0c1d21 [ie/vkplay] Fix _VALID_URL (#9636)
Closes #9635
Authored by: bashonly
2024-04-06 20:42:51 +00:00
bashonly
c8a61a9100 [ie/kick] Support browser impersonation (#9611)
Closes #6748
Authored by: bashonly
2024-04-06 17:42:32 +00:00
Mozi
f2fd449b46 [ie/joqrag] Fix live status detection (#9624)
Authored by: pzhlkj6612
2024-04-06 17:34:51 +00:00
Tomoka1
9415f1a5ef [ie/afreecatv] Overhaul extractor (#9566)
Closes #4592, Closes #8862, Closes #9544
Authored by: bashonly, Tomoka1

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-04-06 17:23:16 +00:00
bashonly
a48cc86d6f [ie/dropbox] Fix formats extraction (#9627)
Closes #9533
Authored by: bashonly
2024-04-06 17:19:44 +00:00
bytedream
954e57e405 [ie/crunchyroll] Fix extractor (#9615)
Authored by: bytedream
2024-04-06 12:53:20 +02:00
Dong Heon Hee
9073ae6458 [ie/afreecatv:live] Fix extractor (#9348)
Closes #4466, Closes #9345
Authored by: hui1601
2024-04-04 16:48:05 +00:00
Offert4324
4cd9e251b9 [ie/medici] Fix extractor (#9518)
Closes #8813
Authored by: Offert4324
2024-04-04 16:45:19 +00:00
bashonly
0ae16ceb18 [ie/jiosaavn] Extract artists (#9612)
Closes #9607
Authored by: bashonly
2024-04-03 23:23:04 +00:00
bashonly
443e206ec4 [ie/jiosaavn] Fix format extensions (#9609)
Authored by: bashonly
2024-04-03 23:21:28 +00:00
bashonly
4c3b7a0769 [ie/mixch] Fix extractor (#9608)
Closes #9536
Authored by: bashonly, nipotan
2024-04-03 22:53:42 +00:00
bashonly
16be117729 Add option --no-break-on-existing (#9610)
Authored by: bashonly
2024-04-03 22:51:41 +00:00
trainman261
b49d5ffc53 [ie/cbc.ca:player] Support new URL format (#9561)
Closes #9534
Authored by: trainman261
2024-04-03 19:11:13 +00:00
HobbyistDev
36baaa10e0 [ie/Radio1Be] Add extractor (#9122)
Closes #8707
Authored by: HobbyistDev
2024-04-03 18:51:14 +00:00
Kacper Michajłow
02f93ff51b [ie/twitch] Extract AV1 and HEVC formats (#9158)
Authored by: kasper93
2024-04-03 18:38:51 +00:00
Mozi
c59de48e2b [ie/mixch:archive] Fix extractor (#8761)
Closes #2373
Authored by: pzhlkj6612
2024-04-01 22:41:09 +00:00
Mozi
0284f1fee2 [ie/asobistage] Add extractor (#8735)
Authored by: pzhlkj6612
2024-04-01 22:29:14 +00:00
bashonly
e8032503b9 [build] Print SHA sums to GHA logs (#9582)
Authored by: bashonly
2024-04-01 17:02:25 +00:00
bashonly
97362712a1 [ie/soundcloud] Support cookies (#9586)
Closes #997
Authored by: bashonly
2024-04-01 16:58:48 +00:00
bashonly
246571ae1d [ie/soundcloud] Support retries for API rate-limit (#9585)
Authored by: bashonly
2024-04-01 16:21:46 +00:00
Simon Sawicki
32abfb00bd [utils] traverse_obj: Convenience improvements (#9577)
Add support for:
- `http.cookies.Morsel`
- Multi type filters (`{type, type}`)

Authored by: Grub4K
2024-04-01 02:12:03 +02:00
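
A small usage sketch of the multi-type filter, assuming the post-#9577 semantics (a set of types acts as an isinstance filter during traversal):

```python
from yt_dlp.utils import traverse_obj

data = {'views': 1204, 'rating': 4.5, 'title': 'clip'}
# Branch over all values, keeping only those matching one of the types;
# the str value is dropped (assumed semantics per #9577)
numbers = traverse_obj(data, (..., {int, float}))
print(numbers)  # [1204, 4.5]
```
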
pukkandan
c305a25c1b [cleanup] Standardize import datetime as dt (#8978)
2024-04-01 05:32:15 +05:30
pukkandan
e3a3ed8a98 [ie, cleanup] No from stdlib imports in extractors (#8978)
2024-04-01 05:31:09 +05:30
pukkandan
a25a424323 [ie/youtube] Calculate more accurate filesize
YouTube provides slightly different duration for each format.
Calculating file-size based on this duration instead of the
video duration gives more accurate results.

Ref: https://github.com/yt-dlp/yt-dlp/issues/1400#issuecomment-2007441207
2024-04-01 04:56:09 +05:30
sepro
86e3b82261 [core] Fix filesize_approx calculation (#9560)
Reverts 22e4dfacb6

Despite being documented as `Kbit/s`, the extractors/manifests were returning bitrates in SI units of kilobits/sec.

Authored by: seproDev, pukkandan
2024-04-01 04:47:24 +05:30
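
The arithmetic behind the two filesize entries above, as a sketch: with `tbr` in SI kilobits per second and each format's own duration in seconds,

```python
def filesize_approx(tbr_kbps: float, duration_s: float) -> int:
    # tbr is in SI kilobits/sec (86e3b82261), so bytes = kbit * 1000 / 8;
    # using the per-format duration (a25a424323) beats the video duration
    return int(tbr_kbps * 1000 / 8 * duration_s)

# e.g. a 128 kbit/s audio format lasting 212.4 s ~= 3,398,400 bytes
```
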
pukkandan
e7b17fce14 [ie/youtube] Update android params
Discovered by LuanRT - https://github.com/LuanRT/YouTube.js/pull/624

Closes #9554
2024-04-01 01:31:53 +05:30
bashonly
a2d0840739 [ie/soundcloud] Adjust format sorting (#9584)
- Adapt to 86a972033e

Authored by: bashonly
2024-03-31 20:01:33 +00:00
pukkandan
86a972033e Infer acodec for single-codec containers
2024-03-31 22:50:21 +05:30
bashonly
50c2935231 [ie] Add extractor impersonate API (#9474)
Authored by: bashonly, Grub4K, pukkandan
2024-03-30 23:18:07 +00:00
bashonly
0df63cce69 [ie/thisoldhouse] Support Brightcove embeds (#9576)
Closes #9570
Authored by: bashonly
2024-03-30 23:06:20 +00:00
bashonly
63f685f341 [ie/tiktok] Prefer non-bytevc2 formats (#9575)
Closes #9567
Authored by: bashonly
2024-03-30 22:54:00 +00:00
Simon Sawicki
3699eeb67c [utils] traverse_obj: Allow unbranching using all and any (#9571)
Authored by: Grub4K
2024-03-30 19:54:43 +01:00
Simon Sawicki
979ce2e786 [test] traversal: Separate traversal tests (#9574)
Authored by: Grub4K
2024-03-30 19:32:07 +01:00
bashonly
58dd0f8d1e [build] Optional dependencies cleanup (#9550)
Authored by: bashonly
2024-03-29 23:24:40 +00:00
bashonly
cb61e20c26 [ie/tiktok] Fix API extraction (#9548)
Closes #9506
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2024-03-29 23:20:14 +00:00
bashonly
9c42b7eef5 [fd/ffmpeg] Accept output args from info dict (#9278)
Authored by: bashonly
2024-03-29 23:16:46 +00:00
coletdjnz
e5d4f11104 [rh:websockets] Workaround race condition causing issues on PyPy (#9514)
Authored by: coletdjnz
2024-03-23 11:27:10 +13:00
src-tinkerer
bc2b8c0596 [ie/fathom] Add extractor (#9495)
Closes #8541
Authored by: src-tinkerer
2024-03-22 14:31:01 +00:00
sta1us
aa7e9ae4f4 [ie/xvideos] Support new URL format (#9493) (#9502)
Closes #9493
Authored by: sta1us
2024-03-22 14:28:09 +00:00
Shreyas Minocha
07f5b2f757 [ie/box] Support URLs without file IDs (#9504)
Authored by: shreyasminocha
2024-03-20 23:26:37 +00:00
Daniel Vogt
ff349ff94a [ie/sharepoint] Add extractor (#6531)
Authored by: C0D3D3V, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-03-20 23:20:50 +00:00
Hasan Rüzgar
f859ed3ba1 [ie/loom] Add extractors (#8686)
Closes #3715
Authored by: bashonly, hruzgar

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-03-20 23:14:37 +00:00
Aron Buzinkay
17d248a587 [ie/youtube:search] Fix params for uncensored results (#9456)
Closes #9156
Authored by: alb, pukkandan
2024-03-19 23:25:04 +00:00
sepro
388c979ac6 [docs] Update yt-dlp tagline (#9481)
Authored by: seproDev, bashonly, coletdjnz, Grub4K, pukkandan
2024-03-19 18:14:04 +01:00
sepro
22e4dfacb6 [ie/youtube] Fix tbr calculation (#9489)
Authored by: pukkandan

Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2024-03-18 18:07:22 +01:00
Trustin
86d2f4d248 [ie/imgur] Fix extraction (#9471)
Closes #9458
Authored by: trwstin
2024-03-17 05:04:55 +00:00
coletdjnz
52f5be1f1e [rh:curlcffi] Add support for curl_cffi
Authored by: coletdjnz, Grub4K, pukkandan, bashonly

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
Co-authored-by: bashonly <bashonly@protonmail.com>
2024-03-16 23:15:11 -05:00
coletdjnz
0b81d4d252 Add new options --impersonate and --list-impersonate-targets
Authored by: coletdjnz, Grub4K, pukkandan, bashonly

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
Co-authored-by: bashonly <bashonly@protonmail.com>
2024-03-16 23:14:13 -05:00
coletdjnz
f849d77ab5 [test] Workaround websocket server hanging (#9467)
Authored by: coletdjnz
2024-03-16 16:57:21 +13:00
bashonly
f2868b26e9 [ie/SonyLIVSeries] Fix season extraction (#9423)
Authored by: bashonly
2024-03-14 23:21:27 +00:00
bashonly
be77923ffe [ie/crunchyroll] Extract vo_adaptive_hls formats by default (#9447)
Closes #9439
Authored by: bashonly
2024-03-14 21:42:35 +00:00
bashonly
8c05b3ebae [ie/tiktok] Update API hostname (#9444)
Closes #9441
Authored by: bashonly
2024-03-14 21:35:46 +00:00
jazz1611
0da66980d3 [ie/gofile] Fix extractor (#9446)
Authored by: jazz1611
2024-03-14 21:34:10 +00:00
bashonly
17b96974a3 [build] Update changelog for tarball and sdist (#9425)
Closes #9417
Authored by: bashonly
2024-03-14 21:10:20 +00:00
github-actions[bot]
8463fb510a Release 2024.03.10
Created by: Grub4K

:ci skip all :ci run dl
2024-03-10 19:40:56 +00:00
pukkandan
615a84447e [cleanup] Misc (#8968)
Authored by: pukkandan, bashonly, seproDev
2024-03-11 00:52:28 +05:30
pukkandan
ed3bb2b0a1 [cleanup] Remove unused code (#8968)
Authored by: pukkandan, seproDev
2024-03-11 00:52:20 +05:30
pukkandan
45491a2a30 [utils] Improve repr of DateRange, match_filter_func
2024-03-11 00:51:39 +05:30
sepro
a687226b48 [cleanup, ie] Match both http and https in _VALID_URL (#8968)
Except for Vimeo, since that causes matching collisions.

Authored by: seproDev
2024-03-11 00:51:38 +05:30
pukkandan
93240fc184 [cleanup] Fix misc bugs (#8968)
Closes #8816

Authored by: bashonly, seproDev, pukkandan, Grub4k
2024-03-11 00:51:26 +05:30
pukkandan
47ab66db0f [docs] Misc Cleanup (#8977)
Closes #8355, #8944

Authored by: bashonly, Grub4k, Arthurszzz, seproDev, pukkandan

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
Co-authored-by: bashonly <bashonly@protonmail.com>
Co-authored-by: Arthurszzz <minecraftgamerarthur@gmail.com>
Co-authored-by: Simon Sawicki <accounts@grub4k.xyz>
Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-03-11 00:48:47 +05:30
bashonly
0abf2f1f15 [build] Add transitional setup.py and pyinst.py (#9296)
Authored by: bashonly, Grub4K, pukkandan

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2024-03-10 19:04:30 +00:00
Peter Hosey
2d91b98456 [fd/http] Reset resume length to handle FileNotFoundError (#8399)
Closes #4521
Authored by: boredzo
2024-03-10 15:35:20 +00:00
x11x
8828f4576b [ie/youtube:tab] Fix tags extraction (#9413)
Closes #9412
Authored by: x11x
2024-03-10 15:20:48 +00:00
Simon Sawicki
dbd8b1bff9 Improve 069b2aedae
Authored by: Grub4k
2024-03-10 20:44:53 +05:30
Bl4Cc4t
8993721ecb [ie/roosterteeth] Support bonus features (#9406)
Authored by: Bl4Cc4t
2024-03-10 15:11:25 +00:00
bashonly
263a4b55ac [core] Handle --load-info-json format selection errors (#9392)
Closes #9388
Authored by: bashonly
2024-03-09 23:10:10 +00:00
bashonly
b136e2af34 Bugfix for 104a7b5a46 (#9394)
Authored by: bashonly
2024-03-09 23:07:59 +00:00
bashonly
b2cc150ad8 [ie/roosterteeth] Add Brightcove fallback (#9403)
Authored by: bashonly
2024-03-09 23:05:33 +00:00
Xpl0itU
785ab1af7f [ie/crtvg] Fix _VALID_URL (#9404)
Authored by: Xpl0itU
2024-03-09 23:03:18 +00:00
bashonly
7aad06541e [ie/youtube] Further bump client versions (#9395)
Authored by: bashonly
2024-03-09 15:51:20 +00:00
DmitryScaletta
d3d4187da9 [ie/duboku] Fix m3u8 formats extraction (#9161)
Closes #9159
Authored by: DmitryScaletta
2024-03-09 15:46:11 +00:00
sepro
c8c9039e64 [ie/generic] Follow https redirects properly (#9121)
Authored by: seproDev
2024-03-09 01:16:04 +01:00
sepro
df773c3d5d [cleanup] Mark broken and remove dead extractors (#9238)
Authored by: seproDev
2024-03-09 01:02:45 +01:00
sepro
f4f9f6d00e [cleanup] Fix infodict returned fields (#8906)
Authored by: seproDev
2024-03-08 23:36:41 +01:00
bashonly
dfd8c0b696 [ie/roosterteeth] Extract release date and timestamp (#9393)
Authored by: bashonly
2024-03-08 21:18:27 +00:00
James Martindale
dd29e6e5fd [ie/roosterteeth] Extract ad-free streams (#9355)
Closes #7647
Authored by: jkmartindale
2024-03-08 20:55:39 +00:00
bashonly
96f3924bac [ie/craftsy] Fix extractor (#9384)
Closes #9383
Authored by: bashonly
2024-03-07 23:12:43 +00:00
Simon Sawicki
0fcefb92f3 [ie/newgrounds] Fix login and clean up extraction (#9356)
Authored by: mrmedieval, Grub4K
2024-03-07 21:37:13 +01:00
bashonly
e4fbe5f886 [ie/francetv] Fix DAI livestreams (#9380)
Closes #9382
Authored by: bashonly
2024-03-07 18:03:24 +00:00
SirElderling
cd7086c0d5 [ie/RideHome] Add extractor (#8875)
Authored by: SirElderling
2024-03-06 19:04:48 +01:00
bashonly
cf91400a1d [build] Add default optional dependency group (#9295)
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2024-03-04 23:19:37 +00:00
sepro
ac340d0745 [test:websockets] Fix timeout test on Windows (#9344)
Authored by: seproDev
2024-03-04 17:47:38 +01:00
Raphaël Droz
11ffa92a61 [ie/dailymotion] Support search (#8292)
Closes #6126
Authored by: drzraf, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-03-04 17:42:46 +01:00
bashonly
ede624d1db [ie/francetv] Fix m3u8 formats extraction (#9347)
Authored by: bashonly
2024-03-03 23:19:52 +00:00
Mozi
40966e8da2 Bugfix for aa13a8e3dd (#9338)
Closes #9351
Authored by: pzhlkj6612
2024-03-03 23:14:54 +00:00
Roy
eedb38ce40 [ie/dumpert] Improve _VALID_URL (#9320)
Authored by: rvsit
2024-03-03 23:12:16 +00:00
src-tinkerer
6ad11fef65 [ie/CCTV] Fix extraction (#9325)
Closes #9299
Authored by: src-tinkerer
2024-03-02 00:50:23 +00:00
Mozi
f0426e9ca5 [ie/vimeo] Extract live_status and release_timestamp (#9290)
Authored by: pzhlkj6612
2024-03-02 00:41:32 +00:00
bashonly
d9b4154cbc [ie/tiktok] Fix webpage extraction (#9327)
Closes #4992, Closes #8620
Authored by: bashonly
2024-03-02 00:36:07 +00:00
bashonly
9749ac7fec [ie/francetv] Fix extractors (#9333)
Closes #9323
Authored by: bashonly
2024-03-02 00:32:29 +00:00
bashonly
413d367580 [ie/youtube] Bump Android and iOS client versions (#9317)
Closes #9316
Authored by: bashonly
2024-02-29 23:02:50 +00:00
Mozi
aa13a8e3dd [ie/niconico] Support DMS formats (#9282)
Closes #8389, Closes #8758, Closes #9254
Authored by: pzhlkj6612, xpadev-net
2024-02-29 22:55:44 +00:00
nixxo
8f423cf805 [ie/rai] Fix m3u8 formats extraction (#9291)
Closes #887
Authored by: nixxo
2024-02-29 22:49:25 +00:00
Dong Heon Hee
804f236611 [ie/chzzk:live] Support --wait-for-video (#9309)
Authored by: hui1601
2024-02-29 11:42:20 +00:00
SirElderling
f00c0def74 [ie/zenporn] Add extractor (#8509)
Closes #8398
Authored by: SirElderling
2024-02-29 11:06:59 +00:00
bashonly
e546e5d3b3 Bugfix for 9ff9466455
Closes #9322
Authored by: bashonly
2024-02-29 04:40:45 -06:00
bashonly
4170b3d712 [ie/MujRozhlas] Fix extraction (#9306)
Closes #9304
Authored by: bashonly
2024-02-28 03:41:51 +00:00
114514ns
9ff9466455 [ie/Douyin] Fix extractor (#9239)
Closes #7854, Closes #7941
Authored by: 114514ns, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-02-28 02:30:58 +00:00
marcdumais
e28e135d6f [ie/altcensored:channel] Fix playlist extraction (#9297)
Authored by: marcdumais
2024-02-25 23:21:08 +00:00
Tobias Gruetzmacher
f1570ab84d Bugfix for 1713c88273 (#9298)
Authored by: TobiX
2024-02-25 23:11:47 +00:00
pukkandan
069b2aedae Create ydl._request_director when needed
2024-02-25 06:06:42 +05:30
Simon Sawicki
5eedc208ec [ie/youtube] Better error when all player responses are skipped (#9083)
Authored by: Grub4K, pukkandan

Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2024-02-24 23:20:22 +00:00
bashonly
464c919ea8 [ie/CloudflareStream] Improve embed detection (#9287)
Partially addresses #7858
Authored by: bashonly
2024-02-24 23:13:26 +00:00
bashonly
3894ab9574 [ie/archiveorg] Fix format URL encoding (#9279)
Closes #9173
Authored by: bashonly
2024-02-24 23:12:04 +00:00
bashonly
b05640d532 [ie/swearnet] Raise for login required (#9281)
Closes #9110
Authored by: bashonly
2024-02-24 23:11:28 +00:00
bashonly
7a29cbbd5f [ie/ntvru] Fix extraction (#9276)
Closes #8347
Authored by: bashonly, dirkf

Co-authored-by: dirkf <fieldhouse@gmx.net>
2024-02-24 23:10:37 +00:00
bashonly
2e8de097ad [ie/vimeo] Fix login (#9274)
Closes #9273
Authored by: bashonly
2024-02-24 23:09:04 +00:00
bashonly
f3d5face83 [ie/CloudflareStream] Improve _VALID_URL (#9280)
Closes #9171
Authored by: bashonly
2024-02-24 22:02:13 +00:00
bashonly
eabbccc439 [build] Support failed build job re-runs (#9277)
Authored by: bashonly
2024-02-24 17:00:27 +00:00
sepro
0de09c5b9e [ie/nebula] Support podcasts (#9140)
Closes #8838
Authored by: seproDev, c-basalt

Co-authored-by: c-basalt <117849907+c-basalt@users.noreply.github.com>
2024-02-24 17:08:47 +01:00
sepro
6a6cdcd182 [core] Warn user when not launching through shell on Windows (#9250)
Authored by: seproDev, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2024-02-24 12:58:03 +01:00
J. Gonzalez
998dffb5a2 [ie/cnbc] Overhaul extractors (#8741)
Closes #5871, Closes #8378
Authored by: gonzalezjo, Noor-5, zhijinwuu, ruiminggu, seproDev

Co-authored-by: Noor Mostafa <93787875+Noor-5@users.noreply.github.com>
Co-authored-by: zhijinwuu <zhijinw@andrew.cmu.edu>
Co-authored-by: ruiminggu <ruimingg@andrew.cmu.edu>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-02-23 17:07:35 +01:00
sepro
29a74a6126 [ie/NerdCubedFeed] Overhaul extractor (#9269)
Authored by: seproDev
2024-02-23 16:59:13 +01:00
bashonly
55f1833376 [ie/twitter] Extract numeric channel_id (#9263)
Authored by: bashonly
2024-02-22 00:49:21 +00:00
gmes78
3d9dc2f359 [ie/Rule34Video] Extract creators (#9258)
Authored by: gmes78
2024-02-22 00:48:49 +00:00
bashonly
28e53d60df [ie/twitter] Extract bitrate for HLS audio formats (#9257)
Closes #9202
Authored by: bashonly
2024-02-21 08:39:10 +00:00
fireattack
f591e605df [ie/openrec] Pass referer for m3u8 formats (#9253)
Closes #6946
Authored by: fireattack
2024-02-21 03:46:55 +00:00
Jade Laurence Empleo
9a8afadd17 [plugins] Handle PermissionError (#9229)
Authored by: syntaxsurge, pukkandan
2024-02-20 14:37:37 +05:30
Lev
104a7b5a46 [ie] Migrate commonly plural fields to lists (#8917)
Authored by: llistochek, pukkandan
Related: #3944
2024-02-20 12:49:24 +05:30
alard
7e90e34fa4 [extractor/goplay] Fix extractor (#6654)
Authored by: alard
Closes #6235
2024-02-20 03:00:14 +05:30
Alard
4ce57d3b87 [ie] Support multi-period MPD streams (#6654)
2024-02-20 02:54:01 +05:30
pukkandan
ffff1bc659 Fix 3725b4f0c9
2024-02-20 02:31:56 +05:30
DmitryScaletta
4f04347909 [ie/FlexTV] Add extractor (#9178)
Closes #9175
Authored by: DmitryScaletta
2024-02-19 00:40:34 +00:00
garret
4392447d94 [ie/NhkRadiru] Extract extended description (#9162)
Authored by: garret1317
2024-02-19 00:32:44 +00:00
bashonly
43cfd462c0 Bugfix for 775cde82dc (#9241)
Authored by: bashonly
2024-02-18 20:33:23 +00:00
Mozi
974d444039 [ie/niconico] Remove legacy danmaku extraction (#9209)
Closes #8684
Authored by: pzhlkj6612
2024-02-17 22:51:43 +00:00
Elan Ruusamäe
80ed8bdeba [ie/ERRJupiter] Improve _VALID_URL (#9218)
Authored by: glensc
2024-02-17 22:48:18 +00:00
feederbox826
de954c1b4d [ie/pornhub] Fix login support (#9227)
Closes #7981
Authored by: feederbox826
2024-02-17 22:46:05 +00:00
coletdjnz
0085e2bab8 [rh] Remove additional logging handlers on close (#9032)
Fixes https://github.com/yt-dlp/yt-dlp/issues/8922

Authored by: coletdjnz
2024-02-18 11:32:34 +13:00
bashonly
73fcfa39f5 Bugfix for beaa1a4455 (#9235)
[build:Makefile] Restore compatibility with GNU Make <4.0

- The != variable assignment operator is not supported by GNU Make <4.0
- $(shell) is a no-op in BSD Make and assigns an empty string to the var
- Try to assign with != and fall back to $(shell) if not assigned (?=)

- Old versions of BSD find have different -exec behavior
- Pipe to `sed` instead of using `find ... -exec dirname {}`

- BSD tar does not support --transform, --owner or --group
- Allow user to specify path to GNU tar by passing GNUTAR variable

- pandoc vars are immediately evaluated with != in gmake>=4 and bmake
- Suppress stderr output for pandoc -v in case it is not installed
- Use string comparison instead of int comparison for pandoc version

Authored by: bashonly
2024-02-17 21:23:54 +00:00
DmitryScaletta
41d6b61e98 [ie/Utreon] Support playeur.com (#9182)
Closes #9180
Authored by: DmitryScaletta
2024-02-17 21:39:48 +01:00
sepro
0bee29493c [ie/Screencastify] Update _VALID_URL (#9232)
Authored by: seproDev
2024-02-17 20:49:10 +01:00
sepro
644738ddaa [ie/OneFootball] Fix extractor (#9222)
Authored by: seproDev
2024-02-17 20:48:15 +01:00
sepro
c168d8791d [ie/Nova] Fix embed extraction (#9221)
Authored by: seproDev
2024-02-17 20:47:19 +01:00
diman8
ddd4b5e10a [ie/SVTPage] Fix extractor (#8938)
Closes #8930
Authored by: diman8
2024-02-16 16:59:25 +00:00
nixxo
f788149237 [ie/rai] Filter unavailable formats (#9189)
Closes #9154
Authored by: nixxo
2024-02-16 00:20:58 +00:00
barsnick
017adb28e7 [ie/LinkedIn] Fix metadata and extract subtitles (#9056)
Closes #9003
Authored by: barsnick
2024-02-16 00:19:00 +00:00
ringus1
2e30b5567b [ie/facebook] Improve extraction
Partially addresses #4311

Authored by: jingtra, ringus1

Co-authored-by: Jing Kjeldsen <jingtra@gmail.com>
2024-02-15 16:51:43 -06:00
bashonly
beaa1a4455 [build:Makefile] Ensure compatibility with BSD make (#9210)
Authored by: bashonly
2024-02-15 22:42:43 +00:00
Florian Meißner
fb44020fa9 [build:Makefile] Fix man pages generated by pandoc>=3 (#7047)
Closes #7046, Closes #8481
Authored by: t-nil
2024-02-14 21:12:34 +00:00
sepro
3dc9232e1a [ie/MagellanTV] Support episodes (#9199)
Authored by: seproDev
2024-02-13 20:53:17 +01:00
sepro
9401736fd0 [ie/LeFigaroVideoEmbed] Fix extractor (#9198)
Authored by: seproDev
2024-02-13 20:52:41 +01:00
sepro
cd0443fb14 [ie/Funk] Fix extractor (#9194)
Authored by: seproDev
2024-02-13 04:12:17 +01:00
sepro
03536126d3 [ie/CrooksAndLiars] Fix extractor (#9192)
Authored by: seproDev
2024-02-13 04:11:40 +01:00
sepro
1ed5ee2f04 [ie/Ant1NewsGrEmbed] Fix extractor (#9191)
Authored by: seproDev
2024-02-13 04:11:17 +01:00
bashonly
3876429d72 [build] Bump actions/upload-artifact to v4 and adjust workflows
Authored by: bashonly
2024-02-11 19:09:03 +01:00
bashonly
b0059f0413 [build] Bump conda-incubator/setup-miniconda to v3
Authored by: bashonly
2024-02-11 19:09:03 +01:00
bashonly
b14e818b37 [ci] Bump actions/setup-python to v5
Authored by: bashonly
2024-02-11 19:09:03 +01:00
bashonly
867f637b95 [cleanup] Build files cleanup
- Fix `AUTHORS` file by doing an unshallow checkout
- Update triggers for nightly/master release

Authored by: bashonly
2024-02-11 19:09:03 +01:00
bashonly
920397634d [build] Fix secretstorage for ARM builds
Authored by: bashonly
2024-02-11 19:09:03 +01:00
bashonly
b8a433aaca [devscripts] install_deps: Add script and migrate to it
Authored by: bashonly
2024-02-11 19:09:03 +01:00
Simon Sawicki
fd647775e2 [devscripts] tomlparse: Add makeshift toml parser
Authored by: Grub4K
2024-02-11 19:09:02 +01:00
bashonly
775cde82dc [build] Migrate to pyproject.toml and hatchling
Authored by: bashonly
2024-02-11 19:09:02 +01:00
bashonly
868d2f60a7 [build:Makefile] Add automated CODE_FOLDERS and CODE_FILES
Authored by: bashonly
2024-02-11 19:08:55 +01:00
bashonly
a1b7784289 [build] Move bundle scripts into bundle submodule
Authored by: bashonly
2024-02-11 18:17:24 +01:00
lauren n. liberda
882e3b753c [ie/tvp] Support livestreams (#8860)
Closes #8824
Authored by: selfisekai
2024-02-10 00:11:34 +01:00
Dmitry Meyer
540b682981 [ie/Boosty] Add extractor (#9144)
Closes #5900, Closes #8704
Authored by: un-def
2024-02-09 16:34:56 +01:00
SirElderling
05420227aa [ie/nytimes] Extract timestamp (#9142)
Authored by: SirElderling
2024-02-05 20:39:07 +00:00
Chocobozzz
35d96982f1 [ie/peertube] Update instances (#9070)
Authored by: Chocobozzz
2024-02-05 20:58:32 +01:00
DmitryScaletta
acaf806c15 [ie/nuum] Add extractors (#8868)
Authored by: DmitryScaletta, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-02-05 03:17:39 +01:00
SirElderling
07256b9fee [ie/nytimes] Overhaul extractors (#9075)
Closes #2899, Closes #8605
Authored by: SirElderling
2024-02-05 00:35:52 +00:00
c-basalt
e439693f72 [ie/bilibili] Support --no-playlist (#9139)
Addresses #8499
Authored by: c-basalt
2024-02-04 23:28:45 +00:00
Michal
96d0f8c1cb [ie/eporner] Extract AV1 formats (#9028)
Authored by: michal-repo
2024-02-04 23:25:13 +00:00
YoshichikaAAA
e3ce2b385e [ie/radiko] Extract more metadata (#9115)
Authored by: YoshichikaAAA
2024-02-03 18:44:17 +00:00
sepro
4253e3b7f4 [ie/CCMA] Extract 1080p DASH formats (#9130)
Closes #5755
Authored by: seproDev
2024-02-03 15:59:43 +01:00
bashonly
8e765755f7 [ie/vimeo] Fix API headers (#9125)
Closes #9124
Authored by: bashonly
2024-02-02 21:15:04 +00:00
c-basalt
ffa017cfc5 [ie/BiliBiliSearch] Set cookie to fix extraction (#9119)
Closes #5083
Authored by: c-basalt
2024-02-02 21:08:29 +00:00
HobbyistDev
a0d50aabc5 [ie/orf:on] Add extractor (#9113)
Closes #8903
Authored by: HobbyistDev
2024-02-02 20:57:53 +00:00
HobbyistDev
2f4b575946 [ie/zetland] Add extractor (#9116)
Closes #9024
Authored by: HobbyistDev
2024-02-02 20:56:29 +00:00
garret
fc2cc626f0 [ie/cineverse] Detect when login required (#9081)
Partially addresses #9072
Authored by: garret1317
2024-01-31 20:21:59 +00:00
columndeeply
a2bac6b7ad [ie/PrankCastPost] Add extractor (#8933)
Authored by: columndeeply
2024-01-31 20:16:07 +00:00
rrgomes
4b8b0dded8 [ie/nfb] Add support for onf.ca and series (#8997)
Closes #8198
Authored by: bashonly, rrgomes

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2024-01-31 18:00:15 +00:00
jazz1611
4a6ff0b47a [ie/redtube] Support redtube.com.br URLs (#9103)
Authored by: jazz1611
2024-01-31 17:56:29 +00:00
Radu Manole
62c65bfaf8 [ie/NinaProtocol] Add extractor (#8946)
Closes #8709, Closes #8764
Authored by: RaduManole, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-31 18:41:31 +01:00
bashonly
d63eae7e7f [core] Don't select storyboard formats as fallback
Closes #7715
Authored by: bashonly
2024-01-31 03:17:51 -06:00
Simon Sawicki
2792092afd [cookies] Improve error message for Windows --cookies-from-browser chrome issue (#9080)
Authored by: Grub4K
2024-01-31 09:56:14 +01:00
Simon Sawicki
cbed249aaa [cookies] Fix --cookies-from-browser for snap Firefox (#9016)
Authored by: Grub4K
2024-01-31 09:43:52 +01:00
Simon Sawicki
3725b4f0c9 [core] Add --compat-options 2023 (#9084)
Authored by: Grub4K
2024-01-31 09:35:35 +01:00
sepro
67bb70cd70 [ie/Vbox7] Fix extractor (#9100)
Closes #1098, Closes #5661
Authored by: seproDev
2024-01-29 21:16:46 +01:00
kclauhk
9b5efaf86b [ie/facebook] Support events (#9055)
Closes #5355
Authored by: kclauhk
2024-01-29 19:43:41 +00:00
sepro
999ea80beb [ie/art19] Add extractors (#9099)
Authored by: seproDev
2024-01-29 20:38:25 +01:00
Nur Mahmud Ul Alam Tasin
41b6cdb419 [ie/viewlift] Add support for chorki.com (#9095)
Closes #3369
Authored by: NurTasin
2024-01-28 22:33:44 +00:00
Danish Humair
02e343f6ef [ie/MedalTV] Fix extraction (#9098)
Closes #8766
Authored by: Danish-H
2024-01-28 21:23:52 +00:00
Elan Ruusamäe
a514cc2feb [ie/ERRJupiter] Add extractor (#8549)
Authored by: glensc
2024-01-28 19:58:34 +01:00
kclauhk
87286e93af [ie/facebook] Support permalink URLs (#9061)
Authored by: kclauhk
2024-01-28 18:50:03 +00:00
kclauhk
3c4d3ee491 [ie/facebook] Improve thumbnail extraction (#9060)
Authored by: kclauhk
2024-01-28 18:41:56 +00:00
kclauhk
5b68c478fb [ie/facebook] Set format HTTP chunk size (#9058)
Closes #8197
Authored by: bashonly, kclauhk
2024-01-28 18:39:14 +00:00
Christopher Schreiner
9526b1f179 [ie/adn] Improve auth error handling (#9068)
Closes #9067
Authored by: infanf
2024-01-28 16:03:19 +00:00
vista-narvas
0023af81fb [ie/RumbleChannel] Fix extractor (#9092)
Closes #8782
Authored by: vista-narvas, Pranaxcau
2024-01-28 15:32:19 +00:00
Christian Kündig
cae6e46107 [ie/PlaySuisse] Add login support (#9077)
Closes #7974
Authored by: chkuendig
2024-01-28 02:19:54 +00:00
jazz1611
c91d8b1899 [ie/redtube] Fix formats extraction (#9076)
Authored by: jazz1611
2024-01-28 02:15:29 +00:00
jazz1611
77c2472ca1 [ie/Gofile] Fix extraction (#9074)
Closes #9073
Authored by: jazz1611
2024-01-28 02:12:40 +00:00
shmohawk
d79c7e9937 [ie/Txxx] Extract thumbnails (#9063)
Authored by: shmohawk
2024-01-28 02:10:20 +00:00
Caesim404
5dda3b291f [ie/lsm,cloudycdn] Add extractors (#8643)
Closes #2978
Authored by: Caesim404
2024-01-28 02:02:09 +00:00
Simon Sawicki
5f25f348f9 [ie/pr0gramm] Enable POL filter and provide tags without login (#9051)
Authored by: Grub4K
2024-01-23 23:20:13 +01:00
kclauhk
a40b0070c2 [ie/facebook:ads] Add extractor (#8870)
Closes #8083
Authored by: kclauhk
2024-01-22 06:28:11 +00:00
chtk
9cd9044790 [ie/Floatplane] Improve metadata extraction (#8934)
Authored by: chtk
2024-01-22 06:57:52 +01:00
John Victor
f0e8bc7c60 [ie/patreon] Fix embedded HLS extraction (#8993)
Closes #8973
Authored by: johnvictorfs
2024-01-21 22:36:59 +00:00
Stefan Lobbenmeier
c099ec9392 [ie/ard:mediathek] Support cookies to verify age (#9037)
Closes #9035
Authored by: StefanLobbenmeier
2024-01-21 20:54:11 +00:00
gmes78
c0ecceeefe [ie/Rule34Video] Fix _VALID_URL (#9044)
Authored by: gmes78
2024-01-21 18:56:01 +00:00
u-spec-png
3e083191cd [ie/Newgrounds:user] Fix extractor (#9046)
Closes #7308
Authored by: u-spec-png
2024-01-21 18:50:14 +00:00
dasidiot
9f1e9dab21 [ie/motherless] Support uploader playlists (#8994)
Authored by: dasidiot
2024-01-21 02:46:53 +00:00
Martin Renold
5a63454b36 [ie/mx3] Add extractors (#8736)
Authored by: martinxyz
2024-01-21 03:45:38 +01:00
lauren n. liberda
fcaa2e735b [ie/Sejm,RedCDNLivx] Add extractors (#8676)
Authored by: selfisekai
2024-01-21 03:22:26 +01:00
coletdjnz
35f4f764a7 [rh:requests] Apply remove_dot_segments to absolute redirect locations
Fixes https://github.com/yt-dlp/yt-dlp/issues/9020

Authored by: coletdjnz
2024-01-21 10:03:33 +13:00
sepro
f24e44e8cb [webvtt] Don't parse single fragment files (#9034)
Partially addresses #5804
Authored by: seproDev
2024-01-20 06:08:55 +01:00
coletdjnz
811d298b23 [networking] Remove _CompatHTTPError (#8871)
Use `yt_dlp.networking.exceptions.HTTPError`.
`_CompatHTTPError` was to help with transition to the networking framework.

Authored by: coletdjnz
2024-01-20 15:26:50 +13:00
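For downstream callers that previously caught `_CompatHTTPError`, a minimal sketch of the replacement, assuming only the public `YoutubeDL.urlopen` helper and the exception path named above (the URL is a placeholder):

```python
import yt_dlp
from yt_dlp.networking.exceptions import HTTPError

with yt_dlp.YoutubeDL() as ydl:
    try:
        ydl.urlopen('https://example.com/missing')  # placeholder URL returning 404
    except HTTPError as err:  # replaces the removed _CompatHTTPError shim
        print(err.status, err.reason)
```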
coletdjnz
69d3191495 [test] Skip source address tests if the address cannot be bound to (#8900)
Fixes https://github.com/yt-dlp/yt-dlp/issues/8890

Authored by: coletdjnz
2024-01-20 10:39:49 +13:00
HobbyistDev
50e06e21a6 [ie/MLBArticle] Fix extractor (#9021)
Closes #8682
Authored by: HobbyistDev
2024-01-19 20:31:06 +00:00
divStar
4310b6650e [ie/getcourseru] Add extractors (#8873)
Authored by: divStar, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-19 20:27:16 +00:00
SirElderling
1713c88273 [ie/bilibili] Add referer header and fix metadata extraction (#8832)
Closes #6640
Authored by: SirElderling
2024-01-19 20:11:00 +00:00
Alexey Neyman
4a07a455bb [ie/GoPro] Fix extractor (#9019)
Authored by: stilor
2024-01-19 17:49:15 +01:00
Christopher Schreiner
5eb1458be4 [ie/adn] Add support for German site (#8708)
- Add extractor for seasons

Closes #6643, Closes #8945
Authored by: infanf
2024-01-19 17:38:21 +01:00
SirElderling
1a36dbad71 [ie/RinseFMArtistPlaylist] Add extractor (#8794)
Authored by: SirElderling
2024-01-19 17:29:48 +01:00
Snack
12f0427405 [ie/asobichannel] Add extractors (#8700)
Authored by: Snack-X
2024-01-19 17:16:07 +01:00
alien-developers
5154dc0a68 [ie/JioSaavnSong] Support more bitrates (#8834)
Authored by: alien-developers, bashonly

Co-authored-by: bashonly <bashonly@protonmail.com>
2024-01-19 16:48:45 +01:00
ufukk
8ab8465083 [ie/TrtWorld] Add extractor (#8701)
Closes #8455
Authored by: ufukk
2024-01-19 16:38:39 +01:00
ArnauvGilotra
e641aab7a6 [ie/AmadeusTV] Add extractor (#8744)
Closes #8155
Authored by: ArnauvGilotra
2024-01-19 16:27:34 +01:00
DmitryScaletta
20cdad5a2c [ie/KukuluLive] Add extractor (#8877)
Closes #8865
Authored by: DmitryScaletta
2024-01-19 16:21:25 +01:00
SirElderling
43694ce13c [ie/NineNews] Add extractor (#8840)
Closes #8831
Authored by: SirElderling
2024-01-19 16:19:09 +01:00
sefidel
8226a3818f [ie/abematv] Support login for playlists (#8901)
Authored by: sefidel
2024-01-19 09:50:16 +00:00
sefidel
c51316f8a6 [ie/abematv] Fix extraction with cache (#8895)
Closes #6532
Authored by: sefidel
2024-01-19 09:43:13 +00:00
sepro
a281beba8d [ie/naver] Fix extractors (#8883)
Closes #8850, Closes #8692
Authored by: seproDev
2024-01-19 05:41:10 +01:00
DmitryScaletta
ba6b0c8261 [ie/chzzk] Add extractors (#8887)
Closes #8804
Authored by: DmitryScaletta
2024-01-19 04:16:21 +01:00
Karavellas
6171b050d7 [ie/ElementorEmbed] Add extractor (#8948)
Authored by: pompos02, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-19 04:00:49 +01:00
Giulio Muscarello
aa5dcc4ee6 [ie/IlPost] Add extractor (#9001)
Authored by: CapacitorSet
2024-01-19 03:51:53 +01:00
Philipp Waldhauer
5e2e24b2c5 [ie/MagentaMusik] Add extractor (#7790)
Authored by: pwaldhauer, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-19 00:52:13 +01:00
gmes78
fee2d8d9c3 [ie/Rule34Video] Extract more metadata (#7416)
Closes #7233
Authored by: gmes78
2024-01-19 00:41:28 +01:00
Akmal
cf9af2c7f1 [ie/Facebook] Add new ID format (#3824)
Closes #3496
Authored by: Wikidepia, kclauhk

Co-authored-by: kclauhk <78251477+kclauhk@users.noreply.github.com>
2024-01-19 00:40:08 +01:00
HobbyistDev
cf6413e840 [ie/BiliIntl] Fix and improve subtitles extraction (#7077)
Closes #7075, Closes #6664
Authored by: HobbyistDev, itachi-19, dirkf, seproDev

Co-authored-by: itachi-19 <16500619+itachi-19@users.noreply.github.com>
Co-authored-by: dirkf <fieldhouse@gmx.net>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-19 00:27:25 +01:00
jazz1611
5498729c59 [ie/GoogleDrive] Fix source file extraction (#8990)
Closes #8976
Authored by: jazz1611
2024-01-19 00:24:34 +01:00
Nicolas Appriou
393b487a4e [ie/ArteTV] Separate closed captions (#8231)
Authored by: Nicals, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-19 00:23:29 +01:00
Bibhav48
4d9dc0abe2 [ie/cloudflarestream] Extract subtitles (#9007)
Closes #8830
Authored by: Bibhav48
2024-01-18 21:20:04 +00:00
Andrew Gibson
014cb5774d [ie/aenetworks] Rating should be optional for AP extraction (#9005)
Authored by: agibson-fl
2024-01-18 21:18:04 +00:00
Finn R. Gärtner
8e6e365172 [ie/Piapro] Improve _VALID_URL (#8999)
Authored by: FinnRG
2024-01-14 18:28:03 +00:00
Max
95e82347b3 [ie/Viously] Add extractor (#8927)
Replaces Turbo extractor

Authored by: nbr23, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2024-01-09 04:11:52 +01:00
DmitryScaletta
5b8c69ae04 [ie/twitch] Fix m3u8 extraction (#8960)
Closes #8958
Authored by: DmitryScaletta
2024-01-09 02:47:13 +00:00
garret
5af1f19787 [ie/NhkRadiruLive] Make metadata extraction non-fatal (#8956)
Authored by: garret1317
2024-01-08 17:59:44 +00:00
Simon Sawicki
b6951271ac [ie/ard:mediathek] Revert to using old id (#8916)
Authored by: Grub4K
2024-01-05 21:34:38 +01:00
Simon Sawicki
ffbd4f2a02 [utils] traverse_obj: Support xml.etree.ElementTree.Element (#8911)
Authored by: Grub4K
2024-01-05 21:26:17 +01:00
mara004
292d60b1ed [cleanup] Fix typo in README.md (#8894)
Authored by: antonkesy
2024-01-05 18:13:46 +01:00
Ralph Drake
85b33f5c16 [cookies] Fix --cookies-from-browser with macOS Firefox profiles (#8909)
Ref: https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles#firefox:mac

Closes #8898
Authored by: RalphORama
2024-01-02 00:58:36 +00:00
DmitryScaletta
85a2d07c1f [ie/Bigo] Fix JSON extraction (#8893)
Closes #8852
Authored by: DmitryScaletta
2023-12-31 13:04:11 +00:00
github-actions[bot]
9f40cd2896 Release 2023.12.30
Created by: bashonly

:ci skip all :ci run dl
2023-12-30 21:43:13 +00:00
bashonly
f10589e345 [docs] Update youtube-dl merge commit in README.md
Authored by: bashonly
2023-12-30 15:39:06 -06:00
Simon Sawicki
f9fb3ce86e [cleanup] Misc (#8598)
Authored by: bashonly, pukkandan, seproDev, Grub4K

Co-authored-by: bashonly <bashonly@protonmail.com>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2023-12-30 22:27:36 +01:00
sepro
5f009a094f [ie/ARD] Overhaul extractors (#8878)
Closes #8731, Closes #6784, Closes #2366, Closes #2975, Closes #8760
Authored by: seproDev
2023-12-30 21:44:32 +01:00
Simon Sawicki
225cf2b830 Fix 2d1d683a54
Authored by: Grub4K
2023-12-26 20:07:09 +01:00
Simon Sawicki
2d1d683a54 [devscripts] run_tests: Create Python script (#8720)
Authored by: Grub4K
2023-12-26 18:30:04 +01:00
Simon Sawicki
65de7d204c Update to ytdl-commit-be008e6 (#8836)
- [utils] Make restricted filenames ignore some Unicode categories (by dirkf)
- [ie/telewebion] Fix extraction (by Grub4K)
- [ie/imgur] Overhaul extractor (by bashonly, Grub4K)
- [ie/EpidemicSound] Add extractor (by Grub4K)

Authored by: bashonly, dirkf, Grub4K

Co-authored-by: bashonly <bashonly@protonmail.com>
2023-12-26 01:40:24 +01:00
kclauhk
c39358a54b [ie/Facebook] Fix Memories extraction (#8681)
- Support group /posts/ URLs
- Raise a proper error message if no formats are found

Closes #8669
Authored by: kclauhk
2023-12-24 23:43:35 +01:00
Lars Strojny
1f8bd8eba8 [ie/ARDBetaMediathek] Fix series extraction (#8687)
Closes #7666
Authored by: lstrojny
2023-12-24 23:38:21 +01:00
Simon Sawicki
00cdda4f6f [core] Fix format selection parse error for CPython 3.12 (#8797)
Authored by: Grub4K
2023-12-24 22:09:01 +01:00
bashonly
116c268438 [ie/twitter] Work around API rate-limit (#8825)
Closes #8762
Authored by: bashonly
2023-12-24 16:41:28 +00:00
bashonly
e7d22348e7 [ie/twitter] Prioritize m3u8 formats (#8826)
Closes #8117
Authored by: bashonly
2023-12-24 16:40:50 +00:00
bashonly
50eaea9fd7 [ie/instagram] Fix stories extraction (#8843)
Closes #8290
Authored by: bashonly
2023-12-24 16:40:03 +00:00
bashonly
f45c4efcd9 [ie/litv] Fix premium content extraction (#8842)
Closes #8654
Authored by: bashonly
2023-12-24 16:33:16 +00:00
Simon Sawicki
13b3cb3c2b [ci] Run core tests only for core changes (#8841)
Authored by: Grub4K
2023-12-24 00:11:10 +01:00
Nicolas Dato
0d531c35ec [ie/RudoVideo] Add extractor (#8664)
Authored by: nicodato
2023-12-22 22:52:07 +01:00
barsnick
bc4ab17b38 [cleanup] Fix spelling of IE_NAME (#8810)
Authored by: barsnick
2023-12-22 02:32:29 +01:00
bashonly
632b8ee54e [core] Release workflow and Updater cleanup (#8640)
- Only use trusted publishing with PyPI and remove support for PyPI tokens from release workflow
- Clean up improper actions syntax in the build workflow inputs
- Refactor Updater to allow for consistent unit testing with `UPDATE_SOURCES`

Authored by: bashonly
2023-12-21 21:06:26 +00:00
barsnick
c919b68f7e [ie/bbc] Extract more formats (#8321)
Closes #4902
Authored by: barsnick, dirkf
2023-12-21 20:47:32 +00:00
bashonly
19741ab8a4 [ie/bbc] Fix JSON parsing bug
Authored by: bashonly
2023-12-21 14:46:00 -06:00
bashonly
37755a037e [test:networking] Update tests for OpenSSL 3.2 (#8814)
Authored by: bashonly
2023-12-20 19:03:54 +00:00
coletdjnz
196eb0fe77 [networking] Strip whitespace around header values (#8802)
Fixes https://github.com/yt-dlp/yt-dlp/issues/8729
Authored by: coletdjnz
2023-12-20 19:15:38 +13:00
Mozi
db8b4edc7d [ie/JoqrAg] Add extractor (#8384)
Authored by: pzhlkj6612
2023-12-19 14:21:47 +00:00
bashonly
1c54a98e19 [ie/twitter] Extract stale tweets (#8724)
Closes #8691
Authored by: bashonly
2023-12-19 13:24:55 +00:00
Simon Sawicki
00a3e47bf5 [ie/bundestag] Add extractor (#8783)
Authored by: Grub4K
2023-12-18 21:32:08 +01:00
Amir Y. Perehodnik
c5f01bf7d4 [ie/Maariv] Add extractor (#8331)
Authored by: amir16yp
2023-12-18 16:52:43 +01:00
Tristan Charpentier
c91af948e4 [ie/RinseFM] Add extractor (#8778)
Authored by: hashFactory
2023-12-17 14:07:55 +00:00
Pandey Ganesha
6b5d93b0b0 [ie/youtube] Fix like_count extraction (#8763)
Closes #8759
Authored by: Ganesh910
2023-12-13 07:04:12 +00:00
pukkandan
298230e550 [webvtt] Fix 15f22b4880
2023-12-13 05:11:45 +05:30
Mozi
d5d1517e7d [ie/eplus] Add login support and DRM detection (#8661)
Authored by: pzhlkj6612
2023-12-12 00:29:36 +00:00
trainman261
7e09c147fd [ie/theplatform] Extract more metadata (#8635)
Authored by: trainman261
2023-12-12 00:00:35 +00:00
Benjamin Krausse
e370f9ec36 [ie] Add media_type field
Authored by: trainman261
2023-12-11 17:57:41 -06:00
SirElderling
b1a1ec1540 [ie/bitchute] Fix and improve metadata extraction (#8507)
Closes #8492
Authored by: SirElderling
2023-12-11 23:56:01 +00:00
Simon Sawicki
0b6f829b1d [utils] traverse_obj: Move is_user_input into output template (#8673)
Authored by: Grub4K
2023-12-06 21:46:45 +01:00
Simon Sawicki
f98a3305eb [ie/pr0gramm] Support variant formats and subtitles (#8674)
Authored by: Grub4K
2023-12-06 21:44:54 +01:00
sepro
04a5e06350 [ie/ondemandkorea] Fix upgraded format extraction (#8677)
Closes #8675
Authored by: seproDev
2023-12-06 18:58:00 +01:00
Nicolas Cisco
b03c89309e [ie/mediastream] Fix authenticated format extraction (#8657)
Authored by: NickCis
2023-12-06 18:55:38 +01:00
Pierrick Guillaume
71f28097fe [ie/francetv] Improve metadata extraction (#8409)
Authored by: Fymyte
2023-12-06 16:10:11 +01:00
pukkandan
044886c220 [ie/youtube] Return empty playlist when channel/tab has no videos
Closes #8634
2023-12-06 03:44:13 +05:30
pukkandan
993edd3f6e [outtmpl] Support multiplication
Related: #8683
2023-12-06 03:44:11 +05:30
OIRNOIR
6a9c7a2b52 [ie/youtube] Support cf.piped.video (#8514)
Authored by: OIRNOIR
Closes #8457
2023-11-29 18:18:58 +05:30
pukkandan
a174c453ee Let read_stdin obey --quiet
Closes #8668
2023-11-29 05:48:40 +05:30
TSRBerry
15f22b4880 [webvtt] Allow spaces before newlines for CueBlock (#7681)
Closes #7453

Ref: https://www.w3.org/TR/webvtt1/#webvtt-cue-block
2023-11-29 04:50:06 +05:30
sepro
9751a457cf [cleanup] Remove dead extractors (#8604)
Closes #1609, Closes #3232, Closes #4763, Closes #6026, Closes #6322, Closes #7912
Authored by: seproDev
2023-11-26 03:09:59 +00:00
bashonly
5a230233d6 [ie/box] Fix formats extraction (#8649)
Closes #5098
Authored by: bashonly
2023-11-26 02:50:23 +00:00
bashonly
4903f452b6 [ie/bfmtv] Fix extractors (#8651)
Closes #8425
Authored by: bashonly
2023-11-26 02:49:18 +00:00
bashonly
ff2fde1b8f [ie/TwitCastingUser] Fix extraction (#8650)
Closes #8653
Authored by: bashonly
2023-11-26 02:47:48 +00:00
bashonly
deeb13eae8 [pp/FFmpegMetadata] Embed stream metadata in single format downloads (#8647)
Closes #8568
Authored by: bashonly
2023-11-26 02:40:09 +00:00
bashonly
bb5a54e6db [ie/youtube] Improve detection of faulty HLS formats (#8646)
Closes #7747
Authored by: bashonly
2023-11-26 02:21:29 +00:00
sepro
628fa244bb [ie/floatplane] Add extractors (#8639)
Closes #5877, Closes #5912
Authored by: seproDev
2023-11-26 02:20:10 +00:00
kclauhk
9cafb9ff17 [ie/facebook] Improve subtitles extraction (#8296)
Authored by: kclauhk
2023-11-26 02:17:16 +00:00
sepro
1732eccc0a [core] Parse release_year from release_date (#8524)
Closes #7263
Authored by: seproDev
2023-11-26 02:12:05 +00:00
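The fallback this commit adds is simple enough to show inline; field names follow the info-dict convention and the values are invented:

```python
# If release_year is absent but release_date (YYYYMMDD) is present,
# the year can be derived from its first four digits.
release_date = '20231126'
release_year = int(release_date[:4])  # 2023
```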
pk
a0b19d319a [core] Support NO_COLOR environment variable (#8385)
Authored by: prettykool, Grub4K
2023-11-20 23:43:52 +01:00
middlingphys
cc07f5cc85 [ie/abematv] Fix season metadata (#8607)
Authored by: middlingphys
2023-11-20 22:39:12 +00:00
coletdjnz
ccfd70f4c2 [rh:websockets] Migrate websockets to networking framework (#7720)
* Adds a basic WebSocket framework
* Introduces new minimum `websockets` version of 12.0
* Deprecates `WebSocketsWrapper`

Fixes https://github.com/yt-dlp/yt-dlp/issues/8439

Authored by: coletdjnz
2023-11-20 08:04:04 +00:00
sepro
45d82be65f [ie/nebula] Overhaul extractors (#8566)
Closes #4300, Closes #5814, Closes #7588, Closes #6334, Closes #6538
Authored by: elyse0, pukkandan, seproDev

Co-authored-by: Elyse <26639800+elyse0@users.noreply.github.com>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2023-11-20 01:03:33 +00:00
Safouane Aarab
3237f8ba29 [ie/allstar] Add extractors (#8274)
Closes #6917
Authored by: S-Aarab
2023-11-20 00:07:19 +00:00
Kyraminol Endyeran
1725e943b0 [ie/vvvvid] Set user-agent to fix extraction (#8615)
Authored by: Kyraminol
2023-11-19 21:30:21 +00:00
c-basalt
9f09bdcfcb [ie/bilibili] Support courses and interactive videos (#8343)
Closes #6135, Closes #8428
Authored by: c-basalt
2023-11-19 21:26:46 +00:00
Simon Sawicki
f124fa4588 [ci] Concurrency optimizations (#8614)
Authored by: Grub4K
2023-11-19 16:05:13 +01:00
JC-Chung
585d0ed9ab [ie/twitcasting] Detect livestreams via API and show page (#8601)
Authored by: JC-Chung, bashonly
2023-11-18 22:14:45 +00:00
SirElderling
1fa3f24d4b [ie/theguardian] Add extractors (#8535)
Closes #8520
Authored by: SirElderling
2023-11-18 21:54:00 +00:00
sepro
ddb2d7588b [ie] Extract from media elements in SMIL manifests (#8504)
Authored by: seproDev
2023-11-18 21:51:18 +00:00
qbnu
f223b1b078 [ie/vocaroo] Do not use deprecated getheader (#8606)
Authored by: qbnu
2023-11-18 21:49:23 +00:00
Berkay
6fe82491ed [ie/twitter:broadcast] Extract concurrent_view_count (#8600)
Authored by: sonmezberkay
2023-11-18 21:46:22 +00:00
sepro
34df1c1f60 [ie/vidly] Add extractor (#8612)
Authored by: seproDev
2023-11-18 20:28:25 +00:00
Simon Sawicki
1d24da6c89 [ie/nintendo] Fix Nintendo Direct extraction (#8609)
Authored by: Grub4K
2023-11-18 21:04:42 +01:00
Elan Ruusamäe
66a0127d45 [ie/duoplay] Add extractor (#8542)
Authored by: glensc
2023-11-16 22:46:29 +00:00
Raphaël Droz
3f90813f06 [ie/altcensored] Add extractor (#8291)
Authored by: drzraf
2023-11-16 22:24:12 +00:00
Ha Tien Loi
64de1a4c25 [ie/zingmp3] Add support for radio and podcasts (#7189)
Authored by: hatienl0i261299
2023-11-16 22:08:00 +00:00
sepro
f96ab86cd8 [ie/drtv] Set default ext for m3u8 formats (#8590)
Closes #8589
Authored by: seproDev
2023-11-16 20:46:13 +00:00
bashonly
f4b95acafc Remove Python 3.7 support (#8361)
Closes #7803
Authored by: bashonly
2023-11-16 18:39:00 +00:00
github-actions[bot]
fe6c82ccff Release 2023.11.16
Created by: bashonly

:ci skip all :ci run dl
2023-11-16 00:01:38 +00:00
bashonly
24f827875c [build] Make secretstorage an optional dependency (#8585)
Authored by: bashonly
2023-11-15 23:31:32 +00:00
bashonly
15cb3528cb [ie/abc.net.au:iview:showseries] Fix extraction (#8586)
Closes #8554, Closes #8572
Authored by: bashonly
2023-11-15 23:24:55 +00:00
JC-Chung
2325d03aa7 [ie/twitcasting] Fix livestream detection (#8574)
Authored by: JC-Chung
2023-11-15 23:23:18 +00:00
aarubui
e569c2d1f4 [ie/njpwworld] Remove (#8570)
Authored by: aarubui
2023-11-15 23:21:33 +00:00
TravisDupes
a489f07150 [ie/dailymotion] Improve _VALID_URL (#7692)
Closes #7601
Authored by: TravisDupes
2023-11-15 23:19:34 +00:00
Boris Nagaev
5efe68b73c [ie/ZenYandex] Fix extraction (#8454)
Closes #8275
Authored by: starius
2023-11-15 23:16:54 +00:00
Awal Garg
b530118e7f [ie/JioSaavn] Add extractors (#8307)
Authored by: awalgarg
2023-11-15 23:15:06 +00:00
Eze Livinsky
dcfad52812 [ie/eltrecetv] Add extractor (#8216)
Authored by: elivinsky
2023-11-15 23:13:05 +00:00
almx
0783fd558e [ie/DRTV] Fix extractor (#8484)
Closes #8298
Authored by: almx, seproDev

Co-authored-by: sepro <4618135+seproDev@users.noreply.github.com>
2023-11-15 22:42:18 +00:00
FrankZ85
0f634dba3a [ie/tv5mondeplus] Extract subtitles (#4209)
Closes #4205
Authored by: FrankZ85
2023-11-15 22:38:52 +00:00
sepro
21dc069bea [ie/beatbump] Update _VALID_URL (#8576)
Authored by: seproDev
2023-11-15 14:34:39 +00:00
github-actions
5d3a3cd493 Release 2023.11.14
Created by: Grub4K

:ci skip all :ci run dl
2023-11-14 22:09:25 +00:00
bashonly
a9d3f4b20a [cleanup] Fix changelog typo
Authored by: bashonly
2023-11-14 15:58:49 -06:00
Simon Sawicki
b012271d01 [cleanup] Misc (#8510)
Authored by: bashonly, coletdjnz, dirkf, gamer191, seproDev, Grub4K
2023-11-14 22:40:38 +01:00
bashonly
f04b5bedad [ie] Do not smuggle http_headers
See: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-3ch3-jhc6-5r8x

Authored by: coletdjnz
2023-11-14 22:04:25 +01:00
bashonly
d4f14a72dc [ie] Do not test truth value of xml.etree.ElementTree.Element (#8582)
Testing the truthiness of an `xml.etree.ElementTree.Element` instance is deprecated in py3.12

Authored by: bashonly
2023-11-14 20:28:18 +00:00
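The pitfall being removed, using nothing but the standard library:

```python
import xml.etree.ElementTree as ET

leaf = ET.fromstring('<video/>')  # a real element that has no children
print(bool(leaf))                 # False, and a DeprecationWarning on Python 3.12
print(leaf is not None)           # True: the correct existence check
```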
bashonly
87264d4fda [test:update] Implement simple updater unit tests
Authored by: bashonly
2023-11-12 18:30:55 -06:00
bashonly
a00af29853 [cleanup] Update documentation for master and nightly channels
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-11-12 18:30:24 -06:00
bashonly
0b6ad22e6a [update] Overhaul self-updater
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-11-12 18:30:14 -06:00
bashonly
5438593a35 [ci] Bump actions/checkout to v4
Authored by: bashonly
2023-11-12 18:30:01 -06:00
bashonly
9970d74c83 [build] Include secretstorage in Linux builds
Authored by: bashonly
2023-11-12 18:29:19 -06:00
bashonly
20314dd46f [core] Include build origin in verbose output
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-11-12 18:29:19 -06:00
bashonly
1d03633c5a [build] Overhaul and unify release workflow
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-11-12 18:29:19 -06:00
Frank Aurich
8afd9468b0 [ie/n-tv.de] Fix extractor (#8414)
Closes #3179
Authored by: 1100101
2023-11-11 21:00:06 +00:00
SirElderling
ef12dbdcd3 [ie/radiocomercial] Add extractors (#8508)
Authored by: SirElderling
2023-11-11 20:10:19 +00:00
LoserFox
46acc418a5 [ie/neteasemusic] Improve metadata extraction (#8531)
Closes #8530
Authored by: LoserFox
2023-11-11 20:08:53 +00:00
Esokrates
6ba3085616 [ie/orf:podcast] Add extractor (#8486)
Closes #5265
Authored by: Esokrates
2023-11-11 20:06:25 +00:00
bashonly
f6e97090d2 [ie/twitter:broadcast] Support --wait-for-video (#8475)
Closes #8473
Authored by: bashonly
2023-11-11 20:05:07 +00:00
bashonly
2863fcf2b6 [ie/theatercomplextown] Add extractors (#8560)
Closes #8491
Authored by: bashonly
2023-11-11 20:04:29 +00:00
bashonly
c76c96677f [ie/thisoldhouse] Add login support (#8561)
Closes #8257
Authored by: bashonly
2023-11-11 20:03:50 +00:00
c-basalt
15b252dfd2 [ie/weibo] Fix extraction (#8463)
Closes #8445
Authored by: c-basalt
2023-11-11 20:02:59 +00:00
Aniol Pagès
312a2d1e8b [ie/LaXarxaMes] Add extractor (#8412)
Authored by: aniolpages
2023-11-11 20:00:31 +00:00
garret
54579be436 [ie/nhk] Improve metadata extraction (#8388)
Authored by: garret1317
2023-11-11 19:59:01 +00:00
sepro
05adfd883a [ie/ondemandkorea] Overhaul extractor (#8386)
Closes #8374
Authored by: seproDev
2023-11-11 19:57:56 +00:00
Martin Pecka
3ff494f6f4 [ie/NovaEmbed] Improve _VALID_URL (#8368)
Authored by: peci1
2023-11-11 19:56:29 +00:00
Mozi
9b5bedf13a [ie/brilliantpala] Fix cookies support (#8352)
Authored by: pzhlkj6612
2023-11-11 19:54:53 +00:00
bashonly
cb480e390d [ie/thisav] Remove (#8346)
Authored by: bashonly
2023-11-11 19:53:59 +00:00
sepro
25a4bd345a [ie/sbs.co.kr] Add extractors (#8326)
Authored by: seproDev
2023-11-11 19:53:10 +00:00
Tom
3906de0755 [ie/zoom] Extract combined view formats (#7847)
Authored by: Mipsters
2023-11-11 19:51:54 +00:00
HitomaruKonpaku
7d337ca977 [ie/twitter:broadcast] Improve metadata extraction (#8383)
Authored by: HitomaruKonpaku
2023-11-11 01:34:22 +00:00
bashonly
10025b715e [core] Add --compat-option manifest-filesize-approx (#8356)
Closes #7623
Authored by: bashonly
2023-11-07 23:10:01 +00:00
bashonly
595ea4a99b [core] Fix format sorting with --load-info-json (#8521)
Closes #7971
Authored by: bashonly
2023-11-07 22:48:15 +00:00
bashonly
2622c804d1 [fd/dash] Force native downloader for --live-from-start (#8339)
Closes #8212
Authored by: bashonly
2023-11-07 21:28:34 +00:00
bashonly
fd8fcf8f4f Revert 39abae2354
The iOS client is not subject to integrity checks and is likely to be a more stable choice going forward

Authored by: bashonly
2023-11-07 14:55:12 -06:00
CrendKing
21b25281c5 [fd/aria2c] Remove duplicate --file-allocation=none (#8332)
Authored by: CrendKing
2023-11-07 17:18:19 +01:00
sepro
4a601c9eff [ie/weverse] Fix login error handling (#8458)
Authored by: seproDev
2023-10-28 15:53:24 +00:00
Shubham
464327acdb [ie/polskieradio:audition] Fix playlist extraction (#8459)
Closes #8419
Authored by: shubhexists
2023-10-28 15:50:08 +00:00
bashonly
ef79d20dc9 [ie/youtube] Check newly uploaded iOS HLS formats (#8336)
Closes #7747
Authored by: bashonly
2023-10-28 08:02:13 +00:00
bashonly
39abae2354 [ie/youtube] Deprioritize iOS client formats (#8337)
Authored by: bashonly
2023-10-28 08:01:31 +00:00
bashonly
4ce2f29a50 [ie/generic] Improve direct video link ext detection (#8340)
Closes #8265
Authored by: bashonly
2023-10-28 00:35:37 +00:00
bashonly
177f0d963e [ie/QDance] Update _VALID_URL (#8426)
Authored by: bashonly
2023-10-28 00:01:31 +00:00
Bart Broere
8e02a4dcc8 [ie/npo] Send POST request to streams API endpoint (#8413)
Closes #6398
Authored by: bartbroere
2023-10-28 00:00:12 +00:00
saintliao
7b8b1cf5eb [ie/twitcasting] Fix livestream extraction (#8427)
Closes #8431
Authored by: JC-Chung, saintliao

Co-authored-by: JC-Chung <52159296+JC-Chung@users.noreply.github.com>
2023-10-27 23:59:13 +00:00
bashonly
a40e0b37df [core] Only ensure playlist thumbnail dir if writing thumbs (#8373)
Bugfix for 2acd1d555e

Closes #8372
Authored by: bashonly
2023-10-22 23:05:22 +00:00
Simon Sawicki
4e38e2ae9d [rh:requests] Handle both bytes and int for IncompleteRead.partial (Fix 8a8b54523a) (#8348)
Authored by: bashonly, coletdjnz, Grub4K
2023-10-15 10:54:38 +02:00
coletdjnz
8a8b54523a [rh:requests] Add handler for requests HTTP library (#3668)
Adds support for HTTPS proxies and persistent connections (keep-alive)

Closes https://github.com/yt-dlp/yt-dlp/issues/1890
Resolves https://github.com/yt-dlp/yt-dlp/issues/4070
Resolves https://github.com/ytdl-org/youtube-dl/issues/32549
Resolves https://github.com/ytdl-org/youtube-dl/issues/14523
Resolves https://github.com/ytdl-org/youtube-dl/issues/13734

Authored by: coletdjnz, Grub4K, bashonly
2023-10-13 23:33:00 +00:00
bashonly
700444c23d [ci] Run core tests with dependencies
Authored by: bashonly, coletdjnz
2023-10-13 18:02:06 -05:00
github-actions
b73c409318 Release 2023.10.13
Created by: bashonly

:ci skip all :ci run dl
2023-10-13 22:22:31 +00:00
bashonly
b634ba742d [cleanup] Misc (#8338)
Authored by: bashonly, gamer191
2023-10-13 22:15:35 +00:00
Riteo
2acd1d555e [core] Ensure thumbnail output directory exists (#7985)
Closes #8203
Authored by: Riteo
2023-10-13 20:01:39 +00:00
sepro
b286ec68f1 [ie/jtbc] Add extractors (#8314)
Authored by: seproDev
2023-10-13 19:30:24 +00:00
sepro
e030b6b6fb [ie/mbn] Add extractor (#8312)
Authored by: seproDev
2023-10-13 19:29:56 +00:00
bashonly
b931664231 [ie/radiko] Fix bug with downloader_options
Closes #8333
Authored by: bashonly
2023-10-13 14:23:39 -05:00
Simon Sawicki
feebf6d02f [ie/youtube] Fix bug with --extractor-retries inf (#8328)
Authored by: Grub4K
2023-10-12 12:20:52 +02:00
bashonly
84e26038d4 [utils] write_xattr: Use os.setxattr if available (#8205)
Closes #8193
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-10-09 18:30:36 +00:00
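A sketch of the preference introduced here; this is an illustration, not yt-dlp's actual helper, which also falls back to the `xattr` module and external tools:

```python
import os

def write_xattr(path, key, value):
    if hasattr(os, 'setxattr'):        # available on Linux
        os.setxattr(path, key, value)  # e.g. key='user.xdg.referrer.url'
    else:
        raise NotImplementedError('fall back to the xattr module or setfattr')
```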
garret
4de94b9e16 [ie/nhk] Fix Japanese-language VOD extraction (#8309)
Closes #8303
Authored by: garret1317
2023-10-09 18:00:26 +00:00
Midnight Veil
88a99c87b6 [ie/tenplay] Add support for seasons (#7939)
Closes #7744
Authored by: midnightveil
2023-10-09 17:55:46 +00:00
Stefan Lobbenmeier
09f815ad52 [ie/ArteTV] Support age-restricted content (#8301)
Closes #7782
Authored by: StefanLobbenmeier
2023-10-09 17:51:37 +00:00
naginatana
b7098d46b5 [ie/youku] Improve tudou.com support (#8160)
Authored by: naginatana
2023-10-09 17:46:16 +00:00
Simon Sawicki
1c51c520f7 [fd/fragment] Improve progress calculation (#8241)
This uses the download speed from all threads and also adds smoothing to speed and eta

Authored by: Grub4K
2023-10-08 02:01:01 +02:00
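Conceptually this smoothing is an exponential moving average; a toy sketch where the constant and names are invented, not yt-dlp's actual values:

```python
def smoothed(prev, sample, factor=0.3):
    """Blend a new speed/ETA sample into the running value."""
    return sample if prev is None else factor * sample + (1 - factor) * prev
```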
Awal Garg
9d7ded6419 [utils] js_to_json: Fix Date constructor parsing (#8295)
Authored by: awalgarg, Grub4K
2023-10-08 01:57:23 +02:00
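A hedged illustration of the behavior this fix targets; the input literal is invented, and the exact output shape should be checked against the test suite:

```python
from yt_dlp.utils import js_to_json

# A JS object containing a Date constructor now converts cleanly to JSON
print(js_to_json('{start: new Date("2023-10-08")}'))
```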
github-actions
4392c4680c Release 2023.10.07
Created by: Grub4K

:ci skip all :ci run dl
2023-10-07 01:28:34 +00:00
Simon Sawicki
377e85a179 [cleanup] Misc (#8300)
* Simplify nuxt regex
* Fix tmz quotes and tests
* Update test python versions

Authored by: dirkf, gamer191, Grub4K
2023-10-07 03:02:45 +02:00
bashonly
03e85ea99d [ie/youtube] Fix heatmap extraction (#8299)
Closes #8189
Authored by: bashonly
2023-10-06 20:00:15 -05:00
Aleri Kaisattera
792f1e64f6 [ie/theta] Remove extractors (#8251)
Authored by: alerikaisattera
2023-10-06 23:56:47 +00:00
trainman261
19c90e405b [cleanup] Update extractor tests (#7718)
Authored by: trainman261
2023-10-06 23:56:19 +00:00
garret
e831c80e8b [ie/nhk] Fix VOD extraction (#8249)
Closes #8242
Authored by: garret1317
2023-10-06 23:05:48 +00:00
Raphaël Droz
0e722f2f3c [ie/lbry] Extract uploader_id (#8244)
Closes #123
Authored by: drzraf
2023-10-06 22:59:42 +00:00
Esme
47c598783c [ie/erocast] Add extractor (#8264)
Closes #4001
Authored by: madewokherd
2023-10-06 22:58:28 +00:00
AS6939
35d9cbaf96 [ie/iq.com] Fix extraction and subtitles (#8260)
Closes #7734, Closes #8123
Authored by: AS6939
2023-10-06 22:56:12 +00:00
garret
2ad3873f0d [ie/radiko] Improve extraction (#8221)
Authored by: garret1317
2023-10-06 22:53:11 +00:00
Umar Getagazov
2f2dda3a7e [ie/substack] Fix download cookies bug (#8219)
Authored by: handlerug
2023-10-06 22:48:54 +00:00
Umar Getagazov
fbcc299bd8 [ie/substack] Fix embed extraction (#8218)
Authored by: handlerug
2023-10-06 22:45:46 +00:00
Raphaël Droz
48cceec1dd [ie/lbry] Add playlist support (#8213)
Closes #5982, Closes #8204
Authored by: drzraf, bashonly, Grub4K
2023-10-06 22:38:26 +00:00
xofe
a9efb4b8d7 [ie/abc.net.au:iview] Improve episode extraction (#8201)
Authored by: xofe
2023-10-06 22:35:11 +00:00
c-basalt
f980df734c [ie/neteasemusic] Fix extractors (#8181)
Closes #4388
Authored by: c-basalt
2023-10-06 22:31:33 +00:00
gillux
91a670a4f7 [ie/LiTV] Fix extractor (#7785)
Closes #5456
Authored by: jiru
2023-10-06 22:27:54 +00:00
bashonly
b095fd3fa9 [ie/WrestleUniverseVOD] Call API with device ID (#8272)
Closes #8271
Authored by: bashonly
2023-10-04 18:01:52 +00:00
bashonly
0730d5a966 [ie/gofile] Fix token cookie bug
Authored by: bashonly
2023-10-04 13:00:33 -05:00
Simon Sawicki
cc8d844152 [ie/xhamster:user] Support creator urls (#8232)
Authored by: Grub4K
2023-10-03 11:33:40 +02:00
coletdjnz
eb5bdbfa70 [ie/youtube] Raise a warning for Incomplete Data instead of an error (#8238)
Closes https://github.com/yt-dlp/yt-dlp/issues/8206

Adds `raise_incomplete_data` extractor arg to revert this behaviour and raise an error.

Authored by: coletdjnz
Co-authored-by: Simon Sawicki <contact@grub4k.xyz>
2023-10-03 06:42:30 +00:00
github-actions
c54ddfba0f Release 2023.09.24
Created by: Grub4K

:ci skip all :ci run dl
2023-09-24 00:38:42 +00:00
Simon Sawicki
088add9567 [cleanup] Misc
Authored by: Grub4K
2023-09-24 02:35:23 +02:00
Simon Sawicki
de015e9307 [core] Prevent RCE when using --exec with %q (CVE-2023-40581)
The shell escape function is now using `""` instead of `\"`. `utils.Popen` has been patched to properly quote commands.

Prior to this fix using `--exec` together with `%q` when on Windows could cause remote code to execute. See https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-42h4-v29r-42qg for reference.

Authored by: Grub4K
2023-09-24 02:29:01 +02:00
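The essence of the vulnerability as a conceptual sketch; this illustrates the cmd.exe quoting rules and is not yt-dlp's actual escaping code:

```python
title = 'innocent" & calc.exe & "'  # attacker-controlled metadata value

old_style = '"%s"' % title.replace('"', '\\"')  # \" lets cmd.exe split the argument
new_style = '"%s"' % title.replace('"', '""')   # "" keeps it one quoted argument

print(old_style)  # "innocent\" & calc.exe & \"" - exploitable
print(new_style)  # "innocent"" & calc.exe & """ - inert
```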
Simon Sawicki
61bdf15fc7 [core] Raise minimum recommended Python version to 3.8 (#8183)
Authored by: Grub4K
2023-09-24 02:24:47 +02:00
bashonly
1eaca74bc2 [ie/nfl.com:plus:replay] Fix extractor (#7838)
Closes #7836
Authored by: bashonly
2023-09-23 23:47:14 +00:00
Mozi
92feb5654c [ie/brilliantpala] Add extractors (#6680)
Authored by: pzhlkj6612
2023-09-23 23:42:29 +00:00
Mozi
698beb9a49 [ie/niconicochannelplus] Add extractors (#5686)
Closes #2537
Authored by: pzhlkj6612
2023-09-23 22:36:34 +00:00
garret
15591940ff [ie/cineverse] Add extractors (#8146)
Also removes AsianCrushIE and AsianCrushPlaylistIE (URLs do not work anymore & old IDs are unavailable).

Closes #8109
Authored by: garret1317
2023-09-23 22:27:13 +00:00
Mozi
6636021206 [ie/PIAULIZAPortal] Add extractor (#7903)
Authored by: pzhlkj6612
2023-09-23 22:15:01 +00:00
garret
eaee21bf71 [ie/Monstercat] Add extractor (#8133)
Closes #8067
Authored by: garret1317
2023-09-23 22:13:48 +00:00
bashonly
5ca095cbcd [cleanup] Misc (#8182)
Closes #7796, Closes #8028
Authored by: barsnick, sqrtNOT, gamer191, coletdjnz, Grub4K, bashonly
2023-09-23 20:00:31 +00:00
bashonly
c2da0b5ea2 [ie/ArteTV] Fix HLS formats extraction
Closes #8156
Authored by: bashonly
2023-09-23 14:54:00 -05:00
Atsushi Watanabe
c1d71d0d9f [ie/twitcasting] Support --wait-for-video (#7975)
Authored by: at-wat
2023-09-21 23:04:05 +00:00
bashonly
661c9a1d02 [test:download] Test for expected_exception
Authored by: at-wat

Co-authored-by: Atsushi Watanabe <atsushi.w@ieee.org>
2023-09-21 17:48:57 -05:00
std-move
568f080518 [ie/iprima] Fix extractor (#7216)
Closes #7229
Authored by: std-move
2023-09-21 22:20:52 +00:00
bashonly
904a19ee93 [ie] Make _search_nuxt_data more lenient
Authored by: std-move

Co-authored-by: std-move <26625259+std-move@users.noreply.github.com>
2023-09-21 16:54:57 -05:00
bashonly
52414d64ca [utils] js_to_json: Handle Array objects
Authored by: Grub4K, std-move

Co-authored-by: std-move <26625259+std-move@users.noreply.github.com>
Co-authored-by: Simon Sawicki <accounts@grub4k.xyz>
2023-09-21 16:51:57 -05:00
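A small demonstration, with an invented input, of what handling `Array` objects means in practice:

```python
import json
from yt_dlp.utils import js_to_json

# JavaScript Array constructors are rewritten into JSON lists
print(json.loads(js_to_json('{"ids": new Array(1, 2, 3)}')))  # {'ids': [1, 2, 3]}
```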
std-move
2269065ad6 [ie/NovaEmbed] Fix extractor (#7910)
Closes #8025
Authored by: std-move
2023-09-21 18:19:52 +00:00
kylegustavo
a5e264d74b [ie/Expressen] Improve _VALID_URL (#8153)
Closes #8141
Authored by: kylegustavo
2023-09-21 17:46:49 +00:00
ClosedPort22
b84fda7388 [ie/bilibili] Extract Dolby audio formats (#8142)
Closes #4050
Authored by: ClosedPort22
2023-09-21 17:45:18 +00:00
Simon
5fccabac27 [ie/rbgtum] Fix extraction and support new URL format (#7690)
Authored by: simon300000
2023-09-21 17:37:58 +00:00
c-basalt
21f40e75df [ie/douyutv] Fix extractors (#7652)
Closes #2494, Closes #7295
Authored by: c-basalt
2023-09-21 17:34:35 +00:00
Elyse
b3febedbeb [ie/Canal1,CaracolTvPlay] Add extractors (#7151)
Closes #5826
Authored by: elyse0
2023-09-21 17:30:32 +00:00
Mozi
295fbb3ae3 [ie/eplus:inbound] Add extractor (#5782)
Authored by: pzhlkj6612
2023-09-21 17:28:20 +00:00
bashonly
35f9a306e6 [dependencies] Handle deprecation of sqlite3.version (#8167)
Closes #8152
Authored by: bashonly
2023-09-21 15:58:53 +00:00
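For reference, the deprecation being handled is in the standard library, not in yt-dlp:

```python
import sqlite3

print(sqlite3.sqlite_version)  # version of the linked SQLite library; still supported
# sqlite3.version is the module's own version string: deprecated in Python 3.12
# and slated for removal in Python 3.14
```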
coletdjnz
9d6254069c Update to ytdl-commit-66ab08 (#8128)
[utils] Revert bbd3e7e, updating docstring, test instead
 66ab0814c4

Authored by: coletdjnz
2023-09-20 19:14:10 +00:00
Simon Sawicki
b532556d0a [ie/pr0gramm] Rewrite extractor (#8151)
Authored by: Grub4K
2023-09-19 21:52:44 +02:00
Rohan Dey
cf11b40ac4 [ie/media.ccc.de:lists] Fix extraction (#8144)
Closes #8138
Authored by: Rohxn16
2023-09-18 23:39:20 +00:00
niemands
40999467f7 [ie/pornbox] Add extractor (#7386)
Authored by: niemands
2023-09-18 23:37:17 +00:00
u-spec-png
8ac5b6d96a [ie/N1Info:article] Fix extractor (#7373)
Authored by: u-spec-png
2023-09-18 23:36:10 +00:00
c-basalt
69b03f84f8 [ie/weibo] Fix extractor and support user extraction (#7657)
Closes #3964, Closes #4673, Closes #6979
Authored by: c-basalt
2023-09-18 23:06:36 +00:00
c-basalt
9e68747f96 [ie/bilibili] Add support for series, favorites and watch later (#7518)
Closes #6719
Authored by: c-basalt
2023-09-18 23:02:00 +00:00
Elyse
ba8e9eb2c8 [ie/radiofrance] Add support for livestreams, podcasts, playlists (#7006)
Closes #4282
Authored by: elyse0
2023-09-18 21:08:40 +00:00
coletdjnz
20fbbd9249 [networking] Fix various socks proxy bugs (#8065)
- Fixed support for IPv6 socks proxies
- Fixed support for IPv6 over socks5
- Fixed --source-address not being obeyed for socks4 and socks5
- Fixed socks4a when the destination address is an IPv4 address

Closes https://github.com/yt-dlp/yt-dlp/issues/7959
Fixes https://github.com/ytdl-org/youtube-dl/issues/15368

Authored by: coletdjnz
Co-authored-by: Simon Sawicki <accounts@grub4k.xyz>
Co-authored-by: bashonly <bashonly@bashonly.com>
2023-09-18 07:33:26 +00:00
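A minimal usage sketch that exercises these code paths; the proxy and source addresses are placeholders:

```python
import yt_dlp

opts = {
    'proxy': 'socks5://127.0.0.1:1080',  # socks4, socks4a and socks5h also work
    'source_address': '192.0.2.10',      # now honored for socks4/socks5 as well
}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['https://example.com/video'])
```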
Sebastian Koch
81f46ac573 [ie/massengeschmack.tv] Fix title extraction (#7813)
Authored by: sb0stn
2023-09-17 20:54:00 +00:00
aky-01
63e0c5748c [ie/IndavideoEmbed] Fix extraction (#8129)
Closes #7190
Authored by: aky-01
2023-09-17 15:16:11 +00:00
Simon
efa2339502 [ie/lecturio] Improve _VALID_URL (#7649)
Authored by: simon300000
2023-09-17 15:11:22 +00:00
soundchaser128
58493923e9 [ie/rule34video] Extract tags (#7117)
Authored by: soundchaser128
2023-09-17 15:09:42 +00:00
Simon Sawicki
30ba233d4c [devscripts] make_changelog: Fix changelog grouping and add networking group (#8124)
Authored by: Grub4K
2023-09-17 13:22:04 +02:00
Simon Sawicki
836e06d246 [core] Fix support for upcoming Python 3.12 (#8130)
This also adds the following test runners:
- `3.12-dev` on `ubuntu-latest`
- `3.12-dev` on `windows-latest`
- `pypy-3.10` on `ubuntu-latest`

Authored by: Grub4K
2023-09-17 12:56:50 +02:00
bashonly
94389b225d [ie/RTVSLO] Fix format extraction (#8131)
Closes #8020
Authored by: bashonly
2023-09-17 02:42:42 +00:00
bashonly
9652bca1bd [ie/web.archive:vlive] Remove extractor (#8132)
Closes #8122
Authored by: bashonly
2023-09-17 00:38:09 +00:00
bashonly
538d37671a [ie/AmazonMiniTV] Fix extractors
Closes #7817
Authored by: GautamMKGarg, bashonly

Co-authored by: GautamMKGarg <GautamMKgarg@gmail.com>
2023-09-16 19:03:30 -05:00
bashonly
2da7bcca16 Revert 9d376c4dae
Authored by: bashonly
2023-09-16 18:57:14 -05:00
garret
eda0e415d2 [ie/bbc] Extract tracklist as chapters (#7788)
Authored by: garret1317
2023-09-16 22:47:49 +00:00
bashonly
20c3c9b433 [ie/reddit] Extract subtitles
Closes #7814
Authored by: bashonly
2023-09-16 16:23:54 -05:00
bashonly
635ae31f68 [ie/mediastream] Make embed extraction non-fatal
Authored by: bashonly
2023-09-16 16:22:21 -05:00
bashonly
5367585219 [ie/generic] Fix KVS thumbnail extraction
Closes #8045
Authored by: bashonly
2023-09-16 16:20:34 -05:00
fireattack
308936619c [ie/facebook] Improve format sorting (#8074)
Authored by: fireattack
2023-09-16 21:18:04 +00:00
c-basalt
5be7e97886 [ie/sohu] Fix extractor (#7628)
Closes #1667, Closes #7463
Authored by: c-basalt, bashonly
2023-09-16 21:13:04 +00:00
barsnick
b4c1c408c6 [ie/Bild.de] Extract HLS formats (#8032)
Closes #7951
Authored by: barsnick
2023-09-16 21:11:05 +00:00
Tristan Lee
23d829a342 [ie/Rumble] Fix embed extraction (#8035)
Authored by: trislee
2023-09-16 21:08:15 +00:00
04-pasha-04
0ce1f48bf1 [ie/funker530] Fix extraction (#8040)
Authored by: 04-pasha-04
2023-09-16 21:06:00 +00:00
Mozi
ecef42c3ad [ie/zaiko] Improve thumbnail extraction (#8054)
Authored by: pzhlkj6612
2023-09-16 21:04:10 +00:00
ApoorvShah111
a83da3717d [ie/nitter] Fix title extraction fallback (#8102)
Closes #7575
Authored by: ApoorvShah111
2023-09-16 21:01:26 +00:00
Aniruddh Joshi
9d376c4dae [ie/AmazonMiniTV] Fix extractor (#8103)
Closes #7817
Authored by: Aniruddh-J
2023-09-16 20:58:21 +00:00
c-basalt
5336bf57a7 [ie/bilibili] Extract format_id (#7555)
Authored by: c-basalt
2023-09-16 20:53:57 +00:00
makeworld
9bf14be775 [ie/cbc] Ignore any 426 from API (#7689)
Closes #7477
Authored by: makew0rld
2023-09-16 20:49:43 +00:00
c-basalt
cebbd33b1c [ie/twitcasting] Improve _VALID_URL (#8120)
Closes #7597
Authored by: c-basalt
2023-09-16 20:43:12 +00:00
bashonly
069cbece9d [ie/tiktok] Fix webpage extraction
Closes #8089
Authored by: bashonly
2023-09-16 13:28:14 -05:00
Simon Sawicki
f659e64394 [ie/bpb] Overhaul extractor (#8119)
Authored by: Grub4K
2023-09-16 17:50:06 +02:00
Jérôme Duval
7d3d658f4c [ie/TV5MondePlus] Fix extractor (#7952)
Closes #4978
Authored by: korli, dirkf
2023-09-16 14:24:11 +00:00
hatsomatt
98eac0e6ba [ie/videa] Fix extraction (#8003)
Closes #7427
Authored by: hatsomatt, aky-01

Co-authored-by: aky-01 <65510015+aky-01@users.noreply.github.com>
2023-09-16 14:02:37 +00:00
zhallgato
6e07e4bc7e [ie/mediaklikk] Fix extractor (#8086)
Fixes https://github.com/yt-dlp/yt-dlp/issues/8053

Authored by: bashonly, zhallgato
2023-09-16 10:12:18 +00:00
barsnick
aee6b9b88c [ie/Axs] Add extractor (#8094)
Authored by: barsnick
2023-09-16 10:04:08 +00:00
Kshitiz Gupta
578a82e497 [ie/banbye] Support video ids containing a hyphen (#8059)
Fixes https://github.com/yt-dlp/yt-dlp/issues/7895

Authored by: kshitiz305
2023-09-16 09:43:05 +00:00
SevenLives
497bbbbd73 [ie/abematv] Fix proxy handling (#8046)
Fixes https://github.com/yt-dlp/yt-dlp/issues/8036

Authored by: SevenLives
2023-09-16 09:37:04 +00:00
garret
7b71643cc9 [ie/mixcloud] Update API URL (#8114)
Closes #8104
Authored by: garret1317
2023-09-15 17:18:51 +00:00
bashonly
66cc64ff66 [ie/zoom] Extract duration
Closes #8080
Authored by: bashonly
2023-09-11 09:51:39 -05:00
bashonly
a006ce2b27 [ie/twitter] Fix retweet extraction and syndication API (#8016)
Authored by: bashonly
2023-09-09 15:14:49 +00:00
Szaby Grünwald
5d0395498d [ie/wdr] Fix extraction (#7979)
Closes #7461
Authored by: szabyg
2023-09-08 12:54:41 +00:00
ifan-t
fe371dcf0b [ie/S4C] Add series support and extract subs/thumbs (#7776)
Authored by: ifan-t
2023-09-08 12:25:43 +00:00
ringus1
d3d81cc98f [ie/facebook] Fix webpage extraction (#7890)
Closes #7901
Authored by: ringus1
2023-09-05 20:35:23 +00:00
bashonly
99c99c7185 [ie/gofile] Update token
Closes #7235
Authored by: bashonly
2023-09-05 14:58:02 -05:00
bashonly
c6ef553792 [ie/twitter:spaces] Pass referer header to downloader
Closes #8029
Authored by: bashonly
2023-09-05 01:54:14 -05:00
bashonly
69dbfe01c4 Bugfix for bae4834245
Authored by: bashonly
2023-09-04 11:18:59 -05:00
Mattias Wadman
2301b5c1b7 [ie/SVTPlay] Fix extraction (#7789)
Closes #5595
Authored by: wader, dirkf
2023-09-02 14:40:11 +00:00
Simon Sawicki
77bff23ee9 Bugfix for 59e92b1f18
Closes #8012

Authored by: Grub4K
2023-09-02 15:18:04 +02:00
Rajeshwaran
7237c8dca0 [ie/hotstar] Extract release_year (#7869)
Authored by: Rajeshwaran2001
2023-08-31 20:48:52 +00:00
bashonly
30ea88591b [ie/hotstar] Make metadata extraction non-fatal
Authored by: bashonly
2023-08-31 15:45:11 -05:00
Grabien
630a55df8d [ie/Mediaite] Fix extraction (#7923)
Authored by: Grabien
2023-08-30 23:49:42 +00:00
RedDeffender
bae4834245 [ie/NoodleMagazine] Fix extraction (#7830)
Closes #7917
Authored by: RedDeffender
2023-08-30 23:26:45 +00:00
bashonly
099fb1b35c Bugfix for b9f2bc2dbe
Authored by: bashonly
2023-08-29 08:06:02 -05:00
Omar Atef
4b3a6ef1b3 [ie/hungama] Overhaul extractors (#7757)
Closes #7754
Authored by: Yalab7, bashonly
2023-08-29 00:49:29 +00:00
Stavros Ntentos
665876034c [ie/antenna] Support antenna.gr (#7584)
Authored by: stdedos
2023-08-29 00:05:49 +00:00
Nathan Touzé
b9f2bc2dbe [ie/Dropbox] Fix extractor (#7926)
Closes #7005, Closes #7696
Authored by: nathantouze, bashonly, denhotte
2023-08-28 21:33:48 +00:00
sepro
c2d8ee0000 [ie/weverse] Support extraction without auth (#7924)
Authored by: seproDev
2023-08-28 21:09:14 +00:00
bashonly
56b3dc0335 [ie/StagePlus] Fix m3u8 extraction (#7929)
Closes #7928
Authored by: bashonly
2023-08-27 23:33:25 +00:00
bashonly
d7aee8e310 [ie/Mzaalo] Improve _VALID_URL
Authored by: bashonly
2023-08-27 18:08:36 -05:00
Simon Sawicki
59e92b1f18 [rh/urllib] Simplify gzip decoding (#7611)
Authored by: Grub4K
2023-08-27 00:13:30 +02:00
Simon Sawicki
1be0a96a4d [docs] Update collaborators
Authored by: Grub4K
2023-08-26 22:29:56 +02:00
coletdjnz
fcd6a76adc [tests] Add tests for socks proxies (#7908)
Authored by: coletdjnz
2023-08-25 07:10:44 +00:00
Davin Kevin
7cccab79e7 [ie/wat.tv] Fix extraction (#7898)
Closes #7303
Authored by: davinkevin
2023-08-20 17:25:49 +00:00
trainman261
ed71189781 [ie/CBCPlayerPlaylist] Add extractor (#7870)
Authored by: trainman261
2023-08-20 16:35:57 +00:00
bashonly
a0de8bb860 [ie/zee5] Update access token endpoint (#7914)
Closes #7911
Authored by: bashonly
2023-08-20 16:10:15 +00:00
garret
876b70c8ed [ie/tbsjp] Add episode, program, playlist extractors (#7765)
Authored by: garret1317
2023-08-14 18:29:04 +00:00
trainman261
339c339fec [ie/CBCPlayer] Extract HLS formats and subtitles (#7484)
Authored by: trainman261
2023-08-12 23:58:55 +00:00
bashonly
dab87ca236 [cookies] Containers JSON should be opened as utf-8 (#7800)
Closes #7797
Authored by: bashonly
2023-08-12 21:30:23 +00:00
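The gist of the fix, with a placeholder path: decode Firefox's containers.json explicitly as UTF-8 instead of relying on the locale-dependent default encoding:

```python
import json

with open('containers.json', encoding='utf-8') as f:  # placeholder path
    containers = json.load(f)
```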
coletdjnz
378ae9f9fb [ie/youtube] Fix consent cookie (#7774)
Fixes #7594

Authored by: coletdjnz
2023-08-12 04:26:08 +00:00
coletdjnz
db7b054a61 [networking] Add request handler preference framework (#7603)
Preference functions that take a request and a request handler instance can be registered to prioritize different request handlers per request.

Authored by: coletdjnz
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2023-08-04 22:17:48 +00:00
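A hedged sketch of such a preference function; the import path, decorator name and argument order here are assumptions based on this commit's description, not a documented API:

```python
from yt_dlp.networking.common import RequestHandler, register_preference

@register_preference(RequestHandler)  # narrow to specific handler classes if desired
def my_preference(rh, request):
    # a higher return value raises this handler's priority for this request
    return 100 if request.url.startswith('https://') else 0
```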
Franklin Lee
db97438940 [ie/PicartoVod] Fix extractor (#7727)
Closes #2926
Authored by: Frankgoji
2023-08-01 18:21:16 +00:00
ifan-t
b9de629d78 [ie/S4C] Add extractor (#7730)
Authored by: ifan-t
2023-08-01 18:01:59 +00:00
ringus1
a854fbec56 [ie/facebook] Add dash manifest URL (#7743)
Fixes #7742
Authored by: ringus1
2023-08-01 19:43:54 +05:30
ischmidt20
30b29f3715 [ie/fox] Support foxsports.com (#7724)
Authored by: ischmidt20
2023-08-01 12:54:04 +05:30
Steve
6d6081dda1 [extractor/pbs] Add extractor PBSKidsIE (#7602)
Authored by: snixon
Fixes #2440
2023-07-31 22:38:37 +05:30
bashonly
6014355c61 [ie/twitter] Add fallback, improve error handling (#7621)
Closes #7579, Closes #7625
Authored by: bashonly
2023-07-29 23:37:06 +00:00
pukkandan
f73c118035 FFmpegFixupM3u8PP may need to run with ffmpeg
Bug in 62b5c94cad
Closes #7725
2023-07-30 04:24:46 +05:30
coletdjnz
546b2c28a1 [ie/youtube] Fix player_params arg being converted to lowercase
Fix bug in ba06d77a31

Authored by: coletdjnz
2023-07-30 10:50:25 +12:00
pukkandan
6148833f5c [cleanup] Misc
2023-07-30 04:06:18 +05:30
pukkandan
8cb7fc44db Fix --check-formats
Bug in bc344cd456
2023-07-30 03:23:13 +05:30
pukkandan
3f7965105d [utils] HTTPHeaderDict: Handle byte values
2023-07-30 03:18:10 +05:30
pukkandan
de20687ee6 [test] Fix test_load_certifi
Closes #7688, #7675
2023-07-29 21:53:00 +05:30
bashonly
b09bd0c196 [ie/tiktok] Fix audio-only format extraction (#7712)
Closes #6608
Authored by: bashonly
2023-07-29 16:14:16 +00:00
bashonly
127a224606 [ie/LBRY] Fix original format extraction (#7711)
Authored by: bashonly
2023-07-29 16:01:43 +00:00
bashonly
86eeb044c2 [ie/hotstar] Support /clips/ URLs (#7710)
Closes #7699
Authored by: bashonly
2023-07-29 15:47:43 +00:00
bashonly
9a04113dfb [ie/Reddit] Fix thumbnail extraction
Authored by: bashonly
2023-07-29 10:30:32 -05:00
coletdjnz
ba06d77a31 [ie/youtube] Add player_params extractor arg (#7719)
Authored by: coletdjnz
2023-07-29 06:20:42 +00:00
coletdjnz
4bf912282a [networking] Remove dot segments during URL normalization (#7662)
This implements RFC3986 5.2.4 remove_dot_segments during the URL normalization process.

Closes #3355, #6526

Authored by: coletdjnz
2023-07-28 22:40:20 +00:00
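For reference, a self-contained sketch of the RFC 3986 section 5.2.4 algorithm, modeled on urllib3's well-known implementation rather than copied from yt-dlp's helper:

```python
def remove_dot_segments(path):
    """Resolve '.' and '..' segments in a URL path (RFC 3986, 5.2.4)."""
    output = []
    segments = path.split('/')
    for s in segments:
        if s == '.':
            continue
        elif s == '..':
            if output:
                output.pop()
        else:
            output.append(s)
    if not segments[0] and (not output or output[0]):
        output.insert(0, '')  # keep the path absolute
    if segments[-1] in ('.', '..'):
        output.append('')     # keep a trailing slash
    return '/'.join(output)

assert remove_dot_segments('/a/b/c/./../../g') == '/a/g'
```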
nnoboa
a15fcd299e [ie/Wimbledon] Add extractor (#7551)
Closes #7462
Authored by: nnoboa
2023-07-28 18:52:07 +00:00
Amirreza Aflakparast
c03a58ec99 [ie/MotorTrendOnDemand] Update _VALID_URL (#7683)
Closes #7680
Authored by: AmirAflak
2023-07-28 18:51:16 +00:00
coletdjnz
bbeacff7fc [networking] Ignore invalid proxies in env (#7704)
Authored by: coletdjnz
2023-07-27 20:26:02 +05:30
bashonly
dae349da97 [ie/WrestleUniversePPV] Fix HLS AES key extraction
Fix bug in ef8fb7f029

Closes #7708
Authored by: bashonly
2023-07-27 09:53:22 -05:00
coletdjnz
95abea9a03 [test] Fix httplib_validation_errors test for old Python versions (#7677)
Fixes https://github.com/yt-dlp/yt-dlp/issues/7674

Authored by: coletdjnz
2023-07-24 19:18:52 +00:00
bashonly
550e65410a [ie] Extract subtitles from SMIL manifests (#7667)
Authored by: bashonly, pukkandan
2023-07-24 00:09:52 +00:00
bashonly
39837ae319 [ie/triller] Fix unlisted video extraction (#7670)
Authored by: bashonly
2023-07-23 23:29:45 +00:00
coletdjnz
86aea0d3a2 [networking] Add strict Request extension checking (#7604)
Authored by: coletdjnz
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2023-07-23 05:17:15 +00:00
bashonly
11de6fec9c [ie/PatreonCampaign] Fix extraction (#7664)
Authored by: bashonly
2023-07-22 13:10:25 +00:00
pukkandan
a250b24733 [compat] Ensure submodules are imported correctly
Closes #7663
2023-07-22 18:10:35 +05:30
pukkandan
25b6e8f946 Fix e0c4db04dc for pypy
2023-07-22 10:17:36 +05:30
pukkandan
e705738338 [ie/unsupported] List more sites with DRM
Closes #7323, #3072, #5740, #5767, #6125
2023-07-22 09:56:56 +05:30
pukkandan
62b5c94cad [cleanup] Misc fixes
Closes #7528
2023-07-22 09:09:52 +05:30
pukkandan
e0c4db04dc [compat] Add types.NoneType
2023-07-22 09:00:45 +05:30
pukkandan
81b4712bca [extractor] Fix --load-pages
2023-07-22 09:00:44 +05:30
pukkandan
994f7ef8e6 [ie/generic] Fix generic title for embeds
Closes #7067
2023-07-22 08:57:44 +05:30
pukkandan
a264433c9f [outtmpl] Fix replacement for playlist_index
2023-07-22 08:57:43 +05:30
pukkandan
9f66247289 [ie/abematv] Temporary fix for protocol handler
Closes #7622
2023-07-22 08:57:42 +05:30
bashonly
e57eb98222 [fd/external] Fix ffmpeg input from stdin (#7655)
Bugfix for 1ceb657bdd

Authored by: bashonly
2023-07-22 02:32:49 +00:00
Simon Sawicki
9b16762f48 [ie/crunchyroll] Remove initial state extraction (#7632)
Authored by: Grub4K
2023-07-20 22:09:52 +02:00
bashonly
65cfa2b057 [ie/MuseAI] Add extractor (#7614)
Closes #7543
Authored by: bashonly
2023-07-20 14:15:21 +00:00
bashonly
f4ea501551 [ie/MagellanTV] Add extractor (#7616)
Closes #7529
Authored by: bashonly
2023-07-20 14:02:50 +00:00
bashonly
af86873218 [utils] Improve parse_duration
Authored by: bashonly
2023-07-20 08:40:31 -05:00
bashonly
75dc8e673b [networking] Fix --legacy-server-connect (#7645)
Bugfix for 227bf1a33b

Authored by: bashonly
2023-07-20 13:31:17 +00:00
bashonly
71baa490eb [networking] Fix POST requests with zero-length payloads (#7648)
Bugfix for 227bf1a33b

Authored by: bashonly
2023-07-20 13:23:30 +00:00
bashonly
613dbce177 [ie/twitter:spaces] Fix format protocol (#7550)
Closes #7536
Authored by: bashonly
2023-07-15 21:10:12 +00:00
Văn Anh
bb5d84c9d2 [ie/facebook:reel] Fix extraction (#7564)
Closes #7469
Authored by: demon071, bashonly
2023-07-15 21:03:23 +00:00
zhong-yiyu
1d3d579c21 [ie/pornhub] Update access cookies for UK (#7591)
Closes #7590
Authored by: zhong-yiyu
2023-07-15 20:54:19 +00:00
bashonly
42ded0a429 [fd/external] Fixes to cookie handling
- Fix bug in `axel` Cookie header arg
- Pass cookies to `curl` as strings
- Write session cookies for `aria2c` and `wget`

Closes #7539
Authored by: bashonly
2023-07-15 15:25:51 -05:00
bashonly
6c5211cebe [core] Fix HTTP headers and cookie handling
- Remove `Cookie` header from `http_headers` immediately after loading into cookiejar
- Restore compat for `--load-info-json` cookies
- Add more tests
- Fix improper passing of Cookie header by `MailRu` extractor

Closes #7558
Authored by: bashonly, pukkandan
2023-07-15 15:25:45 -05:00
Aaruni Kaushik
2b029ca0a9 [cleanup] Add color to download-archive message (#5138)
Authored by: aaruni96, Grub4K, pukkandan
Closes #4913
2023-07-16 00:45:08 +05:30
pukkandan
131d132da5 [build] Make sure deprecated modules are added
2023-07-15 16:47:55 +05:30
coletdjnz
3d2623a898 [compat, networking] Deprecate old functions (#2861)
Authored by: coletdjnz, pukkandan
2023-07-15 16:18:35 +05:30
coletdjnz
227bf1a33b [networking] Rewrite architecture (#2861)
New networking interface consists of a `RequestDirector` that directs
each `Request` to appropriate `RequestHandler` and returns the
`Response` or raises `RequestError`. The handlers define adapters to
transform its internal Request/Response/Errors to our interfaces.

User-facing changes:
- Fix issues with per request proxies on redirects for urllib
- Support for `ALL_PROXY` environment variable for proxy setting
- Support for `socks5h` proxy
   - Closes https://github.com/yt-dlp/yt-dlp/issues/6325, https://github.com/ytdl-org/youtube-dl/issues/22618, https://github.com/ytdl-org/youtube-dl/pull/28093
- Raise error when using `https` proxy instead of silently converting it to `http`

Authored by: coletdjnz
2023-07-15 16:18:35 +05:30
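From the calling side, a minimal sketch of the flow described above, assuming the public `Request` class and the `YoutubeDL.urlopen` wiring (the URL is a placeholder):

```python
import yt_dlp
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import RequestError

with yt_dlp.YoutubeDL() as ydl:
    try:
        response = ydl.urlopen(Request('https://example.com'))  # director picks a handler
        print(response.status)
    except RequestError as err:
        print('request failed:', err)
```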
pukkandan
c365dba843 [networking] Add module (#2861)
No actual changes - code is only moved around
2023-07-15 16:18:34 +05:30
pukkandan
1b392f905d [utils] Add temporary shim for logging
Related: #5680, #7517
2023-07-15 16:18:34 +05:30
coletdjnz
1ba6fe9db5 [ie/youtube:tab] Detect looping feeds (#6621)
Closes https://github.com/yt-dlp/yt-dlp/issues/5555

Note: the first page may still be repeated; however, this is better than nothing.

Authored by: coletdjnz
2023-07-15 03:20:24 +00:00
Finn R. Gärtner
1bcb9fe871 [ie/piapro] Support /content URL (#7592)
Authored by: FinnRG
2023-07-14 23:39:02 +05:30
Neurognostic
8a4cd12c8f [pp/EmbedThumbnail] Support m4v (#7583)
Authored by: Neurognostic
2023-07-14 02:09:21 +05:30
Aleri Kaisattera
2cfe221fbb [ie/streamanity] Remove (#7571)
Service is dead
Authored by: alerikaisattera
2023-07-13 19:47:05 +05:30
Mahmoud Abdel-Fattah
2af4eeb772 [utils] clean_podcast_url: Handle more trackers (#7556)
Authored by: mabdelfattah, bashonly
Closes #7544
2023-07-11 06:30:38 +05:30
Zprokkel
325191d0c9 [ie/vrt] Update token signing key (#7519)
Authored by: Zprokkel
2023-07-10 13:15:47 +00:00
GD-Slime
bdd0b75e3f [ie/BiliBiliBangumi] Fix extractors (#7337)
- Overhaul BiliBiliBangumi extractor for the site's new API
- Add BiliBiliBangumiSeason extractor
- Refactor BiliBiliBangumiMedia extractor

Closes #6701, Closes #7400
Authored by: GD-Slime
2023-07-08 22:26:03 +00:00
bashonly
92315c0377 [extractor/twitter] Fix GraphQL and legacy API (#7516)
Authored by: bashonly
2023-07-06 19:39:51 +00:00
pukkandan
b03fa78345 Revert 49296437a8
2023-07-06 14:19:32 -05:00
github-actions
cc0619f62d Release 2023.07.06
Created by: pukkandan

:ci skip all :ci run dl
2023-07-06 18:57:59 +00:00
pukkandan
b532a34810 [docs] Minor fixes
Closes #7515
2023-07-06 23:32:19 +05:30
Simon Sawicki
3121512228 [core] Change how Cookie headers are handled
Cookies are now saved and loaded under `cookies` key in the info dict
instead of `http_headers.Cookie`. Cookies passed in headers are
auto-scoped to the input URLs with a warning.

Ref: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj

Authored by: Grub4K
2023-07-06 23:14:39 +05:30
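In API terms, the pattern this change steers users toward: supply cookies through a cookie jar rather than a raw `Cookie` entry in `http_headers` (path and URL are placeholders):

```python
import yt_dlp

opts = {'cookiefile': 'cookies.txt'}  # Netscape-format cookie jar
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['https://example.com/video'])
```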
coletdjnz
f8b4bcc0a7 [core] Prevent Cookie leaks on HTTP redirect
Ref: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj

Authored by: coletdjnz
2023-07-06 23:14:39 +05:30
bashonly
1ceb657bdd [fd/external] Scope cookies
- ffmpeg: Calculate cookies from cookiejar and pass with `-cookies` arg instead of `-headers`
- aria2c, curl, wget: Write cookiejar to file and use external FD built-in cookiejar support
- httpie: Calculate cookies from cookiejar instead of `http_headers`
- axel: Calculate cookies from cookiejar and disable http redirection if cookies are passed
    - May break redirects, but axel simply doesn't have proper cookie support

Ref: https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj

Authored by: bashonly, coletdjnz
2023-07-06 23:14:38 +05:30
pukkandan
ad8902f616 [ie/vidlii] Handle relative URLs
Closes #7480
2023-07-06 21:40:09 +05:30
pukkandan
94ed638a43 [ie/youtube] Avoid false DRM detection (#7396)
Some master manifests contain a mix of DRM and non-DRM formats
2023-07-06 21:40:07 +05:30
pukkandan
bc344cd456 [core] Allow extractors to mark formats as potentially DRM (#7396)
This is useful for HLS where detecting whether the format is
actually DRM requires the child manifest to be downloaded.

This makes the error message when using `--test` inconsistent,
but that doesn't really matter.
2023-07-06 21:40:01 +05:30
pukkandan
906c0bdcd8 [formats] Fix best fallback for storyboards
Partial fix for #7478
2023-07-06 21:39:58 +05:30
pukkandan
337734d4a8 [cleanup] Misc
2023-07-06 21:39:55 +05:30
pukkandan
fa44802809 [devscripts/make_changelog] Skip reverted commits
2023-07-06 20:22:04 +05:30
pukkandan
47bcd43724 [outtmpl] Pad playlist_index etc even when with internal formatting
Closes #7501
2023-07-06 20:22:03 +05:30
pukkandan
662ef1e910 [downloader/http] Avoid infinite loop when no data is received
Closes #7504
2023-07-06 20:22:00 +05:30
Jorge
6355b5f1e1 [misc] Add CodeQL workflow (#7497)
2023-07-06 20:21:46 +05:30
coletdjnz
90db9a3c00 [extractor/youtube:stories] Remove (#7459)
YouTube killed them

https://web.archive.org/web/20230630153050/https://support.google.com/youtube/thread/217640760
2023-07-06 19:02:41 +05:30
bashonly
49296437a8 [extractor/twitter] Fix unauthenticated extraction (#7476)
Closes #7473
Authored by: bashonly
2023-07-05 16:27:36 +00:00
bashonly
1cffd621cb [extractor/twitter:spaces] Fix extraction (#7512)
Closes #7455
Authored by: bashonly
2023-07-05 03:05:52 +00:00
RfadnjdExt
3b7f5300c5 [extractor/googledrive] Fix source format extraction (#7395)
Closes #7344
Authored by: RfadnjdExt
2023-07-05 02:17:13 +00:00
coletdjnz
4dc4d8473c [extractor/youtube] Ignore incomplete data for comment threads by default (#7475)
For both `--ignore-errors` and `--ignore-errors only_download`. Pass `--no-ignore-errors` to not ignore.

Closes https://github.com/yt-dlp/yt-dlp/issues/7474

Authored by: coletdjnz
2023-07-03 10:47:10 +00:00
c-basalt
8776349ef6 [extractor/vk] VKPlay, VKPlayLive: Add extractors (#7358)
Closes #7107
Authored by: c-basalt
2023-07-02 19:31:00 +00:00
urectanc
af1fd12f67 [extractor/stacommu] Add extractors (#7432)
Authored by: urectanc
2023-06-30 18:27:07 +00:00
coletdjnz
fcbc9ed760 [extractor/youtube:tab] Support shorts-only playlists (#7425)
Fixes https://github.com/yt-dlp/yt-dlp/issues/7424

Authored by: coletdjnz
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2023-06-29 23:26:27 +00:00
bashonly
a2be9781fb [extractor/Douyin] Fix extraction from webpage
Closes #7431
Authored by: bashonly
2023-06-27 16:50:02 -05:00
Xiao Han
8f05fbae2a [extractor/abc] Fix extraction (#7434)
Closes #6433
Authored by: meliber
2023-06-27 21:16:57 +00:00
Aman Salwan
5b4b92769a [extractor/crunchyroll:music] Fix _VALID_URL (#7439)
Closes #7419
Authored by: AmanSal1, rdamas

Co-authored-by: Robert Damas <robert.damas@byom.de>
2023-06-27 20:28:23 +00:00
pukkandan
91302ed349 [utils] clean_podcast_url: Handle protocol in redirect URL
Closes #7430
2023-06-26 16:19:49 +05:30
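
Hedged example of the utility in question (the tracking-prefix URL is illustrative, not a test case from the repo):

```python
from yt_dlp.utils import clean_podcast_url

url = 'https://chtbl.com/track/ABC123/https://example.com/ep.mp3'
print(clean_podcast_url(url))  # expected: https://example.com/ep.mp3
```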
pukkandan
f393bbe724 [extractor/sbs] Python 3.7 compat
Closes #7410
2023-06-26 16:14:20 +05:30
pukkandan
8a8af356e3 [downloader/aria2c] Add --no-conf
Closes #7404
2023-06-26 16:13:31 +05:30
pukkandan
d949c10c45 [extractor/youtube] Process post_live over 2 hours 2023-06-26 07:25:52 +05:30
bashonly
ef8509c300 [extractor/kick] Fix _VALID_URL
Closes #7384
Authored by: bashonly
2023-06-25 17:04:42 -05:00
nnoboa
5e16cf92eb [extractor/AdultSwim] Extract subtitles from m3u8 (#7421)
Authored by: nnoboa
Closes #6191
2023-06-26 01:52:38 +05:30
bashonly
f0a1ff1181 [extractor/qdance] Add extractor (#7420)
Closes #7385
Authored by: bashonly
2023-06-25 18:13:28 +00:00
pukkandan
58786a10f2 [extractor/youtube] Add extractor-arg `formats`
Closes #7417
2023-06-25 20:14:37 +05:30
pukkandan
e59e20744e Bugfix for b4e0d75848 2023-06-22 23:45:53 +05:30
Simon
89bed01374 [extractor/youtube] Fix comments' is_favorited (#7390)
Authored by: bbilly1
Closes #7389
2023-06-22 23:38:42 +05:30
1241 changed files with 50485 additions and 34063 deletions

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting that yt-dlp is broken on a **supported** site
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -61,19 +61,18 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell
validations:

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -73,19 +73,18 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell
validations:

@@ -18,7 +18,7 @@ body:
options:
- label: I'm requesting a site-specific feature
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -69,19 +69,18 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell
validations:

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -54,19 +54,18 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell
validations:

@@ -20,7 +20,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
required: true
@@ -50,18 +50,17 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell

@@ -26,7 +26,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
required: true
@@ -56,18 +56,17 @@ body:
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.06.22 [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.06.22, Current version: 2023.06.22
yt-dlp is up to date (2023.06.22)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell

@@ -12,7 +12,7 @@ body:
options:
- label: I'm reporting that yt-dlp is broken on a **supported** site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true

@@ -12,7 +12,7 @@ body:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true

@@ -12,7 +12,7 @@ body:
options:
- label: I'm requesting a site-specific feature
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true

@@ -12,7 +12,7 @@ body:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true

@@ -14,7 +14,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
required: true

@@ -20,7 +20,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
required: true

@@ -28,7 +28,6 @@ # PLEASE FOLLOW THE GUIDE BELOW
### Before submitting a *pull request* make sure you have:
- [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
@@ -40,10 +39,4 @@ ### What is the purpose of your *pull request*?
- [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
<!-- Do NOT edit/remove anything below this! -->
</details><details><summary>Copilot Summary</summary>
copilot:all
</details>

.github/banner.svg vendored

File diff suppressed because one or more lines are too long (size: 24 KiB before, 15 KiB after)

@@ -12,6 +12,9 @@ on:
unix:
default: true
type: boolean
linux_static:
default: true
type: boolean
linux_arm:
default: true
type: boolean
@@ -27,9 +30,10 @@ on:
windows32:
default: true
type: boolean
meta_files:
default: true
type: boolean
origin:
required: false
default: ''
type: string
secrets:
GPG_SIGNING_KEY:
required: false
@@ -37,16 +41,22 @@ on:
workflow_dispatch:
inputs:
version:
description: Version tag (YYYY.MM.DD[.REV])
description: |
VERSION: yyyy.mm.dd[.rev] or rev
required: true
type: string
channel:
description: Update channel (stable/nightly/...)
description: |
SOURCE of this build's updates: stable/nightly/master/<repo>
required: true
default: stable
type: string
unix:
description: yt-dlp, yt-dlp.tar.gz, yt-dlp_linux, yt-dlp_linux.zip
description: yt-dlp, yt-dlp.tar.gz
default: true
type: boolean
linux_static:
description: yt-dlp_linux
default: true
type: boolean
linux_arm:
@@ -69,87 +79,103 @@ on:
description: yt-dlp_x86.exe
default: true
type: boolean
meta_files:
description: SHA2-256SUMS, SHA2-512SUMS, _update_spec
default: true
type: boolean
origin:
description: Origin
required: false
default: 'current repo'
type: choice
options:
- 'current repo'
permissions:
contents: read
jobs:
process:
runs-on: ubuntu-latest
outputs:
origin: ${{ steps.process_origin.outputs.origin }}
steps:
- name: Process origin
id: process_origin
run: |
echo "origin=${{ inputs.origin == 'current repo' && github.repository || inputs.origin }}" | tee "$GITHUB_OUTPUT"
unix:
needs: process
if: inputs.unix
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Needed for changelog
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- uses: conda-incubator/setup-miniconda@v2
with:
miniforge-variant: Mambaforge
use-mamba: true
channels: conda-forge
auto-update-conda: true
activate-environment: ""
auto-activate-base: false
- name: Install Requirements
run: |
sudo apt-get -y install zip pandoc man sed
python -m pip install -U pip setuptools wheel
python -m pip install -U Pyinstaller -r requirements.txt
reqs=$(mktemp)
cat > $reqs << EOF
python=3.10.*
pyinstaller
cffi
brotli-python
EOF
sed '/^brotli.*/d' requirements.txt >> $reqs
mamba create -n build --file $reqs
sudo apt -y install zip pandoc man sed
- name: Prepare
run: |
python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python devscripts/update_changelog.py -vv
python devscripts/make_lazy_extractors.py
- name: Build Unix platform-independent binary
run: |
make all tar
- name: Build Unix standalone binary
shell: bash -l {0}
run: |
unset LD_LIBRARY_PATH # Harmful; set by setup-python
conda activate build
python pyinst.py --onedir
(cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .)
python pyinst.py
mv ./dist/yt-dlp_linux ./yt-dlp_linux
mv ./dist/yt-dlp_linux.zip ./yt-dlp_linux.zip
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
binaries=("yt-dlp" "yt-dlp_linux")
for binary in "${binaries[@]}"; do
chmod +x ./${binary}
cp ./${binary} ./${binary}_downgraded
version="$(./${binary} --version)"
./${binary}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./${binary}_downgraded --version)"
chmod +x ./yt-dlp
cp ./yt-dlp ./yt-dlp_downgraded
version="$(./yt-dlp --version)"
./yt-dlp_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./yt-dlp_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
done
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
yt-dlp
yt-dlp.tar.gz
yt-dlp_linux
yt-dlp_linux.zip
compression-level: 0
linux_static:
needs: process
if: inputs.linux_static
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build static executable
env:
channel: ${{ inputs.channel }}
origin: ${{ needs.process.outputs.origin }}
version: ${{ inputs.version }}
run: |
mkdir ~/build
cd bundle/docker
docker compose up --build static
sudo chown "${USER}:docker" ~/build/yt-dlp_linux
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
chmod +x ~/build/yt-dlp_linux
cp ~/build/yt-dlp_linux ~/build/yt-dlp_linux_downgraded
version="$(~/build/yt-dlp_linux --version)"
~/build/yt-dlp_linux_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(~/build/yt-dlp_linux_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
~/build/yt-dlp_linux
compression-level: 0
linux_arm:
needs: process
if: inputs.linux_arm
permissions:
contents: read
@@ -162,7 +188,7 @@ jobs:
- aarch64
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
path: ./repo
- name: Virtualized Install, Prepare & Build
@@ -177,17 +203,18 @@ jobs:
dockerRunArgs: --volume "${PWD}/repo:/repo"
install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
apt update
apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
apt -y install zlib1g-dev libffi-dev python3.8 python3.8-dev python3.8-distutils python3-pip
python3.8 -m pip install -U pip setuptools wheel
# Cannot access requirements.txt from the repo directory at this stage
python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
# Cannot access any files from the repo directory at this stage
python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi secretstorage cffi
run: |
cd repo
python3.8 -m pip install -U Pyinstaller -r requirements.txt # Cached version may be out of date
python3.8 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3.8 devscripts/install_deps.py -o --include build
python3.8 devscripts/install_deps.py --include pyinstaller --include secretstorage # Cached version may be out of date
python3.8 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python3.8 devscripts/make_lazy_extractors.py
python3.8 pyinst.py
python3.8 -m bundle.pyinstaller
if ${{ vars.UPDATE_TO_VERIFICATION && 'true' || 'false' }}; then
arch="${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}"
@@ -200,34 +227,84 @@ jobs:
fi
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-linux_${{ matrix.architecture }}
path: | # run-on-arch-action designates armv7l as armv7
repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}
compression-level: 0
macos:
needs: process
if: inputs.macos
runs-on: macos-11
permissions:
contents: read
actions: write # For cleaning up cache
runs-on: macos-12
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
# NB: Building universal2 does not work with python from actions/setup-python
- name: Restore cached requirements
id: restore-cache
uses: actions/cache/restore@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MINS: 1
with:
path: |
~/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}
- name: Install Requirements
run: |
brew install coreutils
python3 -m pip install -U --user pip setuptools wheel
python3 -m venv ~/yt-dlp-build-venv
source ~/yt-dlp-build-venv/bin/activate
python3 devscripts/install_deps.py -o --include build
python3 devscripts/install_deps.py --print --include pyinstaller > requirements.txt
# We need to ignore wheels otherwise we break universal2 builds
python3 -m pip install -U --user --no-binary :all: Pyinstaller -r requirements.txt
python3 -m pip install -U --no-binary :all: -r requirements.txt
# We need to fuse our own universal2 wheels for curl_cffi
python3 -m pip install -U delocate
mkdir curl_cffi_whls curl_cffi_universal2
python3 devscripts/install_deps.py --print -o --include curl-cffi > requirements.txt
for platform in "macosx_11_0_arm64" "macosx_11_0_x86_64"; do
python3 -m pip download \
--only-binary=:all: \
--platform "${platform}" \
-d curl_cffi_whls \
-r requirements.txt
done
( # Overwrite x86_64-only libs with fat/universal2 libs or else Pyinstaller will do the opposite
# See https://github.com/yt-dlp/yt-dlp/pull/10069
cd curl_cffi_whls
mkdir -p curl_cffi/.dylibs
python_libdir=$(python3 -c 'import sys; from pathlib import Path; print(Path(sys.path[1]).parent)')
for dylib in lib{ssl,crypto}.3.dylib; do
cp "${python_libdir}/${dylib}" "curl_cffi/.dylibs/${dylib}"
for wheel in curl_cffi*macos*x86_64.whl; do
zip "${wheel}" "curl_cffi/.dylibs/${dylib}"
done
done
)
python3 -m delocate.cmd.delocate_fuse curl_cffi_whls/curl_cffi*.whl -w curl_cffi_universal2
python3 -m delocate.cmd.delocate_fuse curl_cffi_whls/cffi*.whl -w curl_cffi_universal2
for wheel in curl_cffi_universal2/*cffi*.whl; do
mv -n -- "${wheel}" "${wheel/x86_64/universal2}"
done
python3 -m pip install --force-reinstall -U curl_cffi_universal2/*cffi*.whl
- name: Prepare
run: |
python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python3 devscripts/make_lazy_extractors.py
- name: Build
run: |
python3 pyinst.py --target-architecture universal2 --onedir
source ~/yt-dlp-build-venv/bin/activate
python3 -m bundle.pyinstaller --target-architecture universal2 --onedir
(cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
python3 pyinst.py --target-architecture universal2
python3 -m bundle.pyinstaller --target-architecture universal2
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
@@ -240,18 +317,39 @@ jobs:
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp_macos
dist/yt-dlp_macos.zip
compression-level: 0
- name: Cleanup cache
if: steps.restore-cache.outputs.cache-hit == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
cache_key: cache-reqs-${{ github.job }}
repository: ${{ github.repository }}
branch: ${{ github.ref }}
run: |
gh extension install actions/gh-actions-cache
gh actions-cache delete "${cache_key}" -R "${repository}" -B "${branch}" --confirm
- name: Cache requirements
uses: actions/cache/save@v4
with:
path: |
~/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}
macos_legacy:
needs: process
if: inputs.macos_legacy
runs-on: macos-latest
runs-on: macos-12
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Install Python
# We need the official Python, because the GA ones only support newer macOS versions
env:
@@ -261,22 +359,22 @@ jobs:
# Hack to get the latest patch version. Uncomment if needed
#brew install python@3.10
#export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
curl https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg -o "python.pkg"
curl "https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg" -o "python.pkg"
sudo installer -pkg python.pkg -target /
python3 --version
- name: Install Requirements
run: |
brew install coreutils
python3 -m pip install -U --user pip setuptools wheel
python3 -m pip install -U --user Pyinstaller -r requirements.txt
python3 devscripts/install_deps.py --user -o --include build
python3 devscripts/install_deps.py --user --include pyinstaller
- name: Prepare
run: |
python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python3 devscripts/make_lazy_extractors.py
- name: Build
run: |
python3 pyinst.py
python3 -m bundle.pyinstaller
mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy
- name: Verify --update-to
@@ -290,37 +388,49 @@ jobs:
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp_macos_legacy
compression-level: 0
windows:
needs: process
if: inputs.windows
runs-on: windows-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with: # 3.8 is used for Win7 support
python-version: "3.8"
- name: Install Requirements
run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
python -m pip install -U pip setuptools wheel py2exe
pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt
python devscripts/install_deps.py -o --include build
python devscripts/install_deps.py --include curl-cffi
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.7.0-py3-none-any.whl"
- name: Prepare
run: |
python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python devscripts/make_lazy_extractors.py
- name: Build
run: |
python setup.py py2exe
Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe
python pyinst.py
python pyinst.py --onedir
python -m bundle.pyinstaller
python -m bundle.pyinstaller --onedir
Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_real.exe
Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
- name: Install Requirements (py2exe)
run: |
python devscripts/install_deps.py --include py2exe
- name: Build (py2exe)
run: |
python -m bundle.py2exe
Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe
Move-Item ./dist/yt-dlp_real.exe ./dist/yt-dlp.exe
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
@@ -335,35 +445,39 @@ jobs:
}
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp.exe
dist/yt-dlp_min.exe
dist/yt-dlp_win.zip
compression-level: 0
windows32:
needs: process
if: inputs.windows32
runs-on: windows-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with: # 3.7 is used for Vista support. See https://github.com/yt-dlp/yt-dlp/issues/390
python-version: "3.7"
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.8"
architecture: "x86"
- name: Install Requirements
run: |
python -m pip install -U pip setuptools wheel
pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt
python devscripts/install_deps.py -o --include build
python devscripts/install_deps.py
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.7.0-py3-none-any.whl"
- name: Prepare
run: |
python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python devscripts/make_lazy_extractors.py
- name: Build
run: |
python pyinst.py
python -m bundle.pyinstaller
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
@@ -379,15 +493,19 @@ jobs:
}
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp_x86.exe
compression-level: 0
meta_files:
if: inputs.meta_files && always() && !cancelled()
if: always() && !cancelled()
needs:
- process
- unix
- linux_static
- linux_arm
- macos
- macos_legacy
@@ -395,19 +513,37 @@ jobs:
- windows32
runs-on: ubuntu-latest
steps:
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
path: artifact
pattern: build-bin-*
merge-multiple: true
- name: Make SHA2-SUMS files
run: |
cd ./artifact/
sha256sum * > ../SHA2-256SUMS
sha512sum * > ../SHA2-512SUMS
# make sure SHA sums are also printed to stdout
sha256sum -- * | tee ../SHA2-256SUMS
sha512sum -- * | tee ../SHA2-512SUMS
# also print as permanent annotations to the summary page
while read -r shasum; do
echo "::notice title=${shasum##* }::sha256: ${shasum% *}"
done < ../SHA2-256SUMS
- name: Make Update spec
run: |
cat >> _update_spec << EOF
# This file is used for regulating self-update
lock 2022.08.18.36 .+ Python 3.6
lock 2022.08.18.36 .+ Python 3\.6
lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
EOF
- name: Sign checksum files
@@ -421,8 +557,11 @@ jobs:
done
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-${{ github.job }}
path: |
SHA*SUMS*
_update_spec
SHA*SUMS*
compression-level: 0
overwrite: true
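
The `lock`/`lockV2` lines written into `_update_spec` above are regex rules pinning certain build variants to a maximum version. A hedged sketch of how such a rule could be evaluated (field layout inferred from the spec lines; the real updater logic may differ):

```python
import re

def max_allowed_version(spec_lines, repo, variant_string):
    for line in spec_lines:
        if line.startswith('lockV2 '):
            _, rule_repo, version, pattern = line.split(maxsplit=3)
            if rule_repo != repo:
                continue
        elif line.startswith('lock '):
            _, version, pattern = line.split(maxsplit=2)
        else:
            continue  # comments and unrelated lines
        if re.fullmatch(pattern, variant_string):
            return version
    return None

spec = ['lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)']
print(max_allowed_version(spec, 'yt-dlp/yt-dlp', 'win_x86_exe Python 3.8 Windows-Vista'))
# -> '2023.11.16'
```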

.github/workflows/codeql.yml vendored (new file, 65 lines)

@@ -0,0 +1,65 @@
name: "CodeQL"
on:
push:
branches: [ 'master', 'gh-pages', 'release' ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ 'master' ]
schedule:
- cron: '59 11 * * 5'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Use only 'java' to analyze code written in Java, Kotlin or both
# Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

@@ -1,8 +1,32 @@
name: Core Tests
on: [push, pull_request]
on:
push:
paths:
- .github/**
- devscripts/**
- test/**
- yt_dlp/**.py
- '!yt_dlp/extractor/*.py'
- yt_dlp/extractor/__init__.py
- yt_dlp/extractor/common.py
- yt_dlp/extractor/extractors.py
pull_request:
paths:
- .github/**
- devscripts/**
- test/**
- yt_dlp/**.py
- '!yt_dlp/extractor/*.py'
- yt_dlp/extractor/__init__.py
- yt_dlp/extractor/common.py
- yt_dlp/extractor/extractors.py
permissions:
contents: read
concurrency:
group: core-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
tests:
name: Core Tests
@@ -12,27 +36,26 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest]
# CPython 3.11 is in quick-test
python-version: ['3.8', '3.9', '3.10', pypy-3.7, pypy-3.8]
run-tests-ext: [sh]
# CPython 3.8 is in quick-test
python-version: ['3.9', '3.10', '3.11', '3.12', pypy-3.8, pypy-3.10]
include:
# atleast one of each CPython/PyPy tests must be in windows
- os: windows-latest
python-version: '3.7'
run-tests-ext: bat
python-version: '3.8'
- os: windows-latest
python-version: '3.12'
- os: windows-latest
python-version: pypy-3.9
run-tests-ext: bat
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install pytest
run: pip install pytest
- name: Install test requirements
run: python3 ./devscripts/install_deps.py --include test --include curl-cffi
- name: Run tests
continue-on-error: False
run: |
python3 -m yt_dlp -v || true # Print debug head
./devscripts/run_tests.${{ matrix.run-tests-ext }} core
python3 ./devscripts/run_tests.py core

@@ -9,16 +9,16 @@ jobs:
if: "contains(github.event.head_commit.message, 'ci run dl')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: 3.9
- name: Install test requirements
run: pip install pytest
run: python3 ./devscripts/install_deps.py --include dev
- name: Run tests
continue-on-error: true
run: ./devscripts/run_tests.sh download
run: python3 ./devscripts/run_tests.py download
full:
name: Full Download Tests
@@ -28,24 +28,21 @@ jobs:
fail-fast: true
matrix:
os: [ubuntu-latest]
python-version: ['3.7', '3.10', 3.11-dev, pypy-3.7, pypy-3.8]
run-tests-ext: [sh]
python-version: ['3.10', '3.11', '3.12', pypy-3.8, pypy-3.10]
include:
# atleast one of each CPython/PyPy tests must be in windows
- os: windows-latest
python-version: '3.8'
run-tests-ext: bat
- os: windows-latest
python-version: pypy-3.9
run-tests-ext: bat
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install pytest
run: pip install pytest
- name: Install test requirements
run: python3 ./devscripts/install_deps.py --include dev
- name: Run tests
continue-on-error: true
run: ./devscripts/run_tests.${{ matrix.run-tests-ext }} download
run: python3 ./devscripts/run_tests.py download

@@ -1,97 +0,0 @@
name: Publish
on:
workflow_call:
inputs:
channel:
default: stable
required: true
type: string
version:
required: true
type: string
target_commitish:
required: true
type: string
prerelease:
default: false
required: true
type: boolean
secrets:
ARCHIVE_REPO_TOKEN:
required: false
permissions:
contents: write
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/download-artifact@v3
- uses: actions/setup-python@v4
with:
python-version: "3.10"
- name: Generate release notes
run: |
printf '%s' \
'[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp#installation "Installation instructions") ' \
'[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
'(https://github.com/yt-dlp/yt-dlp/tree/2023.03.04#readme "Documentation") ' \
'[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
'[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
'(https://discord.gg/H5MNcFW63r "Discord") ' \
${{ inputs.channel != 'nightly' && '"[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\")"' || '' }} \
> ./RELEASE_NOTES
printf '\n\n' >> ./RELEASE_NOTES
cat >> ./RELEASE_NOTES << EOF
#### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
---
$(python ./devscripts/make_changelog.py -vv --collapsible)
EOF
printf '%s\n\n' '**This is an automated nightly pre-release build**' >> ./NIGHTLY_NOTES
cat ./RELEASE_NOTES >> ./NIGHTLY_NOTES
printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}' >> ./ARCHIVE_NOTES
cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
- name: Archive nightly release
env:
GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
GH_REPO: ${{ vars.ARCHIVE_REPO }}
if: |
inputs.channel == 'nightly' && env.GH_TOKEN != '' && env.GH_REPO != ''
run: |
gh release create \
--notes-file ARCHIVE_NOTES \
--title "yt-dlp nightly ${{ inputs.version }}" \
${{ inputs.version }} \
artifact/*
- name: Prune old nightly release
if: inputs.channel == 'nightly' && !vars.ARCHIVE_REPO
env:
GH_TOKEN: ${{ github.token }}
run: |
gh release delete --yes --cleanup-tag "nightly" || true
git tag --delete "nightly" || true
sleep 5 # Enough time to cover deletion race condition
- name: Publish release${{ inputs.channel == 'nightly' && ' (nightly)' || '' }}
env:
GH_TOKEN: ${{ github.token }}
if: (inputs.channel == 'nightly' && !vars.ARCHIVE_REPO) || inputs.channel != 'nightly'
run: |
gh release create \
--notes-file ${{ inputs.channel == 'nightly' && 'NIGHTLY_NOTES' || 'RELEASE_NOTES' }} \
--target ${{ inputs.target_commitish }} \
--title "yt-dlp ${{ inputs.channel == 'nightly' && 'nightly ' || '' }}${{ inputs.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
${{ inputs.channel == 'nightly' && '"nightly"' || inputs.version }} \
artifact/*

@@ -9,27 +9,31 @@ jobs:
if: "!contains(github.event.head_commit.message, 'ci skip all')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python 3.11
uses: actions/setup-python@v4
- uses: actions/checkout@v4
- name: Set up Python 3.8
uses: actions/setup-python@v5
with:
python-version: '3.11'
python-version: '3.8'
- name: Install test requirements
run: pip install pytest pycryptodomex
run: python3 ./devscripts/install_deps.py --include test
- name: Run tests
run: |
python3 -m yt_dlp -v || true
./devscripts/run_tests.sh core
flake8:
name: Linter
python3 ./devscripts/run_tests.py core
check:
name: Code check
if: "!contains(github.event.head_commit.message, 'ci skip all')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install flake8
run: pip install flake8
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.8'
- name: Install dev dependencies
run: python3 ./devscripts/install_deps.py -o --include static-analysis
- name: Make lazy extractors
run: python devscripts/make_lazy_extractors.py
- name: Run flake8
run: flake8 .
run: python3 ./devscripts/make_lazy_extractors.py
- name: Run ruff
run: ruff check --output-format github .
- name: Run autopep8
run: autopep8 --diff .

.github/workflows/release-master.yml vendored (new file, 30 lines)

@@ -0,0 +1,30 @@
name: Release (master)
on:
push:
branches:
- master
paths:
- "yt_dlp/**.py"
- "!yt_dlp/version.py"
- "bundle/*.py"
- "pyproject.toml"
- "Makefile"
- ".github/workflows/build.yml"
concurrency:
group: release-master
permissions:
contents: read
jobs:
release:
if: vars.BUILD_MASTER != ''
uses: ./.github/workflows/release.yml
with:
prerelease: true
source: master
permissions:
contents: write
packages: write # For package cache
actions: write # For cleaning up cache
id-token: write # mandatory for trusted publishing
secrets: inherit

@@ -1,52 +1,43 @@
name: Release (nightly)
on:
push:
branches:
- master
paths:
- "yt_dlp/**.py"
- "!yt_dlp/version.py"
concurrency:
group: release-nightly
cancel-in-progress: true
schedule:
- cron: '23 23 * * *'
permissions:
contents: read
jobs:
prepare:
check_nightly:
if: vars.BUILD_NIGHTLY != ''
runs-on: ubuntu-latest
outputs:
version: ${{ steps.get_version.outputs.version }}
commit: ${{ steps.check_for_new_commits.outputs.commit }}
steps:
- uses: actions/checkout@v3
- name: Get version
id: get_version
run: |
python devscripts/update-version.py "$(date -u +"%H%M%S")" | grep -Po "version=\d+(\.\d+){3}" >> "$GITHUB_OUTPUT"
build:
needs: prepare
uses: ./.github/workflows/build.yml
- uses: actions/checkout@v4
with:
version: ${{ needs.prepare.outputs.version }}
channel: nightly
permissions:
contents: read
packages: write # For package cache
secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
fetch-depth: 0
- name: Check for new commits
id: check_for_new_commits
run: |
relevant_files=(
"yt_dlp/*.py"
':!yt_dlp/version.py'
"bundle/*.py"
"pyproject.toml"
"Makefile"
".github/workflows/build.yml"
)
echo "commit=$(git log --format=%H -1 --since="24 hours ago" -- "${relevant_files[@]}")" | tee "$GITHUB_OUTPUT"
publish:
needs: [prepare, build]
uses: ./.github/workflows/publish.yml
secrets:
ARCHIVE_REPO_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
release:
needs: [check_nightly]
if: ${{ needs.check_nightly.outputs.commit }}
uses: ./.github/workflows/release.yml
with:
prerelease: true
source: nightly
permissions:
contents: write
with:
channel: nightly
prerelease: true
version: ${{ needs.prepare.outputs.version }}
target_commitish: ${{ github.sha }}
packages: write # For package cache
actions: write # For cleaning up cache
id-token: write # mandatory for trusted publishing
secrets: inherit

@@ -1,14 +1,45 @@
name: Release
on:
workflow_dispatch:
workflow_call:
inputs:
version:
description: Version tag (YYYY.MM.DD[.REV])
prerelease:
required: false
default: true
type: boolean
source:
required: false
default: ''
type: string
channel:
description: Update channel (stable/nightly/...)
target:
required: false
default: ''
type: string
version:
required: false
default: ''
type: string
workflow_dispatch:
inputs:
source:
description: |
SOURCE of this release's updates:
channel, repo, tag, or channel/repo@tag
(default: <current_repo>)
required: false
default: ''
type: string
target:
description: |
TARGET to publish this release to:
channel, tag, or channel@tag
(default: <source> if writable else <current_repo>[@source_tag])
required: false
default: ''
type: string
version:
description: |
VERSION: yyyy.mm.dd[.rev] or rev
(default: auto-generated)
required: false
default: ''
type: string
@@ -26,51 +57,153 @@ jobs:
contents: write
runs-on: ubuntu-latest
outputs:
channel: ${{ steps.set_channel.outputs.channel }}
version: ${{ steps.update_version.outputs.version }}
channel: ${{ steps.setup_variables.outputs.channel }}
version: ${{ steps.setup_variables.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
target_repo_token: ${{ steps.setup_variables.outputs.target_repo_token }}
target_tag: ${{ steps.setup_variables.outputs.target_tag }}
pypi_project: ${{ steps.setup_variables.outputs.pypi_project }}
pypi_suffix: ${{ steps.setup_variables.outputs.pypi_suffix }}
head_sha: ${{ steps.get_target.outputs.head_sha }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-python@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Set channel
id: set_channel
- name: Process inputs
id: process_inputs
run: |
CHANNEL="${{ github.repository == 'yt-dlp/yt-dlp' && 'stable' || github.repository }}"
echo "channel=${{ inputs.channel || '$CHANNEL' }}" > "$GITHUB_OUTPUT"
cat << EOF
::group::Inputs
prerelease=${{ inputs.prerelease }}
source=${{ inputs.source }}
target=${{ inputs.target }}
version=${{ inputs.version }}
::endgroup::
EOF
IFS='@' read -r source_repo source_tag <<<"${{ inputs.source }}"
IFS='@' read -r target_repo target_tag <<<"${{ inputs.target }}"
cat << EOF >> "$GITHUB_OUTPUT"
source_repo=${source_repo}
source_tag=${source_tag}
target_repo=${target_repo}
target_tag=${target_tag}
EOF
- name: Update version
id: update_version
- name: Setup variables
id: setup_variables
env:
source_repo: ${{ steps.process_inputs.outputs.source_repo }}
source_tag: ${{ steps.process_inputs.outputs.source_tag }}
target_repo: ${{ steps.process_inputs.outputs.target_repo }}
target_tag: ${{ steps.process_inputs.outputs.target_tag }}
run: |
REVISION="${{ vars.PUSH_VERSION_COMMIT == '' && '$(date -u +"%H%M%S")' || '' }}"
REVISION="${{ inputs.prerelease && '$(date -u +"%H%M%S")' || '$REVISION' }}"
python devscripts/update-version.py ${{ inputs.version || '$REVISION' }} | \
grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"
# unholy bash monstrosity (sincere apologies)
fallback_token () {
if ${{ !secrets.ARCHIVE_REPO_TOKEN }}; then
echo "::error::Repository access secret ${target_repo_token^^} not found"
exit 1
fi
target_repo_token=ARCHIVE_REPO_TOKEN
return 0
}
source_is_channel=0
[[ "${source_repo}" == 'stable' ]] && source_repo='yt-dlp/yt-dlp'
if [[ -z "${source_repo}" ]]; then
source_repo='${{ github.repository }}'
elif [[ '${{ vars[format('{0}_archive_repo', env.source_repo)] }}' ]]; then
source_is_channel=1
source_channel='${{ vars[format('{0}_archive_repo', env.source_repo)] }}'
elif [[ -z "${source_tag}" && "${source_repo}" != */* ]]; then
source_tag="${source_repo}"
source_repo='${{ github.repository }}'
fi
resolved_source="${source_repo}"
if [[ "${source_tag}" ]]; then
resolved_source="${resolved_source}@${source_tag}"
elif [[ "${source_repo}" == 'yt-dlp/yt-dlp' ]]; then
resolved_source='stable'
fi
revision="${{ (inputs.prerelease || !vars.PUSH_VERSION_COMMIT) && '$(date -u +"%H%M%S")' || '' }}"
version="$(
python devscripts/update-version.py \
-c "${resolved_source}" -r "${{ github.repository }}" ${{ inputs.version || '$revision' }} | \
grep -Po "version=\K\d+\.\d+\.\d+(\.\d+)?")"
if [[ "${target_repo}" ]]; then
if [[ -z "${target_tag}" ]]; then
if [[ '${{ vars[format('{0}_archive_repo', env.target_repo)] }}' ]]; then
target_tag="${source_tag:-${version}}"
else
target_tag="${target_repo}"
target_repo='${{ github.repository }}'
fi
fi
if [[ "${target_repo}" != '${{ github.repository}}' ]]; then
target_repo='${{ vars[format('{0}_archive_repo', env.target_repo)] }}'
target_repo_token='${{ env.target_repo }}_archive_repo_token'
${{ !!secrets[format('{0}_archive_repo_token', env.target_repo)] }} || fallback_token
pypi_project='${{ vars[format('{0}_pypi_project', env.target_repo)] }}'
pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.target_repo)] }}'
fi
else
target_tag="${source_tag:-${version}}"
if ((source_is_channel)); then
target_repo="${source_channel}"
target_repo_token='${{ env.source_repo }}_archive_repo_token'
${{ !!secrets[format('{0}_archive_repo_token', env.source_repo)] }} || fallback_token
pypi_project='${{ vars[format('{0}_pypi_project', env.source_repo)] }}'
pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.source_repo)] }}'
else
target_repo='${{ github.repository }}'
fi
fi
if [[ "${target_repo}" == '${{ github.repository }}' ]] && ${{ !inputs.prerelease }}; then
pypi_project='${{ vars.PYPI_PROJECT }}'
fi
echo "::group::Output variables"
cat << EOF | tee -a "$GITHUB_OUTPUT"
channel=${resolved_source}
version=${version}
target_repo=${target_repo}
target_repo_token=${target_repo_token}
target_tag=${target_tag}
pypi_project=${pypi_project}
pypi_suffix=${pypi_suffix}
EOF
echo "::endgroup::"
- name: Update documentation
env:
version: ${{ steps.setup_variables.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
!inputs.prerelease && env.target_repo == github.repository
run: |
python devscripts/update_changelog.py -vv
make doc
sed '/### /Q' Changelog.md >> ./CHANGELOG
echo '### ${{ steps.update_version.outputs.version }}' >> ./CHANGELOG
python ./devscripts/make_changelog.py -vv -c >> ./CHANGELOG
echo >> ./CHANGELOG
grep -Poz '(?s)### \d+\.\d+\.\d+.+' 'Changelog.md' | head -n -1 >> ./CHANGELOG
cat ./CHANGELOG > Changelog.md
- name: Push to release
id: push_release
if: ${{ !inputs.prerelease }}
env:
version: ${{ steps.setup_variables.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
!inputs.prerelease && env.target_repo == github.repository
run: |
git config --global user.name github-actions
git config --global user.email github-actions@example.com
git config --global user.name "github-actions[bot]"
git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
git add -u
git commit -m "Release ${{ steps.update_version.outputs.version }}" \
git commit -m "Release ${{ env.version }}" \
-m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
git push origin --force ${{ github.event.ref }}:release
@@ -80,7 +213,10 @@ jobs:
echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
- name: Update master
if: vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease
env:
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease && env.target_repo == github.repository
run: git push origin ${{ github.event.ref }}
build:
@@ -89,75 +225,160 @@ jobs:
with:
version: ${{ needs.prepare.outputs.version }}
channel: ${{ needs.prepare.outputs.channel }}
origin: ${{ needs.prepare.outputs.target_repo }}
permissions:
contents: read
packages: write # For package cache
actions: write # For cleaning up cache
secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
publish_pypi_homebrew:
publish_pypi:
needs: [prepare, build]
if: ${{ needs.prepare.outputs.pypi_project }}
runs-on: ubuntu-latest
permissions:
id-token: write # mandatory for trusted publishing
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Install Requirements
run: |
sudo apt-get -y install pandoc man
python -m pip install -U pip setuptools wheel twine
python -m pip install -U -r requirements.txt
sudo apt -y install pandoc man
python devscripts/install_deps.py -o --include build
- name: Prepare
run: |
python devscripts/update-version.py ${{ needs.prepare.outputs.version }}
python devscripts/make_lazy_extractors.py
- name: Build and publish on PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
if: env.TWINE_PASSWORD != '' && !inputs.prerelease
version: ${{ needs.prepare.outputs.version }}
suffix: ${{ needs.prepare.outputs.pypi_suffix }}
channel: ${{ needs.prepare.outputs.channel }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
pypi_project: ${{ needs.prepare.outputs.pypi_project }}
run: |
python devscripts/update-version.py -c "${{ env.channel }}" -r "${{ env.target_repo }}" -s "${{ env.suffix }}" "${{ env.version }}"
python devscripts/update_changelog.py -vv
python devscripts/make_lazy_extractors.py
sed -i -E '0,/(name = ")[^"]+(")/s//\1${{ env.pypi_project }}\2/' pyproject.toml
- name: Build
run: |
rm -rf dist/*
make pypi-files
printf '%s\n\n' \
'Official repository: <https://github.com/yt-dlp/yt-dlp>' \
'**PS**: Some links in this document will not work since this is a copy of the README.md from Github' > ./README.md.new
cat ./README.md >> ./README.md.new && mv -f ./README.md.new ./README.md
python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
python setup.py sdist bdist_wheel
twine upload dist/*
make clean-cache
python -m build --no-isolation .
- name: Checkout Homebrew repository
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
uses: actions/checkout@v3
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
repository: yt-dlp/homebrew-taps
path: taps
ssh-key: ${{ secrets.BREW_TOKEN }}
- name: Update Homebrew Formulae
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
run: |
python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
git -C taps/ config user.name github-actions
git -C taps/ config user.email github-actions@example.com
git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
git -C taps/ push
verbose: true
publish:
needs: [prepare, build]
uses: ./.github/workflows/publish.yml
permissions:
contents: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
channel: ${{ needs.prepare.outputs.channel }}
prerelease: ${{ inputs.prerelease }}
fetch-depth: 0
- uses: actions/download-artifact@v4
with:
path: artifact
pattern: build-*
merge-multiple: true
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Generate release notes
env:
head_sha: ${{ needs.prepare.outputs.head_sha }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
run: |
printf '%s' \
'[![Installation](https://img.shields.io/badge/-Which%20file%20to%20download%3F-white.svg?style=for-the-badge)]' \
'(https://github.com/${{ github.repository }}#installation "Installation instructions") ' \
'[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
'(https://discord.gg/H5MNcFW63r "Discord") ' \
'[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
'[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
'(https://github.com/${{ github.repository }}' \
'${{ env.target_repo == github.repository && format('/tree/{0}', env.target_tag) || '' }}#readme "Documentation") ' \
${{ env.target_repo == 'yt-dlp/yt-dlp' && '\
"[![Nightly](https://img.shields.io/badge/Nightly%20builds-purple.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\") " \
"[![Master](https://img.shields.io/badge/Master%20builds-lightblue.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-master-builds/releases/latest \"Master builds\")"' || '' }} > ./RELEASE_NOTES
printf '\n\n' >> ./RELEASE_NOTES
cat >> ./RELEASE_NOTES << EOF
#### A description of the various files are in the [README](https://github.com/${{ github.repository }}#release-files)
---
$(python ./devscripts/make_changelog.py -vv --collapsible)
EOF
printf '%s\n\n' '**This is a pre-release build**' >> ./PRERELEASE_NOTES
cat ./RELEASE_NOTES >> ./PRERELEASE_NOTES
printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ env.head_sha }}' >> ./ARCHIVE_NOTES
cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
- name: Publish to archive repo
env:
GH_TOKEN: ${{ secrets[needs.prepare.outputs.target_repo_token] }}
GH_REPO: ${{ needs.prepare.outputs.target_repo }}
version: ${{ needs.prepare.outputs.version }}
target_commitish: ${{ needs.prepare.outputs.head_sha }}
channel: ${{ needs.prepare.outputs.channel }}
if: |
inputs.prerelease && env.GH_TOKEN != '' && env.GH_REPO != '' && env.GH_REPO != github.repository
run: |
title="${{ startswith(env.GH_REPO, 'yt-dlp/') && 'yt-dlp ' || '' }}${{ env.channel }}"
gh release create \
--notes-file ARCHIVE_NOTES \
--title "${title} ${{ env.version }}" \
${{ env.version }} \
artifact/*
- name: Prune old release
env:
GH_TOKEN: ${{ github.token }}
version: ${{ needs.prepare.outputs.version }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
if: |
env.target_repo == github.repository && env.target_tag != env.version
run: |
gh release delete --yes --cleanup-tag "${{ env.target_tag }}" || true
git tag --delete "${{ env.target_tag }}" || true
sleep 5 # Enough time to cover deletion race condition
- name: Publish release
env:
GH_TOKEN: ${{ github.token }}
version: ${{ needs.prepare.outputs.version }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
head_sha: ${{ needs.prepare.outputs.head_sha }}
if: |
env.target_repo == github.repository
run: |
title="${{ github.repository == 'yt-dlp/yt-dlp' && 'yt-dlp ' || '' }}"
title+="${{ env.target_tag != env.version && format('{0} ', env.target_tag) || '' }}"
gh release create \
--notes-file ${{ inputs.prerelease && 'PRERELEASE_NOTES' || 'RELEASE_NOTES' }} \
--target ${{ env.head_sha }} \
--title "${title}${{ env.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
${{ env.target_tag }} \
artifact/*

.gitignore

@@ -33,6 +33,7 @@ cookies
*.gif
*.jpeg
*.jpg
*.lrc
*.m4a
*.m4v
*.mhtml
@@ -40,6 +41,7 @@ cookies
*.mov
*.mp3
*.mp4
*.mpg
*.mpga
*.oga
*.ogg
@@ -47,8 +49,8 @@ cookies
*.png
*.sbv
*.srt
*.ssa
*.swf
*.swp
*.tt
*.ttml
*.url
@@ -64,7 +66,7 @@ cookies
# Python
*.pyc
*.pyo
.pytest_cache
.*_cache
wine-py2exe/
py2exe.log
build/
@@ -116,6 +118,7 @@ yt-dlp.zip
.vscode
*.sublime-*
*.code-workspace
*.swp
# Lazy extractors
*/extractor/lazy_extractors.py

.pre-commit-config.yaml (new file)

@@ -0,0 +1,14 @@
repos:
- repo: local
  hooks:
  - id: linter
    name: Apply linter fixes
    entry: ruff check --fix .
    language: system
    types: [python]
    require_serial: true
  - id: format
    name: Apply formatting fixes
    entry: autopep8 --in-place .
    language: system
    types: [python]

.pre-commit-hatch.yaml (new file)

@@ -0,0 +1,9 @@
repos:
- repo: local
  hooks:
  - id: fix
    name: Apply code fixes
    entry: hatch fmt
    language: system
    types: [python]
    require_serial: true

CONTRIBUTING.md

@@ -79,7 +79,7 @@ ### Are you using the latest version?
### Is the issue already documented?
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subcribe to it to be notified when there is any progress. Unless you have something useful to add to the converation, please refrain from commenting.
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting.
Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.
@@ -134,27 +134,59 @@ ### Is the website primarily used for piracy?
# DEVELOPER INSTRUCTIONS
Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).
Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases), get them via [the other installation methods](README.md#installation) or directly run it using `python -m yt_dlp`.
To run yt-dlp as a developer, you don't need to build anything either. Simply execute
`yt-dlp` uses [`hatch`](<https://hatch.pypa.io>) as a project management tool.
You can easily install it using [`pipx`](<https://pipx.pypa.io>) via `pipx install hatch`, or else via `pip` or your package manager of choice. Make sure you are using at least version `1.10.0`, otherwise some functionality might not work as expected.
python -m yt_dlp
If you plan on contributing to `yt-dlp`, best practice is to start by running the following command:
To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
```shell
$ hatch run setup
```
python -m unittest discover
python test/test_download.py
nosetests
pytest
The above command will install a `pre-commit` hook so that required checks/fixes (linting, formatting) will run automatically before each commit. If any code needs to be linted or formatted, then the commit will be blocked and the necessary changes will be made; you should review all edits and re-commit the fixed version.
After this you can use `hatch shell` to enable a virtual environment that has `yt-dlp` and its development dependencies installed.
In addition, the following script commands can be used to run simple tasks such as linting or testing (without having to run `hatch shell` first):
* `hatch fmt`: Automatically fix linter violations and apply required code formatting changes
* See `hatch fmt --help` for more info
* `hatch test`: Run extractor or core tests
* See `hatch test --help` for more info
See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
While it is strongly recommended to use `hatch` for yt-dlp development, if you are unable to do so, alternatively you can manually create a virtual environment and use the following commands:
```shell
# To only install development dependencies:
$ python -m devscripts.install_deps --include dev
# Or, for an editable install plus dev dependencies:
$ python -m pip install -e ".[default,dev]"
# To setup the pre-commit hook:
$ pre-commit install
# To be used in place of `hatch test`:
$ python -m devscripts.run_tests
# To be used in place of `hatch fmt`:
$ ruff check --fix .
$ autopep8 --in-place .
# To only check code instead of applying fixes:
$ ruff check .
$ autopep8 --diff .
```
If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).
## Adding new feature or making overarching changes
Before you start writing code for implementing a new feature, open an issue explaining your feature request and atleast one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.
Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.
The same applies for changes to the documentation, code style, or overarching changes to the architecture
@@ -168,12 +200,16 @@ ## Adding support for a new site
1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
1. Check out the source code with:
git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
```shell
$ git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
```
1. Start a new git branch with
cd yt-dlp
git checkout -b yourextractor
```shell
$ cd yt-dlp
$ git checkout -b yourextractor
```
1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:
@@ -187,15 +223,21 @@ ## Adding support for a new site
'url': 'https://yourextractor.com/watch/42',
'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
'info_dict': {
# For videos, only the 'id' and 'ext' fields are required to RUN the test:
'id': '42',
'ext': 'mp4',
'title': 'Video title goes here',
'thumbnail': r're:^https?://.*\.jpg$',
# TODO more properties, either as:
# * A value
# * MD5 checksum; start the string with md5:
# * A regular expression; start the string with re:
# * Any Python type, e.g. int or float
# Then if the test run fails, it will output the missing/incorrect fields.
# Properties can be added as:
# * A value, e.g.
# 'title': 'Video title goes here',
# * MD5 checksum; start the string with 'md5:', e.g.
# 'description': 'md5:098f6bcd4621d373cade4e832627b4f6',
# * A regular expression; start the string with 're:', e.g.
# 'thumbnail': r're:^https?://.*\.jpg$',
# * A count of elements in a list; start the string with 'count:', e.g.
# 'tags': 'count:10',
# * Any Python type, e.g.
# 'view_count': int,
}
}]
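
Putting those value forms together, a filled-in test entry might look like the following (the URL, checksum and counts are illustrative placeholders, assembled from the comment examples above):

```python
_TESTS = [{
    'url': 'https://yourextractor.com/watch/42',
    'info_dict': {
        # Mandatory fields for running the test
        'id': '42',
        'ext': 'mp4',
        # Plain value
        'title': 'Video title goes here',
        # MD5 checksum of the field's value
        'description': 'md5:098f6bcd4621d373cade4e832627b4f6',
        # Regular expression match
        'thumbnail': r're:^https?://.*\.jpg$',
        # Element count of a list field
        'tags': 'count:10',
        # Type check only
        'view_count': int,
    },
}]
```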
@@ -214,27 +256,33 @@ ## Adding support for a new site
# TODO more properties (see yt_dlp/extractor/common.py)
}
```
1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`.
1. Run `python test/test_download.py TestDownload.test_YourExtractor` (note that `YourExtractor` doesn't end with `IE`). This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`
1. Make sure you have atleast one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L91-L426). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`. Also note that when adding a parenthesized import group, the last import in the group must have a trailing comma in order for this formatting to be respected by our code formatter.
1. Run `hatch test YourExtractor`. This *may fail* at first, but you can continually re-run it until you're done. Upon failure, it will output the missing fields and/or correct values which you can copy. If you decide to add more than one test, the tests will then be named `YourExtractor`, `YourExtractor_1`, `YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not included in the count. You can also run all the tests in one go with `YourExtractor_all`
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L119-L440). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions), passes [ruff](https://docs.astral.sh/ruff/tutorial/#getting-started) code checks and is properly formatted:
$ flake8 yt_dlp/extractor/yourextractor.py
```shell
$ hatch fmt --check
```
1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.7 and above. Backward compatibility is not required for even older versions of Python.
You can use `hatch fmt` to automatically fix problems. Rules that the linter/formatter enforces should not be disabled with `# noqa` unless a maintainer requests it. The only exception allowed is for old/printf-style string formatting in GraphQL query templates (use `# noqa: UP031`); a sketch of this exception is shown after this list.
1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.8 and above. Backward compatibility is not required for even older versions of Python.
1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
```shell
$ git add yt_dlp/extractor/_extractors.py
$ git add yt_dlp/extractor/yourextractor.py
$ git commit -m '[yourextractor] Add extractor'
$ git push origin yourextractor
```
1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
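
As a hypothetical sketch of the one allowed `# noqa` exception mentioned above (the query and names are invented for illustration, not taken from the codebase):

```python
import json

# Hypothetical GraphQL query template; '%'-style formatting is used here
# because the literal braces of GraphQL syntax would otherwise need
# escaping when using str.format() or f-strings
_QUERY_TEMPLATE = '''
{
  video(id: "%s") {
    title
  }
}'''


def build_graphql_payload(video_id):
    return json.dumps({'query': _QUERY_TEMPLATE % video_id})  # noqa: UP031
```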
In any case, thank you very much for your contributions!
**Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your username and password in it:
**Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your `username`&`password` or `cookiefile`/`cookiesfrombrowser` in it:
```json
{
"username": "your user name",
@@ -251,7 +299,7 @@ ## yt-dlp coding conventions
### Mandatory and optional metafields
For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L91-L426) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:
For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L119-L440) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:
- `id` (media identifier)
- `title` (media title)
@@ -261,7 +309,7 @@ ### Mandatory and optional metafields
For pornographic sites, appropriate `age_limit` must also be returned.
The extractor is allowed to return the info dict without url or formats in some special cases if it allows the user to extract usefull information with `--ignore-no-formats-error` - e.g. when the video is a live stream that has not started yet.
The extractor is allowed to return the info dict without url or formats in some special cases if it allows the user to extract useful information with `--ignore-no-formats-error` - e.g. when the video is a live stream that has not started yet.
[Any field](yt_dlp/extractor/common.py#219-L426) apart from the aforementioned ones are considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
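
To make the two mandatory fields concrete, here is a minimal sketch of a `_real_extract` method (the `<video>` regex and page layout are invented for illustration; real extractors should prefer the helpers in `common.py`):

```python
def _real_extract(self, url):
    video_id = self._match_id(url)
    webpage = self._download_webpage(url, video_id)

    return {
        # Mandatory fields
        'id': video_id,
        'title': self._html_extract_title(webpage),
        # 'url' or 'formats' is also needed in the normal case; it may only
        # be omitted in the special cases described above
        'url': self._html_search_regex(
            r'<video[^>]+src="([^"]+)"', webpage, 'video url'),
    }
```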
@@ -696,7 +744,7 @@ #### Examples
### Use convenience conversion and parsing functions
Wrap all extracted numeric data into safe functions from [`yt_dlp/utils.py`](yt_dlp/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
Wrap all extracted numeric data into safe functions from [`yt_dlp/utils/`](yt_dlp/utils/): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
Use `url_or_none` for safe URL processing.
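
For example, all of these return safely instead of raising (expected results shown as comments):

```python
from yt_dlp.utils import float_or_none, int_or_none, url_or_none

int_or_none('1337')       # 1337
int_or_none(None)         # None, instead of raising TypeError
float_or_none('9.9')      # 9.9
url_or_none('https://example.com/video.mp4')  # returned unchanged
url_or_none('video.mp4')  # None (relative/invalid URL)
```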
@@ -704,7 +752,7 @@ ### Use convenience conversion and parsing functions
Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.
Explore [`yt_dlp/utils.py`](yt_dlp/utils.py) for more useful convenience functions.
Explore [`yt_dlp/utils/`](yt_dlp/utils/) for more useful convenience functions.
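
A few illustrative inputs and the values these helpers are expected to produce:

```python
from yt_dlp.utils import (
    parse_count,
    parse_duration,
    parse_filesize,
    unified_strdate,
    unified_timestamp,
)

unified_strdate('December 21, 2010')       # '20101221'
unified_timestamp('2023-10-07T12:00:00Z')  # 1696680000
parse_duration('1:32')                     # 92 (seconds)
parse_count('1.2M')                        # 1200000
parse_filesize('5 MiB')                    # 5242880
```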
#### Examples

CONTRIBUTORS

@@ -2,7 +2,6 @@ pukkandan (owner)
shirt-dev (collaborator)
coletdjnz/colethedj (collaborator)
Ashish0804 (collaborator)
nao20010128nao/Lesmiscore (collaborator)
bashonly (collaborator)
Grub4K (collaborator)
h-h-h-h
@@ -460,3 +459,194 @@ berkanteber
OverlordQ
rexlambert22
Ti4eeT4e
AmanSal1
bbilly1
meliber
nnoboa
rdamas
RfadnjdExt
urectanc
nao20010128nao/Lesmiscore
04-pasha-04
aaruni96
aky-01
AmirAflak
ApoorvShah111
at-wat
davinkevin
demon071
denhotte
FinnRG
fireattack
Frankgoji
GD-Slime
hatsomatt
ifan-t
kshitiz305
kylegustavo
mabdelfattah
nathantouze
niemands
Rajeshwaran2001
RedDeffender
Rohxn16
sb0stn
SevenLives
simon300000
snixon
soundchaser128
szabyg
trainman261
trislee
wader
Yalab7
zhallgato
zhong-yiyu
Zprokkel
AS6939
drzraf
handlerug
jiru
madewokherd
xofe
awalgarg
midnightveil
naginatana
Riteo
1100101
aniolpages
bartbroere
CrendKing
Esokrates
HitomaruKonpaku
LoserFox
peci1
saintliao
shubhexists
SirElderling
almx
elivinsky
starius
TravisDupes
amir16yp
Fymyte
Ganesh910
hashFactory
kclauhk
Kyraminol
lstrojny
middlingphys
NickCis
nicodato
prettykool
S-Aarab
sonmezberkay
TSRBerry
114514ns
agibson-fl
alard
alien-developers
antonkesy
ArnauvGilotra
Arthurszzz
Bibhav48
Bl4Cc4t
boredzo
Caesim404
chkuendig
chtk
Danish-H
dasidiot
diman8
divStar
DmitryScaletta
feederbox826
gmes78
gonzalezjo
hui1601
infanf
jazz1611
jingtra
jkmartindale
johnvictorfs
llistochek
marcdumais
martinxyz
michal-repo
mrmedieval
nbr23
Nicals
Noor-5
NurTasin
pompos02
Pranaxcau
pwaldhauer
RaduManole
RalphORama
rrgomes
ruiminggu
rvsit
sefidel
shmohawk
Snack-X
src-tinkerer
stilor
syntaxsurge
t-nil
ufukk
vista-narvas
x11x
xpadev-net
Xpl0itU
YoshichikaAAA
zhijinwuu
alb
hruzgar
kasper93
leoheitmannruiz
luiso1979
nipotan
Offert4324
sta1us
Tomoka1
trwstin
alexhuot1
clienthax
DaPotato69
emqi
hugohaa
imanoreotwe
JakeFinley96
lostfictions
minamotorin
ocococococ
Podiumnoche
RasmusAntons
roeniss
shoxie007
Szpachlarz
The-MAGI
TuxCoder
voidful
vtexier
WyohKnott
trueauracoral
ASertacAkkaya
axpauls
chilinux
hafeoz
JSubelj
jucor
megumintyan
mgedmin
Niluge-KiWi
peisenwang
TheZ3ro
tippfehlr
varunchopra
DrakoCpp
PatrykMis
DinhHuy2010
exterrestris
harbhim
LeSuisse

Changelog.md (diff suppressed because it is too large)

Collaborators.md

@@ -29,6 +29,7 @@ ## [coletdjnz](https://github.com/coletdjnz)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
* Improved plugin architecture
* Rewrote the networking infrastructure, implemented support for `requests`
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
* Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
* Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc
@@ -44,28 +45,26 @@ ## [Ashish0804](https://github.com/Ashish0804) <sub><sup>[Inactive]</sup></sub>
* Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc
## [Lesmiscore](https://github.com/Lesmiscore)
**Bitcoin**: bc1qfd02r007cutfdjwjmyy9w23rjvtls6ncve7r3s
**Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr
* Download live from start to end for YouTube
* Added support for new websites AbemaTV, mildom, PixivSketch, skeb, radiko, voicy, mirrativ, openrec, whowatch, damtomo, 17.live, mixch etc
* Improved/fixed support for fc2, YahooJapanNews, tver, iwara etc
## [bashonly](https://github.com/bashonly)
* `--update-to`, automated release, nightly builds
* `--cookies-from-browser` support for Firefox containers
* Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
* Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
* `--update-to`, self-updater rewrite, automated/nightly/master releases
* `--cookies-from-browser` support for Firefox containers, external downloader cookie handling overhaul
* Added support for new websites like Dacast, Kick, NBCStations, Triller, VideoKen, Weverse, WrestleUniverse etc
* Improved/fixed support for Anvato, Brightcove, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
## [Grub4K](https://github.com/Grub4K)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K) [![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K)
* `--update-to`, automated release, nightly builds
* Rework internals like `traverse_obj`, various core refactors and bugs fixes
* Helped fix crunchyroll, Twitter, wrestleuniverse, wistia, slideslive etc
* `--update-to`, self-updater rewrite, automated/nightly/master releases
* Reworked internals like `traverse_obj`, various core refactors and bugs fixes
* Implemented proper progress reporting for parallel downloads
* Improved/fixed/added Bundestag, crunchyroll, pr0gramm, Twitter, WrestleUniverse etc
## [sepro](https://github.com/seproDev)
* UX improvements: Warn when ffmpeg is missing, warn when double-clicking exe
* Code cleanup: Remove dead extractors, mark extractors as broken, enable/apply ruff rules
* Improved/fixed/added ArdMediathek, DRTV, Floatplane, MagentaMusik, Naver, Nebula, OnDemandKorea, Vbox7 etc

MANIFEST.in (deleted)

@@ -1,10 +0,0 @@
include AUTHORS
include Changelog.md
include LICENSE
include README.md
include completions/*/*
include supportedsites.md
include yt-dlp.1
include requirements.txt
recursive-include devscripts *
recursive-include test *

Makefile

@@ -2,29 +2,32 @@ all: lazy-extractors yt-dlp doc pypi-files
clean: clean-test clean-dist
clean-all: clean clean-cache
completions: completion-bash completion-fish completion-zsh
doc: README.md CONTRIBUTING.md issuetemplates supportedsites
doc: README.md CONTRIBUTING.md CONTRIBUTORS issuetemplates supportedsites
ot: offlinetest
tar: yt-dlp.tar.gz
# Keep this list in sync with MANIFEST.in
# Keep this list in sync with pyproject.toml includes/artifacts
# intended use: when building a source distribution,
# make pypi-files && python setup.py sdist
# make pypi-files && python3 -m build -sn .
pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites \
completions yt-dlp.1 requirements.txt setup.cfg devscripts/* test/*
completions yt-dlp.1 pyproject.toml setup.cfg devscripts/* test/*
.PHONY: all clean install test tar pypi-files completions ot offlinetest codetest supportedsites
.PHONY: all clean clean-all clean-test clean-dist clean-cache \
completions completion-bash completion-fish completion-zsh \
doc issuetemplates supportedsites ot offlinetest codetest test \
tar pypi-files lazy-extractors install uninstall
clean-test:
rm -rf test/testdata/sigs/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
*.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.meta *.part* *.tmp *.temp *.unknown_video *.ytdl \
*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 \
*.mp4 *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.swf *.swp *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.lrc *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 *.mp4 \
*.mpg *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.ssa *.swf *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
clean-dist:
rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS
clean-cache:
find . \( \
-type d -name .pytest_cache -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \
-type d -name ".*_cache" -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \
\) -prune -exec rm -rf {} \;
completion-bash: completions/bash/yt-dlp
@@ -37,12 +40,15 @@ BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share
PYTHON ?= /usr/bin/env python3
GNUTAR ?= tar
# set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then echo /etc; else echo $(PREFIX)/etc; fi)
# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
# set markdown input format to "markdown-smart" for pandoc version 2+ and to "markdown" for pandoc prior to version 2
PANDOC_VERSION_CMD = pandoc -v 2>/dev/null | head -n1 | cut -d' ' -f2 | head -c1
PANDOC_VERSION != $(PANDOC_VERSION_CMD)
PANDOC_VERSION ?= $(shell $(PANDOC_VERSION_CMD))
MARKDOWN_CMD = if [ "$(PANDOC_VERSION)" = "1" -o "$(PANDOC_VERSION)" = "0" ]; then echo markdown; else echo markdown-smart; fi
MARKDOWN != $(MARKDOWN_CMD)
MARKDOWN ?= $(shell $(MARKDOWN_CMD))
install: lazy-extractors yt-dlp yt-dlp.1 completions
mkdir -p $(DESTDIR)$(BINDIR)
@@ -64,33 +70,38 @@ uninstall:
rm -f $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
codetest:
flake8 .
ruff check .
autopep8 --diff .
test:
$(PYTHON) -m pytest
$(PYTHON) -m pytest -Werror
$(MAKE) codetest
offlinetest: codetest
$(PYTHON) -m pytest -k "not download"
$(PYTHON) -m pytest -Werror -m "not download"
# XXX: This is hard to maintain
CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/compat/urllib yt_dlp/utils yt_dlp/dependencies
yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
CODE_FOLDERS_CMD = find yt_dlp -type f -name '__init__.py' | sed 's,/__init__.py,,' | grep -v '/__' | sort
CODE_FOLDERS != $(CODE_FOLDERS_CMD)
CODE_FOLDERS ?= $(shell $(CODE_FOLDERS_CMD))
CODE_FILES_CMD = for f in $(CODE_FOLDERS) ; do echo "$$f" | sed 's,$$,/*.py,' ; done
CODE_FILES != $(CODE_FILES_CMD)
CODE_FILES ?= $(shell $(CODE_FILES_CMD))
yt-dlp: $(CODE_FILES)
mkdir -p zip
for d in $(CODE_FOLDERS) ; do \
mkdir -p zip/$$d ;\
cp -pPR $$d/*.py zip/$$d/ ;\
done
touch -t 200001010101 zip/yt_dlp/*.py zip/yt_dlp/*/*.py
(cd zip && touch -t 200001010101 $(CODE_FILES))
mv zip/yt_dlp/__main__.py zip/
cd zip ; zip -q ../yt-dlp yt_dlp/*.py yt_dlp/*/*.py __main__.py
(cd zip && zip -q ../yt-dlp $(CODE_FILES) __main__.py)
rm -rf zip
echo '#!$(PYTHON)' > yt-dlp
cat yt-dlp.zip >> yt-dlp
rm yt-dlp.zip
chmod a+x yt-dlp
README.md: yt_dlp/*.py yt_dlp/*/*.py devscripts/make_readme.py
README.md: $(CODE_FILES) devscripts/make_readme.py
COLUMNS=80 $(PYTHON) yt_dlp/__main__.py --ignore-config --help | $(PYTHON) devscripts/make_readme.py
CONTRIBUTING.md: README.md devscripts/make_contributing.py
@@ -115,41 +126,48 @@ yt-dlp.1: README.md devscripts/prepare_manpage.py
pandoc -s -f $(MARKDOWN) -t man yt-dlp.1.temp.md -o yt-dlp.1
rm -f yt-dlp.1.temp.md
completions/bash/yt-dlp: yt_dlp/*.py yt_dlp/*/*.py devscripts/bash-completion.in
completions/bash/yt-dlp: $(CODE_FILES) devscripts/bash-completion.in
mkdir -p completions/bash
$(PYTHON) devscripts/bash-completion.py
completions/zsh/_yt-dlp: yt_dlp/*.py yt_dlp/*/*.py devscripts/zsh-completion.in
completions/zsh/_yt-dlp: $(CODE_FILES) devscripts/zsh-completion.in
mkdir -p completions/zsh
$(PYTHON) devscripts/zsh-completion.py
completions/fish/yt-dlp.fish: yt_dlp/*.py yt_dlp/*/*.py devscripts/fish-completion.in
completions/fish/yt-dlp.fish: $(CODE_FILES) devscripts/fish-completion.in
mkdir -p completions/fish
$(PYTHON) devscripts/fish-completion.py
_EXTRACTOR_FILES = $(shell find yt_dlp/extractor -name '*.py' -and -not -name 'lazy_extractors.py')
_EXTRACTOR_FILES_CMD = find yt_dlp/extractor -name '*.py' -and -not -name 'lazy_extractors.py'
_EXTRACTOR_FILES != $(_EXTRACTOR_FILES_CMD)
_EXTRACTOR_FILES ?= $(shell $(_EXTRACTOR_FILES_CMD))
yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
$(PYTHON) devscripts/make_lazy_extractors.py $@
yt-dlp.tar.gz: all
@tar -czf yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \
@$(GNUTAR) -czf yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \
--exclude '*.DS_Store' \
--exclude '*.kate-swp' \
--exclude '*.pyc' \
--exclude '*.pyo' \
--exclude '*~' \
--exclude '__pycache__' \
--exclude '.pytest_cache' \
--exclude '.*_cache' \
--exclude '.git' \
-- \
README.md supportedsites.md Changelog.md LICENSE \
CONTRIBUTING.md Collaborators.md CONTRIBUTORS AUTHORS \
Makefile MANIFEST.in yt-dlp.1 README.txt completions \
setup.py setup.cfg yt-dlp yt_dlp requirements.txt \
devscripts test
Makefile yt-dlp.1 README.txt completions .gitignore \
setup.cfg yt-dlp yt_dlp pyproject.toml devscripts test
AUTHORS: .mailmap
git shortlog -s -n | cut -f2 | sort > AUTHORS
AUTHORS: Changelog.md
@if [ -d '.git' ] && command -v git > /dev/null ; then \
echo 'Generating $@ from git commit history' ; \
git shortlog -s -n HEAD | cut -f2 | sort > $@ ; \
fi
.mailmap:
git shortlog -s -e -n | awk '!(out[$$NF]++) { $$1="";sub(/^[ \t]+/,""); print}' > .mailmap
CONTRIBUTORS: Changelog.md
@if [ -d '.git' ] && command -v git > /dev/null ; then \
echo 'Updating $@ from git commit history' ; \
$(PYTHON) devscripts/make_changelog.py -v -c > /dev/null ; \
fi

README.md

@@ -12,22 +12,20 @@
[![License: Unlicense](https://img.shields.io/badge/-Unlicense-blue.svg?style=for-the-badge)](LICENSE "License")
[![CI Status](https://img.shields.io/github/actions/workflow/status/yt-dlp/yt-dlp/core.yml?branch=master&label=Tests&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/actions "CI Status")
[![Commits](https://img.shields.io/github/commit-activity/m/yt-dlp/yt-dlp?label=commits&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
[![Last Commit](https://img.shields.io/github/last-commit/yt-dlp/yt-dlp/master?label=&style=for-the-badge&display_timestamp=committer)](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
[![Last Commit](https://img.shields.io/github/last-commit/yt-dlp/yt-dlp/master?label=&style=for-the-badge&display_timestamp=committer)](https://github.com/yt-dlp/yt-dlp/pulse/monthly "Last activity")
</div>
<!-- MANPAGE: END EXCLUDED SECTION -->
yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on the now inactive [youtube-dlc](https://github.com/blackjack4494/yt-dlc). The main focus of this project is adding new features and patches while also keeping up to date with the original project
yt-dlp is a feature-rich command-line audio/video downloader with support for [thousands of sites](supportedsites.md). The project is a fork of [youtube-dl](https://github.com/ytdl-org/youtube-dl) based on the now inactive [youtube-dlc](https://github.com/blackjack4494/yt-dlc).
<!-- MANPAGE: MOVE "USAGE AND OPTIONS" SECTION HERE -->
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
* [NEW FEATURES](#new-features)
* [Differences in default behavior](#differences-in-default-behavior)
* [INSTALLATION](#installation)
* [Detailed instructions](https://github.com/yt-dlp/yt-dlp/wiki/Installation)
* [Update](#update)
* [Release Files](#release-files)
* [Update](#update)
* [Dependencies](#dependencies)
* [Compile](#compile)
* [USAGE AND OPTIONS](#usage-and-options)
@@ -65,7 +63,10 @@
* [Developing Plugins](#developing-plugins)
* [EMBEDDING YT-DLP](#embedding-yt-dlp)
* [Embedding examples](#embedding-examples)
* [DEPRECATED OPTIONS](#deprecated-options)
* [CHANGES FROM YOUTUBE-DL](#changes-from-youtube-dl)
* [New features](#new-features)
* [Differences in default behavior](#differences-in-default-behavior)
* [Deprecated options](#deprecated-options)
* [CONTRIBUTING](CONTRIBUTING.md#contributing-to-yt-dlp)
* [Opening an Issue](CONTRIBUTING.md#opening-an-issue)
* [Developer Instructions](CONTRIBUTING.md#developer-instructions)
@@ -74,100 +75,6 @@
<!-- MANPAGE: END EXCLUDED SECTION -->
# NEW FEATURES
* Forked from [**yt-dlc@f9401f2**](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee) and merged with [**youtube-dl@42f2d4**](https://github.com/yt-dlp/yt-dlp/commit/42f2d4) ([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
* **[Format Sorting](#sorting-formats)**: The default format sorting options have been changed so that higher resolution and better codecs will be now preferred instead of simply using larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format` ([examples](#format-selection-examples))
* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
* **YouTube improvements**:
* Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
* Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326) **\***
* Supports some (but not all) age-gated content without cookies
* Download livestreams from the start using `--live-from-start` (*experimental*)
* `255kbps` audio is extracted (if available) from YouTube Music when premium cookies are given
* Channel URLs download all uploads of the channel, including shorts and live
* **Cookies from browser**: Cookies can be automatically extracted from all major web browsers using `--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]`
* **Download time range**: Videos can be downloaded partially based on either timestamps or chapters using `--download-sections`
* **Split video by chapters**: Videos can be split into multiple files based on chapters using `--split-chapters`
* **Multi-threaded fragment downloads**: Download multiple fragments of m3u8/mpd videos in parallel. Use `--concurrent-fragments` (`-N`) option to set the number of threads used
* **Aria2c with HLS/DASH**: You can use `aria2c` as the external downloader for DASH(mpd) and HLS(m3u8) formats
* **New and fixed extractors**: Many new extractors have been added and a lot of existing ones have been fixed. See the [changelog](Changelog.md) or the [list of supported sites](supportedsites.md)
* **New MSOs**: Philo, Spectrum, SlingTV, Cablevision, RCN etc.
* **Subtitle extraction from manifests**: Subtitles can be extracted from streaming media manifests. See [commit/be6202f](https://github.com/yt-dlp/yt-dlp/commit/be6202f12b97858b9d716e608394b51065d0419f) for details
* **Multiple paths and output templates**: You can give different [output templates](#output-template) and download paths for different types of files. You can also set a temporary path where intermediary files are downloaded to using `--paths` (`-P`)
* **Portable Configuration**: Configuration files are automatically loaded from the home and root directories. See [CONFIGURATION](#configuration) for details
* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`
* **Other new options**: Many new options have been added such as `--alias`, `--print`, `--concat-playlist`, `--wait-for-video`, `--retry-sleep`, `--sleep-requests`, `--convert-thumbnails`, `--force-download-archive`, `--force-overwrites`, `--break-match-filter` etc
* **Improvements**: Regex and other operators in `--format`/`--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio, multiple `--config-locations`, `--exec` at different stages, etc
* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details
* **Self updater**: The releases can be updated using `yt-dlp -U`, and downgraded using `--update-to` if required
* **Nightly builds**: [Automated nightly builds](#update-channels) can be used with `--update-to nightly`
See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
Features marked with a **\*** have been back-ported to youtube-dl
### Differences in default behavior
Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc:
* yt-dlp supports only [Python 3.7+](## "Windows 7"), and *may* remove support for more versions as they [become EOL](https://devguide.python.org/versions/#python-release-cycle); while [youtube-dl still supports Python 2.6+ and 3.2+](https://github.com/ytdl-org/youtube-dl/issues/30568#issue-1118238743)
* The options `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`), no longer work. See [removed options](#Removed) for details
* `avconv` is not supported as an alternative to `ffmpeg`
* yt-dlp stores config files in slightly different locations to youtube-dl. See [CONFIGURATION](#configuration) for a list of correct locations
* The default [output template](#output-template) is `%(title)s [%(id)s].%(ext)s`. There is no real reason for this change. This was changed before yt-dlp was ever made public and now there are no plans to change it back to `%(title)s-%(id)s.%(ext)s`. Instead, you may use `--compat-options filename`
* The default [format sorting](#sorting-formats) is different from youtube-dl and prefers higher resolution and better codecs rather than higher bitrates. You can use the `--format-sort` option to change this to any order you prefer, or use `--compat-options format-sort` to use youtube-dl's sorting order
* The default format selector is `bv*+ba/b`. This means that if a combined video + audio format that is better than the best video-only format is found, the former will be preferred. Use `-f bv+ba/b` or `--compat-options format-spec` to revert this
* Unlike youtube-dlc, yt-dlp does not allow merging multiple audio/video streams into one file by default (since this conflicts with the use of `-f bv*+ba`). If needed, this feature must be enabled using `--audio-multistreams` and `--video-multistreams`. You can also use `--compat-options multistreams` to enable both
* `--no-abort-on-error` is enabled by default. Use `--abort-on-error` or `--compat-options abort-on-error` to abort on errors instead
* When writing metadata files such as thumbnails, description or infojson, the same information (if available) is also written for playlists. Use `--no-write-playlist-metafiles` or `--compat-options no-playlist-metafiles` to not write these files
* `--add-metadata` attaches the `infojson` to `mkv` files in addition to writing the metadata when used with `--write-info-json`. Use `--no-embed-info-json` or `--compat-options no-attach-info-json` to revert this
* Some metadata are embedded into different fields when using `--add-metadata` as compared to youtube-dl. Most notably, `comment` field contains the `webpage_url` and `synopsis` contains the `description`. You can [use `--parse-metadata`](#modifying-metadata) to modify this to your liking or use `--compat-options embed-metadata` to revert this
* `playlist_index` behaves differently when used with options like `--playlist-reverse` and `--playlist-items`. See [#302](https://github.com/yt-dlp/yt-dlp/issues/302) for details. You can use `--compat-options playlist-index` if you want to keep the earlier behavior
* The output of `-F` is listed in a new format. Use `--compat-options list-formats` to revert this
* Live chats (if available) are considered as subtitles. Use `--sub-langs all,-live_chat` to download all subtitles except live chat. You can also use `--compat-options no-live-chat` to prevent any live chat/danmaku from downloading
* YouTube channel URLs download all uploads of the channel. To download only the videos in a specific tab, pass the tab's URL. If the channel does not show the requested tab, an error will be raised. Also, `/live` URLs raise an error if there are no live videos instead of silently downloading the entire channel. You may use `--compat-options no-youtube-channel-redirect` to revert all these redirections
* Unavailable videos are also listed for YouTube playlists. Use `--compat-options no-youtube-unavailable-videos` to remove this
* The upload dates extracted from YouTube are in UTC [when available](https://github.com/yt-dlp/yt-dlp/blob/89e4d86171c7b7c997c77d4714542e0383bf0db0/yt_dlp/extractor/youtube.py#L3898-L3900). Use `--compat-options no-youtube-prefer-utc-upload-date` to prefer the non-UTC upload date.
* If `ffmpeg` is used as the downloader, the downloading and merging of formats happen in a single step when possible. Use `--compat-options no-direct-merge` to revert this
* Thumbnail embedding in `mp4` is done with mutagen if possible. Use `--compat-options embed-thumbnail-atomicparsley` to force the use of AtomicParsley instead
* Some internal metadata such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
* `certifi` will be used for SSL root certificates, if installed. If you want to use system certificates (e.g. self-signed), use `--compat-options no-certifi`
* yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
* yt-dlp tries to parse the external downloader outputs into the standard progress output if possible (Currently implemented: [~~aria2c~~](https://github.com/yt-dlp/yt-dlp/issues/5931)). You can use `--compat-options no-external-downloader-progress` to get the downloader output as-is
* yt-dlp versions between 2021.09.01 and 2023.01.02 applies `--match-filter` to nested playlists. This was an unintentional side-effect of [8f18ac](https://github.com/yt-dlp/yt-dlp/commit/8f18aca8717bb0dd49054555af8d386e5eda3a88) and is fixed in [d7b460](https://github.com/yt-dlp/yt-dlp/commit/d7b460d0e5fc710950582baed2e3fc616ed98a80). Use `--compat-options playlist-match-filter` to revert this
For ease of use, a few more compat options are available:
* `--compat-options all`: Use all compat options (Do NOT use)
* `--compat-options youtube-dl`: Same as `--compat-options all,-multistreams,-playlist-match-filter`
* `--compat-options youtube-dlc`: Same as `--compat-options all,-no-live-chat,-no-youtube-channel-redirect,-playlist-match-filter`
* `--compat-options 2021`: Same as `--compat-options 2022,no-certifi,filename-sanitization,no-youtube-prefer-utc-upload-date`
* `--compat-options 2022`: Same as `--compat-options playlist-match-filter,no-external-downloader-progress`. Use this to enable all future compat options
# INSTALLATION
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
@@ -183,30 +90,6 @@ # INSTALLATION
You can install yt-dlp using [the binaries](#release-files), [pip](https://pypi.org/project/yt-dlp) or one using a third-party package manager. See [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) for detailed instructions
## UPDATE
You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)
If you [installed with pip](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer their documentation
<a id="update-channels"/>
There are currently two release channels for binaries, `stable` and `nightly`.
`stable` is the default channel, and many of its changes have been tested by users of the nightly channel.
The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also have more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).
When using `--update`/`-U`, a release binary will only update to its current channel.
`--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.
You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful with what repository you are updating to though, there is no verification done for binaries from different repositories.
Example usage:
* `yt-dlp --update-to nightly` change to `nightly` channel and update to its latest release
* `yt-dlp --update-to stable@2023.02.17` upgrade/downgrade to release to `stable` channel tag `2023.02.17`
* `yt-dlp --update-to 2023.01.06` upgrade/downgrade to tag `2023.01.06` if it exists on the current channel
* `yt-dlp --update-to example/yt-dlp@2023.03.01` upgrade/downgrade to the release from the `example/yt-dlp` repository, tag `2023.03.01`
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
## RELEASE FILES
@@ -222,10 +105,9 @@ #### Alternatives
File|Description
:---|:---
[yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Vista SP2+) standalone x86 (32-bit) binary
[yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Win7 SP1+) standalone x86 (32-bit) binary
[yt-dlp_min.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_min.exe)|Windows (Win7 SP1+) standalone x64 binary built with `py2exe`<br/> ([Not recommended](#standalone-py2exe-builds-windows))
[yt-dlp_linux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux)|Linux standalone x64 binary
[yt-dlp_linux.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux.zip)|Unpackaged Linux executable (no auto-update)
[yt-dlp_linux_armv7l](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_armv7l)|Linux standalone armv7l (32-bit) binary
[yt-dlp_linux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64)|Linux standalone aarch64 (64-bit) binary
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
@@ -253,8 +135,45 @@ #### Misc
**Note**: The manpages, shell completion (autocomplete) files etc. are available inside the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
## UPDATE
You can use `yt-dlp -U` to update if you are using the [release binaries](#release-files)
If you [installed with pip](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer to their documentation
<a id="update-channels"></a>
There are currently three release channels for binaries: `stable`, `nightly` and `master`.
* `stable` is the default channel, and many of its changes have been tested by users of the `nightly` and `master` channels.
* The `nightly` channel has releases scheduled to build every day around midnight UTC, for a snapshot of the project's new patches and changes. This is the **recommended channel for regular users** of yt-dlp. The `nightly` releases are available from [yt-dlp/yt-dlp-nightly-builds](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases) or as development releases of the `yt-dlp` PyPI package (which can be installed with pip's `--pre` flag).
* The `master` channel features releases that are built after each push to the master branch, and these will have the very latest fixes and additions, but may also be more prone to regressions. They are available from [yt-dlp/yt-dlp-master-builds](https://github.com/yt-dlp/yt-dlp-master-builds/releases).
When using `--update`/`-U`, a release binary will only update to its current channel.
`--update-to CHANNEL` can be used to switch to a different channel when a newer version is available. `--update-to [CHANNEL@]TAG` can also be used to upgrade or downgrade to specific tags from a channel.
You may also use `--update-to <repository>` (`<owner>/<repository>`) to update to a channel on a completely different repository. Be careful with what repository you are updating to though, there is no verification done for binaries from different repositories.
Example usage:
* `yt-dlp --update-to master` switch to the `master` channel and update to its latest release
* `yt-dlp --update-to stable@2023.07.06` upgrade/downgrade to release to `stable` channel tag `2023.07.06`
* `yt-dlp --update-to 2023.10.07` upgrade/downgrade to tag `2023.10.07` if it exists on the current channel
* `yt-dlp --update-to example/yt-dlp@2023.09.24` upgrade/downgrade to the release from the `example/yt-dlp` repository, tag `2023.09.24`
**Important**: Any user experiencing an issue with the `stable` release should install or update to the `nightly` release before submitting a bug report:
```
# To update to nightly from stable executable/binary:
yt-dlp --update-to nightly
# To install nightly with pip:
python3 -m pip install -U --pre "yt-dlp[default]"
```
## DEPENDENCIES
Python versions 3.8+ (CPython and PyPy) are supported. Other versions and implementations may or may not work correctly.
### Strongly recommended
* [**ffmpeg** and **ffprobe**](https://www.ffmpeg.org) - Required for [merging separate video and audio files](#format-selection), as well as for various [post-processing](#post-processing-options) tasks. License [depends on the build](https://www.ffmpeg.org/legal.html)
There are bugs in ffmpeg that cause various issues when used alongside yt-dlp. Since ffmpeg is such an important dependency, we provide [custom builds](https://github.com/yt-dlp/FFmpeg-Builds#ffmpeg-static-auto-builds) with patches for some of these issues at [yt-dlp/FFmpeg-Builds](https://github.com/yt-dlp/FFmpeg-Builds). See [the readme](https://github.com/yt-dlp/FFmpeg-Builds#patches-applied) for details on the specific issues solved by these builds
**Important**: What you need is ffmpeg *binary*, **NOT** [the Python package of the same name](https://pypi.org/project/ffmpeg)
### Networking
* [**certifi**](https://github.com/certifi/python-certifi)\* - Provides Mozilla's root certificate bundle. Licensed under [MPLv2](https://github.com/certifi/python-certifi/blob/master/LICENSE)
* [**brotli**](https://github.com/google/brotli)\* or [**brotlicffi**](https://github.com/python-hyper/brotlicffi) - [Brotli](https://en.wikipedia.org/wiki/Brotli) content encoding support. Both licensed under MIT <sup>[1](https://github.com/google/brotli/blob/master/LICENSE) [2](https://github.com/python-hyper/brotlicffi/blob/master/LICENSE) </sup>
* [**websockets**](https://github.com/aaugustin/websockets)\* - For downloading over websocket. Licensed under [BSD-3-Clause](https://github.com/aaugustin/websockets/blob/main/LICENSE)
* [**requests**](https://github.com/psf/requests)\* - HTTP library. For HTTPS proxy and persistent connections support. Licensed under [Apache-2.0](https://github.com/psf/requests/blob/main/LICENSE)
#### Impersonation
The following provide support for impersonating browser requests. This may be required for some sites that employ TLS fingerprinting.
* [**curl_cffi**](https://github.com/yifeikong/curl_cffi) (recommended) - Python binding for [curl-impersonate](https://github.com/lwthiker/curl-impersonate). Provides impersonation targets for Chrome, Edge and Safari. Licensed under [MIT](https://github.com/yifeikong/curl_cffi/blob/main/LICENSE)
* Can be installed with the `curl-cffi` group, e.g. `pip install "yt-dlp[default,curl-cffi]"`
* Currently only included in `yt-dlp.exe` and `yt-dlp_macos` builds
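As a quick sanity check (illustrative; `URL` stands in for an actual video URL), you can list the impersonation targets your installation supports and then force one:

```
# List available impersonation targets
yt-dlp --list-impersonate-targets

# Impersonate Chrome for all requests made during this download
yt-dlp --impersonate chrome URL
```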
### Metadata
* [**mutagen**](https://github.com/quodlibet/mutagen)\* - For `--embed-thumbnail` in certain formats. Licensed under [GPLv2+](https://github.com/quodlibet/mutagen/blob/master/COPYING)
* [**AtomicParsley**](https://github.com/wez/atomicparsley) - For `--embed-thumbnail` in `mp4`/`m4a` files when `mutagen`/`ffmpeg` cannot. Licensed under [GPLv2+](https://github.com/wez/atomicparsley/blob/master/COPYING)
* [**xattr**](https://github.com/xattr/xattr), [**pyxattr**](https://github.com/iustin/pyxattr) or [**setfattr**](http://savannah.nongnu.org/projects/attr) - For writing xattr metadata (`--xattr`) on **Mac** and **BSD**. Licensed under [MIT](https://github.com/xattr/xattr/blob/master/LICENSE.txt), [LGPL2.1](https://github.com/iustin/pyxattr/blob/master/COPYING) and [GPLv2+](http://git.savannah.nongnu.org/cgit/attr.git/tree/doc/COPYING) respectively
### Misc
* [**pycryptodomex**](https://github.com/Legrandin/pycryptodome)\* - For decrypting AES-128 HLS streams and various other data. Licensed under [BSD-2-Clause](https://github.com/Legrandin/pycryptodome/blob/master/LICENSE.rst)
* [**phantomjs**](https://github.com/ariya/phantomjs) - Used in extractors where javascript needs to be run. Licensed under [BSD-3-Clause](https://github.com/ariya/phantomjs/blob/master/LICENSE.BSD)
* [**secretstorage**](https://github.com/mitya57/secretstorage)\* - For `--cookies-from-browser` to access the **Gnome** keyring while decrypting cookies of **Chromium**-based browsers on **Linux**. Licensed under [BSD-3-Clause](https://github.com/mitya57/secretstorage/blob/master/LICENSE)
* Any external downloader that you want to use with `--downloader`
### Deprecated
## COMPILE
### Standalone PyInstaller Builds
To build the standalone executable, you must have Python and `pyinstaller` (plus any of yt-dlp's [optional dependencies](#dependencies) if needed). The executable will be built for the same CPU architecture as the Python used.
You can run the following commands:
```
python3 devscripts/install_deps.py --include pyinstaller
python3 devscripts/make_lazy_extractors.py
python3 -m bundle.pyinstaller
```
On some systems, you may need to use `py` or `python` instead of `python3`.
`python -m bundle.pyinstaller` accepts any arguments that can be passed to `pyinstaller`, such as `--onefile/-F` or `--onedir/-D`, which is further [documented here](https://pyinstaller.org/en/stable/usage.html#what-to-generate).
**Note**: Pyinstaller versions below 4.4 [do not support](https://github.com/pyinstaller/pyinstaller#requirements-and-tested-platforms) Python installed from the Windows store without using a virtual environment.
**Important**: Running `pyinstaller` directly **instead of** using `python -m bundle.pyinstaller` is **not** officially supported. This may or may not work correctly.
### Platform-independent Binary (UNIX)
You will need the build tools `python` (3.8+), `zip`, `make` (GNU), `pandoc`\* and `pytest`\*.
After installing these, simply run `make`.
### Standalone Py2Exe Builds (Windows)
While we provide the option to build with [py2exe](https://www.py2exe.org), it is recommended to build [using PyInstaller](#standalone-pyinstaller-builds) instead since the py2exe builds **cannot contain `pycryptodomex`/`certifi`/`requests` and need VC++14** on the target computer to run.
If you wish to build it anyway, install Python (if it is not already installed) and you can run the following commands:
```
py devscripts/install_deps.py --include py2exe
py devscripts/make_lazy_extractors.py
py -m bundle.py2exe
```
### Related scripts
* **`devscripts/install_deps.py`** - Install dependencies for yt-dlp.
* **`devscripts/update-version.py`** - Update the version number based on the current date.
* **`devscripts/set-variant.py`** - Set the build variant of the executable.
* **`devscripts/make_changelog.py`** - Create a markdown changelog using short commit messages and update `CONTRIBUTORS` file.
* **`devscripts/make_lazy_extractors.py`** - Create lazy extractors. Running this before building the binaries (any variant) will improve their startup performance. Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS=1` if you wish to forcefully disable lazy extractor loading.
## General Options:
CHANNEL can be a repository as well. CHANNEL
and TAG default to "stable" and "latest"
respectively if omitted; See "UPDATE" for
details. Supported channels: stable,
nightly, master
-i, --ignore-errors Ignore download and postprocessing errors.
The download will be considered successful
even if the postprocessing fails
URLs, but emits an error if this is not
possible instead of searching
--ignore-config Don't load any more configuration files
except those given to --config-locations.
For backward compatibility, if this option
is found inside the system configuration
file, the user configuration is not loaded.
## Network Options:
direct connection
--socket-timeout SECONDS Time to wait before giving up, in seconds
--source-address IP Client-side IP address to bind to
--impersonate CLIENT[:OS] Client to impersonate for requests. E.g.
chrome, chrome-110, chrome:windows-10. Pass
--impersonate="" to impersonate any client.
Note that forcing impersonation for all
requests may have a detrimental impact on
download speed and stability
--list-impersonate-targets List available clients to impersonate.
-4, --force-ipv4 Make all connections via IPv4
-6, --force-ipv6 Make all connections via IPv6
--enable-file-urls Enable file:// URLs. This is disabled by
## Video Selection:
is not present, and "&" to check multiple
conditions. Use a "\" to escape "&" or
quotes if needed. If used multiple times,
the filter matches if at least one of the
conditions is met. E.g. --match-filter
!is_live --match-filter "like_count>?100 &
description~='(?i)\bcats \& dogs\b'" matches
only videos that are not live OR those that
--max-downloads NUMBER Abort after downloading NUMBER files
--break-on-existing Stop the download process when encountering
a file that is in the archive
--no-break-on-existing Do not stop the download process when
encountering a file that is in the archive
(default)
--break-per-input Alters --max-downloads, --break-on-existing,
--break-match-filter, and autonumber to
reset per input URL
## Filesystem Options:
-o, --output [TYPES:]TEMPLATE Output filename template; see "OUTPUT
TEMPLATE" for details
--output-na-placeholder TEXT Placeholder for unavailable fields in
"OUTPUT TEMPLATE" (default: "NA")
--output (default: "NA")
--restrict-filenames Restrict filenames to only ASCII characters,
and avoid "&" and spaces in filenames
--no-restrict-filenames Allow Unicode characters, "&" and spaces in
The name of the browser to load cookies
from. Currently supported browsers are:
brave, chrome, chromium, edge, firefox,
opera, safari, vivaldi, whale. Optionally,
the KEYRING used for decrypting Chromium
cookies on Linux, the name/path of the
PROFILE to load cookies from, and the
CONTAINER name (if Firefox) ("none" for no
container) can be given with their
respective separators. By default, all
containers of the most recently accessed
profile are used. Currently supported
keyrings are: basictext, gnomekeyring,
kwallet, kwallet5, kwallet6
--no-cookies-from-browser Do not load cookies from browser (default)
--cache-dir DIR Location in the filesystem where yt-dlp can
store some downloaded information (such as
## Verbosity and Simulation Options:
accessible under "progress" key. E.g.
--console-title --progress-template
"download-title:%(info.id)s-%(progress.eta)s"
--progress-delta SECONDS Time between progress output (default: 0)
-v, --verbose Print various debugging information
--dump-pages Print downloaded pages encoded using base64
to debug problems (very verbose)
## Authentication Options:
Defaults to ~/.netrc
--netrc-cmd NETRC_CMD Command to execute to get the credentials
for an extractor.
--video-password PASSWORD Video-specific password
--ap-mso MSO Adobe Pass multiple-system operator (TV
provider) identifier, use --ap-list-mso for
a list of available MSOs
## Post-Processing Options:
--print/--output), "before_dl" (before each
video download), "post_process" (after each
video download; default), "after_move"
(after moving video file to its final
locations), "after_video" (after downloading
and processing all formats of a video), or
"playlist" (at end of playlist). This option
# CONFIGURATION
You can configure yt-dlp by placing any supported command line option to a configuration file. The configuration is loaded from the following locations:
1. **Main Configuration**:
* The file given to `--config-location`
1. **Portable Configuration**: (Recommended for portable installations)
* If using a binary, `yt-dlp.conf` in the same directory as the binary
* If running from source-code, `yt-dlp.conf` in the parent directory of `yt_dlp`
1. **Home Configuration**:
* `yt-dlp.conf` in the home path given to `-P`
* If `-P` is not given, the current directory is searched
1. **User Configuration**:
* `${XDG_CONFIG_HOME}/yt-dlp.conf`
* `/etc/yt-dlp/config`
* `/etc/yt-dlp/config.txt`
E.g. with the following configuration file, yt-dlp will always extract the audio, not copy the mtime, use a proxy and save all videos under `YouTube` directory in your home directory:
```
# Lines starting with # are comments
# Save all videos under YouTube directory in your home directory
-o ~/YouTube/%(title)s.%(ext)s
```
**Note**: Options in configuration file are just the same options aka switches used in regular command line calls; thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`. They must also be quoted when necessary, as if it were a UNIX shell.
You can use `--ignore-config` if you want to disable all configuration files for a particular yt-dlp run. If `--ignore-config` is found inside any configuration file, no further configuration will be loaded. For example, having the option in the portable configuration file prevents loading of home, user, and system configurations. Additionally, (for backward compatibility) if `--ignore-config` is found inside the system configuration file, the user configuration is not loaded.
### Authentication with netrc
You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with `--username` and `--password`) in order not to pass credentials as command line arguments on every yt-dlp execution and prevent tracking plain text passwords in the shell command history. You can achieve this using a [`.netrc` file](https://stackoverflow.com/tags/.netrc/info) on a per-extractor basis. For that, you will need to create a `.netrc` file in `--netrc-location` and restrict permissions to read/write by only you:
```
touch ${HOME}/.netrc
chmod a-rwx,u+rw ${HOME}/.netrc
```
After that, you can add credentials for an extractor in the following format, where *extractor* is the name of the extractor in lowercase:
```
machine <extractor> login <username> password <password>
```
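For example, with hypothetical credentials, entries for the `youtube` and `twitch` extractors would look like this:

```
machine youtube login myaccount@gmail.com password my_youtube_password
machine twitch login my_twitch_account_name password my_twitch_password
```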
# OUTPUT TEMPLATE
The field names themselves (the part inside the parentheses) can also have some special formatting:
1. **Object traversal**: The dictionaries and lists available in metadata can be traversed by using a dot `.` separator; e.g. `%(tags.0)s`, `%(subtitles.en.-1.ext)s`. You can do Python slicing with colon `:`; E.g. `%(id.3:7)s`, `%(id.6:2:-1)s`, `%(formats.:.format_id)s`. Curly braces `{}` can be used to build dictionaries with only specific keys; e.g. `%(formats.:.{format_id,height})#j`. An empty field name `%()s` refers to the entire infodict; e.g. `%(.{id,title})s`. Note that the fields that become available using this method are not listed below. Use `-j` to see such fields
1. **Arithmetic**: Simple arithmetic can be done on numeric fields using `+`, `-` and `*`. E.g. `%(playlist_index+10)03d`, `%(n_entries+1-playlist_index)d`
1. **Date/time Formatting**: Date/time fields can be formatted according to [strftime formatting](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes) by specifying it separated from the field name using a `>`. E.g. `%(duration>%H-%M-%S)s`, `%(upload_date>%Y-%m-%d)s`, `%(epoch-3600>%H-%M-%S)s`
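As an illustrative sketch (with `URL` as a placeholder), you can preview how these expansions resolve using `--print`, which implies `--simulate`:

```
# Object traversal and dictionary building
yt-dlp --print "%(.{id,title})#j" URL

# Date/time formatting
yt-dlp --print "%(upload_date>%Y-%m-%d)s" URL
```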
Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `link`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`, `pl_video`. E.g. `-o "%(title)s.%(ext)s" -o "thumbnail:%(title)s\%(title)s.%(ext)s"` will put the thumbnails in a folder with the same name as the video. If any of the templates is empty, that type of file will not be written. E.g. `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.
<a id="outtmpl-postprocess-note"/>
<a id="outtmpl-postprocess-note"></a>
**Note**: Due to post-processing (i.e. merging etc.), the actual output filename might differ. Use `--print after_move:filepath` to get the name after all post-processing is complete.
- `description` (string): The description of the video
- `display_id` (string): An alternative identifier for the video
- `uploader` (string): Full name of the video uploader
- `uploader_id` (string): Nickname or id of the video uploader
- `uploader_url` (string): URL to the video uploader's profile
- `license` (string): License name the video is licensed under
- `creators` (list): The creators of the video
- `creator` (string): The creators of the video; comma-separated
- `timestamp` (numeric): UNIX timestamp of the moment the video became available
- `upload_date` (string): Video upload date in UTC (YYYYMMDD)
- `release_timestamp` (numeric): UNIX timestamp of the moment the video was released
- `release_date` (string): The date (YYYYMMDD) when the video was released in UTC
- `release_year` (numeric): Year (YYYY) when the video or album was released
- `modified_timestamp` (numeric): UNIX timestamp of the moment the video was last modified
- `modified_date` (string): The date (YYYYMMDD) when the video was last modified in UTC
- `channel` (string): Full name of the channel the video is uploaded on
- `channel_id` (string): Id of the channel
- `channel_url` (string): URL of the channel
- `channel_follower_count` (numeric): Number of followers of the channel
- `channel_is_verified` (boolean): Whether the channel is verified on the platform
- `location` (string): Physical location where the video was filmed
- `was_live` (boolean): Whether this video was originally a live stream
- `playable_in_embed` (string): Whether this video is allowed to play in embedded players on other sites
- `availability` (string): Whether the video is "private", "premium_only", "subscriber_only", "needs_auth", "unlisted" or "public"
- `media_type` (string): The type of media as classified by the site, e.g. "episode", "clip", "trailer"
- `start_time` (numeric): Time in seconds where the reproduction should start, as specified in the URL
- `end_time` (numeric): Time in seconds where the reproduction should end, as specified in the URL
- `extractor` (string): Name of the extractor
- `extractor_key` (string): Key name of the extractor
- `epoch` (numeric): Unix epoch of when the information extraction was completed
- `autonumber` (numeric): Number that will be increased with each download, starting at `--autonumber-start`, padded with leading zeros to 5 digits
- `video_autonumber` (numeric): Number that will be increased with each video
- `n_entries` (numeric): Total number of extracted items in the playlist
- `playlist_id` (string): Identifier of the playlist that contains the video
- `playlist_title` (string): Name of the playlist that contains the video
- `playlist` (string): `playlist_title` if available or else `playlist_id`
- `playlist_count` (numeric): Total number of items in the playlist. May not be known if entire playlist is not extracted
- `playlist_index` (numeric): Index of the video in the playlist padded with leading zeros according to the final index
- `playlist_autonumber` (numeric): Position of the video in the playlist download queue padded with leading zeros according to the total length of the playlist
- `playlist_uploader` (string): Full name of the playlist uploader
- `playlist_uploader_id` (string): Nickname or id of the playlist uploader
- `playlist_channel` (string): Display name of the channel that uploaded the playlist
- `playlist_channel_id` (string): Identifier of the channel that uploaded the playlist
- `webpage_url` (string): A URL to the video webpage which, if given to yt-dlp, should yield the same result again
- `webpage_url_basename` (string): The basename of the webpage URL
- `webpage_url_domain` (string): The domain of the webpage URL
- `original_url` (string): The URL given by the user (or same as `webpage_url` for playlist entries)
- `categories` (list): List of categories the video belongs to
- `tags` (list): List of tags assigned to the video
- `cast` (list): List of cast members
All the fields in [Filtering Formats](#filtering-formats) can also be used
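For instance, a template drawing on a few of the fields above (a sketch; `URL` is a placeholder):

```
# Organize downloads by uploader, keeping the title and ID in the filename
yt-dlp -o "%(uploader)s/%(title)s [%(id)s].%(ext)s" URL
```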
Available for the video that belongs to some logical chapter or section:
- `chapter_number` (numeric): Number of the chapter the video belongs to
- `chapter_id` (string): Id of the chapter the video belongs to
Available for the video that is an episode of some series or program:
- `series` (string): Title of the series or program the video episode belongs to
- `series_id` (string): Id of the series or program the video episode belongs to
- `season` (string): Title of the season the video episode belongs to
- `season_number` (numeric): Number of the season the video episode belongs to
- `season_id` (string): Id of the season the video episode belongs to
Available for the media that is a track or a part of a music album:
- `track` (string): Title of the track
- `track_number` (numeric): Number of the track within an album or a disc
- `track_id` (string): Id of the track
- `artists` (list): Artist(s) of the track
- `artist` (string): Artist(s) of the track; comma-separated
- `genres` (list): Genre(s) of the track
- `genre` (string): Genre(s) of the track; comma-separated
- `composers` (list): Composer(s) of the piece
- `composer` (string): Composer(s) of the piece; comma-separated
- `album` (string): Title of the album the track belongs to
- `album_type` (string): Type of the album
- `album_artists` (list): All artists that appeared on the album
- `album_artist` (string): All artists that appeared on the album; comma-separated
- `disc_number` (numeric): Number of the disc or other physical medium the track belongs to
- `release_year` (numeric): Year (YYYY) when the album was released
Available only when using `--download-sections` and for `chapter:` prefix when using `--split-chapters` for videos with internal chapters:
Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. E.g. for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `yt-dlp test video` and id `BaW_jenozKc`, this will result in a `yt-dlp test video-BaW_jenozKc.mp4` file created in the current directory.
**Note**: Some of the sequences are not guaranteed to be present, since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with placeholder value provided with `--output-na-placeholder` (`NA` by default).
**Tip**: Look at the `-j` output to identify which fields are available for the particular URL
# FORMAT SELECTION
- `all`: Select **all formats** separately
- `mergeall`: Select and **merge all formats** (Must be used with `--audio-multistreams`, `--video-multistreams` or both)
- `b*`, `best*`: Select the best quality format that **contains either** a video or an audio or both (i.e.; `vcodec!=none or acodec!=none`)
- `b`, `best`: Select the best quality format that **contains both** video and audio. Equivalent to `best*[vcodec!=none][acodec!=none]`
- `bv`, `bestvideo`: Select the best quality **video-only** format. Equivalent to `best*[acodec=none]`
- `bv*`, `bestvideo*`: Select the best quality format that **contains video**. It may also contain audio. Equivalent to `best*[vcodec!=none]`
- `wa`, `worstaudio`: Select the worst quality audio-only format. Equivalent to `worst*[vcodec=none]`
- `wa*`, `worstaudio*`: Select the worst quality format that contains audio. It may also contain video. Equivalent to `worst*[acodec!=none]`
For example, to download the worst quality video-only format you can use `-f worstvideo`. It is, however, recommended not to use `worst` and related options. When your format selector is `worst`, the format which is worst in all respects is selected. Most of the time, what you actually want is the video with the smallest filesize instead. So it is generally better to use `-S +size` or more rigorously, `-S +size,+br,+res,+fps` instead of `-f worst`. See [Sorting Formats](#sorting-formats) for more details.
You can select the n'th best format of a type by using `best<type>.<n>`. For example, `best.2` will select the 2nd best combined format. Similarly, `bv*.3` will select the 3rd best format that contains a video stream.
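For example (illustrative; `URL` is a placeholder):

```
# Download the 2nd best combined format
yt-dlp -f "best.2" URL

# Merge the 3rd best video-containing format with the best audio-only format
yt-dlp -f "bv*.3+ba" URL
```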
## Filtering Formats
You can also filter the video formats by putting a condition in brackets, as in `-f "best[height=720]"` (or `-f "[filesize>10M]"` since filters without a selector are interpreted as `best`).
The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `>=`, `=` (equals), `!=` (not equals):
- `width`: Width of the video, if known
- `height`: Height of the video, if known
- `aspect_ratio`: Aspect ratio of the video, if known
- `tbr`: Average bitrate of audio and video in [kbps](## "1000 bits/sec")
- `abr`: Average audio bitrate in [kbps](## "1000 bits/sec")
- `vbr`: Average video bitrate in [kbps](## "1000 bits/sec")
- `asr`: Audio sampling rate in Hertz
- `fps`: Frame rate
- `audio_channels`: The number of audio channels
Any string comparison may be prefixed with negation `!` in order to produce an opposite comparison, e.g. `!*=` (does not contain). The comparand of a string comparison needs to be quoted with either double or single quotes if it contains spaces or special characters other than `._-`.
**Note**: None of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by the particular extractor, i.e. the metadata offered by the website. Any other field made available by the extractor can also be used for filtering.
Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "bv[height<=?720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 kbps. You can also use the filters with `all` to download all formats that satisfy the filter, e.g. `-f "all[vcodec=none]"` selects all audio-only formats.
Format selectors can also be grouped using parentheses; e.g. `-f "(mp4,webm)[height<480]"` will download the best pre-merged mp4 and webm formats with a height lower than 480.
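Putting filters and grouping together (a sketch; `URL` is a placeholder):

```
# Best video up to 720p (or unknown height) with a bitrate above 500 kbps, merged with best audio
yt-dlp -f "bv[height<=?720][tbr>500]+ba" URL

# Best pre-merged mp4/webm below 480p
yt-dlp -f "(mp4,webm)[height<480]" URL
```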
## Sorting Formats
- `aext`: Audio Extension (`m4a` > `aac` > `mp3` > `ogg` > `opus` > `webm` > other). If `--prefer-free-formats` is used, the order changes to `ogg` > `opus` > `webm` > `mp3` > `m4a` > `aac`
- `ext`: Equivalent to `vext,aext`
- `filesize`: Exact filesize, if known in advance
- `fs_approx`: Approximate filesize
- `size`: Exact filesize if available, otherwise approximate filesize
- `height`: Height of video
- `width`: Width of video
- `fps`: Framerate of video
- `hdr`: The dynamic range of the video (`DV` > `HDR12` > `HDR10+` > `HDR10` > `HLG` > `SDR`)
- `channels`: The number of audio channels
- `tbr`: Total average bitrate in [kbps](## "1000 bits/sec")
- `vbr`: Average video bitrate in [kbps](## "1000 bits/sec")
- `abr`: Average audio bitrate in [kbps](## "1000 bits/sec")
- `br`: Average bitrate in [kbps](## "1000 bits/sec"), `tbr`/`vbr`/`abr`
- `asr`: Audio sample rate in Hz
**Deprecation warning**: Many of these fields have (currently undocumented) aliases, that may be removed in a future version. It is recommended to use only the documented field names.
All fields, unless specified otherwise, are sorted in descending order. To reverse this, prefix the field with a `+`. E.g. `+res` prefers format with the smallest resolution. Additionally, you can suffix a preferred value for the fields, separated by a `:`. E.g. `res:720` prefers larger videos, but no larger than 720p and the smallest video if there are no videos less than 720p. For `codec` and `ext`, you can provide two preferred values, the first for video and the second for audio. E.g. `+codec:avc:m4a` (equivalent to `+vcodec:avc,+acodec:m4a`) sets the video codec preference to `h264` > `h265` > `vp9` > `vp9.2` > `av01` > `vp8` > `h263` > `theora` and audio codec preference to `mp4a` > `aac` > `vorbis` > `opus` > `mp3` > `ac3` > `dts`. You can also make the sorting prefer the nearest values to the provided by using `~` as the delimiter. E.g. `filesize~1G` prefers the format with filesize closest to 1 GiB.
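For example (illustrative; `URL` is a placeholder):

```
# Prefer the largest resolution that is no larger than 720p
yt-dlp -S "res:720" URL

# Prefer the format whose filesize is closest to 1 GiB
yt-dlp -S "filesize~1G" URL
```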
The fields `hasvid` and `ie_pref` are always given highest priority in sorting, irrespective of the user-defined order. This behavior can be changed by using `--format-sort-force`. Apart from these, the default order used is: `lang,quality,res,fps,hdr:12,vcodec:vp9.2,channels,acodec,size,br,asr,proto,ext,hasaud,source,id`. The extractors may override this default order, but they cannot override the user-provided order.
Note that the default has `vcodec:vp9.2`; i.e. `av1` is not preferred. Similarly, the default for hdr is `hdr:12`; i.e. Dolby Vision is not preferred. These choices are made since DV and AV1 formats are not yet fully compatible with most devices. This may be changed in the future as more devices become capable of smoothly playing back these formats.
If your format selector is `worst`, the last item is selected after sorting. This means it will select the format that is worst in all respects. Most of the time, what you actually want is the video with the smallest filesize instead. So it is generally better to use `-f best -S +size,+br,+res,+fps`.
# MODIFYING METADATA
The metadata obtained by the extractors can be modified by using `--parse-metadata` and `--replace-in-metadata`
`--replace-in-metadata FIELDS REGEX REPLACE` is used to replace text in any metadata field using [Python regular expression](https://docs.python.org/3/library/re.html#regular-expression-syntax). [Backreferences](https://docs.python.org/3/library/re.html?highlight=backreferences#re.sub) can be used in the replace string for advanced use.
The general syntax of `--parse-metadata FROM:TO` is to give the name of a field or an [output template](#output-template) to extract data from, and the format to interpret it as, separated by a colon `:`. Either a [Python regular expression](https://docs.python.org/3/library/re.html#regular-expression-syntax) with named capture groups, a single field name, or a similar syntax to the [output template](#output-template) (only `%(field)s` formatting is supported) can be used for `TO`. The option can be used multiple times to parse and modify various fields.
Note that these options preserve their relative order, allowing replacements to be made in parsed fields and vice versa. Also, any field thus created can be used in the [output template](#output-template) and will also affect the media file's metadata added when using `--embed-metadata`.
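A couple of illustrative invocations (the fields and regexes here are just examples; `URL` is a placeholder):

```
# Interpret the title as "Artist - Title"
yt-dlp --parse-metadata "title:%(artist)s - %(title)s" URL

# Replace all spaces and "_" in title and uploader with a "-"
yt-dlp --replace-in-metadata "title,uploader" "[ _]" "-" URL
```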
For reference, these are the fields yt-dlp adds by default to the file metadata:

Metadata fields | From
:--- | :---
`description`, `synopsis` | `description`
`purl`, `comment` | `webpage_url`
`track` | `track_number`
`artist` | `artist`, `artists`, `creator`, `creators`, `uploader` or `uploader_id`
`composer` | `composer` or `composers`
`genre` | `genre` or `genres`
`album` | `album`
`album_artist` | `album_artist` or `album_artists`
`disc` | `disc_number`
`show` | `series`
`season_number` | `season_number`
# EXTRACTOR ARGUMENTS
Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=android_embedded,web;formats=incomplete" --extractor-args "funimation:version=uncut"`
Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client` becomes `youtube:player_client`
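For instance, arguments for several extractors can be combined in one run (an illustrative sketch; `URL` is a placeholder):

```
yt-dlp --extractor-args "youtube:player-client=web" --extractor-args "soundcloud:formats=*_mp3" URL
```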
#### youtube
* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
* `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music`, `_embedded`, `_embedscreen`, `_creator` (e.g. `web_embedded`); and `mediaconnect`, `mweb`, `mweb_embedscreen` and `tv_embedded` (agegate bypass) with no variants. By default, `ios,web` is used, but `tv_embedded` and `creator` variants are added as required for age-gated videos. Similarly, the music variants are added for `music.youtube.com` urls. The `android` clients will always be given lowest priority since their formats are broken. You can use `all` to use all the clients, and `default` for the default clients.
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
* `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
* `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
* E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
* `formats`: Change the types of formats to return. `dashy` (convert HTTP to DASH), `duplicate` (identical content but different URLs or protocol; includes `dashy`), `incomplete` (cannot be downloaded completely - live dash and post-live m3u8)
* `innertube_host`: Innertube API host to use for all API requests; e.g. `studio.youtube.com`, `youtubei.googleapis.com`. Note that cookies exported from one subdomain will not work on others
* `innertube_key`: Innertube API key to use for all API requests
* `raise_incomplete_data`: `Incomplete Data Received` raises an error instead of reporting a warning
#### youtubetab (YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)
* `approximate_date`: Extract approximate `upload_date` and `timestamp` in flat-playlist. This may cause date-based filters to be slightly off
#### generic
* `fragment_query`: Passthrough any query in mpd/m3u8 manifest URLs to their fragments if no value is provided, or else apply the query string given as `fragment_query=VALUE`. Note that if the stream has an HLS AES-128 key, then the query parameters will be passed to the key URI as well, unless the `key_query` extractor-arg is passed, or unless an external key URI is provided via the `hls_key` extractor-arg. Does not apply to ffmpeg
* `variant_query`: Passthrough the master m3u8 URL query to its variant playlist URLs if no value is provided, or else apply the query string given as `variant_query=VALUE`
* `key_query`: Passthrough the master m3u8 URL query to its HLS AES-128 decryption key URI if no value is provided, or else apply the query string given as `key_query=VALUE`. Note that this will have no effect if the key URI is provided via the `hls_key` extractor-arg. Does not apply to ffmpeg
* `hls_key`: An HLS AES-128 key URI *or* key (as hex), and optionally the IV (as hex), in the form of `(URI|KEY)[,IV]`; e.g. `generic:hls_key=ABCDEF1234567980,0xFEDCBA0987654321`. Passing any of these values will force usage of the native HLS downloader and override the corresponding values found in the m3u8 playlist
* `is_live`: Bypass live HLS detection and manually set `live_status` - a value of `false` will set `not_live`, any other value (or no value) will set `is_live`
#### funimation
* `version`: The video version to extract - `uncut` or `simulcast`
#### crunchyrollbeta (Crunchyroll)
* `format`: Which stream type(s) to extract (default: `adaptive_hls`). Potentially useful values include `adaptive_hls`, `adaptive_dash`, `vo_adaptive_hls`, `vo_adaptive_dash`, `download_hls`, `download_dash`, `multitrack_adaptive_hls_v2`
* `hardsub`: One or more hardsub versions to extract (in order of preference), or `all` (default: `None` = no hardsubs will be extracted), e.g. `crunchyrollbeta:hardsub=en-US,de-DE`
#### vikichannel
* `video_types`: Types of videos to download - one or more of `episodes`, `movies`, `clips`, `trailers`
#### hotstar
* `vcodec`: vcodec to ignore - one or more of `h264`, `h265`, `dvh265`
* `dr`: dynamic range to ignore - one or more of `sdr`, `hdr10`, `dv`
#### niconicochannelplus
* `max_comments`: Maximum number of comments to extract - default is `120`
#### tiktok
* `api_hostname`: Hostname to use for mobile API calls, e.g. `api22-normal-c-alisg.tiktokv.com`
* `app_name`: Default app name to use with mobile API calls, e.g. `trill`
* `app_version`: Default app version to use with mobile API calls - should be set along with `manifest_app_version`, e.g. `34.1.2`
* `manifest_app_version`: Default numeric app version to use with mobile API calls, e.g. `2023401020`
* `aid`: Default app ID to use with mobile API calls, e.g. `1180`
* `app_info`: Enable mobile API extraction with one or more app info strings in the format of `<iid>/[app_name]/[app_version]/[manifest_app_version]/[aid]`, where `iid` is the unique app install ID. `iid` is the only required value; all other values and their `/` separators can be omitted, e.g. `tiktok:app_info=1234567890123456789` or `tiktok:app_info=123,456/trill///1180,789//34.0.1/340001`
* `device_id`: Enable mobile API extraction with a genuine device ID to be used with mobile API calls. Default is a random 19-digit string
#### rokfinchannel
* `tab`: Which tab to download - one of `new`, `top`, `videos`, `podcasts`, `streams`, `stacks`
#### twitter
* `api`: Select one of `graphql` (default), `legacy` or `syndication` as the API for tweet extraction. Has no effect if logged in
#### stacommu, wrestleuniverse
* `device_id`: UUID value assigned by the website and used to enforce device limits for paid livestream content. Can be found in browser local storage
#### twitch
#### nhkradirulive (NHK らじる★らじる LIVE)
* `area`: Which regional variation to extract. Valid areas are: `sapporo`, `sendai`, `tokyo`, `nagoya`, `osaka`, `hiroshima`, `matsuyama`, `fukuoka`. Defaults to `tokyo`
#### nflplusreplay
* `type`: Type(s) of game replays to extract. Valid types are: `full_game`, `full_game_spanish`, `condensed_game` and `all_22`. You can use `all` to extract all available replay types, which is the default
#### jiocinema
* `refresh_token`: The `refreshToken` UUID from browser local storage can be passed to extend the life of your login session when logging in with `token` as username and the `accessToken` from browser local storage as password
#### jiosaavn
* `bitrate`: Audio bitrates to request. One or more of `16`, `32`, `64`, `128`, `320`. Default is `128,320`
#### afreecatvlive
* `cdn`: One or more CDN IDs to use with the API call for stream URLs, e.g. `gcp_cdn`, `gs_cdn_pc_app`, `gs_cdn_mobile_web`, `gs_cdn_pc_web`
#### soundcloud
* `formats`: Formats to request from the API. Requested values should be in the format of `{protocol}_{extension}` (omitting the bitrate), e.g. `hls_opus,http_aac`. The `*` character functions as a wildcard, e.g. `*_mp3`, and can be passed by itself to request all formats. Known protocols include `http`, `hls` and `hls-aes`; known extensions include `aac`, `opus` and `mp3`. Original `download` formats are always extracted. Default is `http_aac,hls_aac,http_opus,hls_opus,http_mp3,hls_mp3`
#### orfon (orf:on)
* `prefer_segments_playlist`: Prefer a playlist of program segments instead of a single complete video when available. If individual segments are desired, use `--concat-playlist never --extractor-args "orfon:prefer_segments_playlist"`
#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats
**Note**: These options may be changed/removed in the future without concern for backward compatibility
<!-- MANPAGE: MOVE "INSTALLATION" SECTION HERE -->
# PLUGINS
Plugins can be of `<type>`s `extractor` or `postprocessor`.
- Extractor plugins do not need to be enabled from the CLI and are automatically invoked when the input URL is suitable for them.
- Extractor plugins take priority over built-in extractors.
- Postprocessor plugins can be invoked using `--use-postprocessor NAME`.
## Installing Plugins
`.zip`, `.egg` and `.whl` archives containing a `yt_dlp_plugins` namespace folder in their root are also supported as plugin packages.
* e.g. `${XDG_CONFIG_HOME}/yt-dlp/plugins/mypluginpkg.zip` where `mypluginpkg.zip` contains `yt_dlp_plugins/<type>/myplugin.py`
Run yt-dlp with `--verbose` to check if the plugin has been loaded.
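A minimal sketch of installing a packaged plugin on Linux and confirming that it loads (the package name is hypothetical):

```
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/yt-dlp/plugins"
cp mypluginpkg.zip "${XDG_CONFIG_HOME:-$HOME/.config}/yt-dlp/plugins/"
yt-dlp --verbose 2>&1 | grep -i plugin
```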
## Developing Plugins
See the [yt-dlp-sample-plugins](https://github.com/yt-dlp/yt-dlp-sample-plugins) repo for a template plugin package and the [Plugin Development](https://github.com/yt-dlp/yt-dlp/wiki/Plugin-Development) section of the wiki for a plugin development guide.
All public classes with a name ending in `IE`/`PP` are imported from each file for extractors and postprocessors respectively. This respects underscore prefix (e.g. `_MyBasePluginIE` is private) and `__all__`. Modules can similarly be excluded by prefixing the module name with an underscore (e.g. `_myplugin.py`).
To replace an existing extractor with a subclass of one, set the `plugin_name` class keyword argument (e.g. `class MyPluginIE(ABuiltInIE, plugin_name='myplugin')` will replace `ABuiltInIE` with `MyPluginIE`). Since the extractor replaces the parent, you should exclude the subclass extractor from being imported separately by making it private using one of the methods described above.
# EMBEDDING YT-DLP
yt-dlp makes the best effort to be a good command-line program, and thus should be callable from any programming language.
Your program should avoid parsing the normal stdout since they may change in future versions. Instead, they should use options such as `-J`, `--print`, `--progress-template`, `--exec` etc to create console output that you can reliably reproduce and parse.
From a Python program, you can embed yt-dlp in a more powerful fashion, like this:
@@ -1954,7 +1949,7 @@ # EMBEDDING YT-DLP
ydl.download(URLS)
```
Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L184).
Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L183) or `help(yt_dlp.YoutubeDL)` in a Python shell. If you are already familiar with the CLI, you can use [`devscripts/cli_to_api.py`](https://github.com/yt-dlp/yt-dlp/blob/master/devscripts/cli_to_api.py) to translate any CLI switches to `YoutubeDL` params.
**Tip**: If you are porting your code from youtube-dl to yt-dlp, one important point to look out for is that we do not guarantee the return value of `YoutubeDL.extract_info` to be json serializable, or even be a dictionary. It will be dictionary-like, but if you want to ensure it is a serializable dictionary, pass it through `YoutubeDL.sanitize_info` as shown in the [example below](#extracting-information)
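A minimal sketch of that pattern, using the documentation's usual test URL:

```python
import json

import yt_dlp

URL = 'https://www.youtube.com/watch?v=BaW_jenozKc'

with yt_dlp.YoutubeDL({}) as ydl:
    info = ydl.extract_info(URL, download=False)
    # The info dict is only dictionary-like; sanitize_info() turns it into
    # a plain, JSON-serializable dictionary
    print(json.dumps(ydl.sanitize_info(info)))
```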
@@ -2135,9 +2130,114 @@ #### Use a custom format selector
ydl.download(URLS)
```
<!-- MANPAGE: MOVE "NEW FEATURES" SECTION HERE -->
# DEPRECATED OPTIONS
# CHANGES FROM YOUTUBE-DL
### New features
* Forked from [**yt-dlc@f9401f2**](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee) and merged with [**youtube-dl@a08f2b7**](https://github.com/ytdl-org/youtube-dl/commit/a08f2b7e4567cdc50c0614ee0a4ffdff49b8b6e6) ([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
* **[Format Sorting](#sorting-formats)**: The default format sorting options have been changed so that higher resolution and better codecs will be now preferred instead of simply using larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format` ([examples](#format-selection-examples))
* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
* **YouTube improvements**:
* Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
* Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326) **\***
* Supports some (but not all) age-gated content without cookies
* Download livestreams from the start using `--live-from-start` (*experimental*)
* Channel URLs download all uploads of the channel, including shorts and live
* **Cookies from browser**: Cookies can be automatically extracted from all major web browsers using `--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]`
* **Download time range**: Videos can be downloaded partially based on either timestamps or chapters using `--download-sections`
* **Split video by chapters**: Videos can be split into multiple files based on chapters using `--split-chapters`
* **Multi-threaded fragment downloads**: Download multiple fragments of m3u8/mpd videos in parallel. Use `--concurrent-fragments` (`-N`) option to set the number of threads used
* **Aria2c with HLS/DASH**: You can use `aria2c` as the external downloader for DASH(mpd) and HLS(m3u8) formats
* **New and fixed extractors**: Many new extractors have been added and a lot of existing ones have been fixed. See the [changelog](Changelog.md) or the [list of supported sites](supportedsites.md)
* **New MSOs**: Philo, Spectrum, SlingTV, Cablevision, RCN etc.
* **Subtitle extraction from manifests**: Subtitles can be extracted from streaming media manifests. See [commit/be6202f](https://github.com/yt-dlp/yt-dlp/commit/be6202f12b97858b9d716e608394b51065d0419f) for details
* **Multiple paths and output templates**: You can give different [output templates](#output-template) and download paths for different types of files. You can also set a temporary path where intermediary files are downloaded to using `--paths` (`-P`)
* **Portable Configuration**: Configuration files are automatically loaded from the home and root directories. See [CONFIGURATION](#configuration) for details
* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`
* **Other new options**: Many new options have been added such as `--alias`, `--print`, `--concat-playlist`, `--wait-for-video`, `--retry-sleep`, `--sleep-requests`, `--convert-thumbnails`, `--force-download-archive`, `--force-overwrites`, `--break-match-filter` etc
* **Improvements**: Regex and other operators in `--format`/`--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio, multiple `--config-locations`, `--exec` at different stages, etc
* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details
* **Self updater**: The releases can be updated using `yt-dlp -U`, and downgraded using `--update-to` if required
* **Automated builds**: [Nightly/master builds](#update-channels) can be used with `--update-to nightly` and `--update-to master`
See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
Features marked with a **\*** have been back-ported to youtube-dl
### Differences in default behavior
Some of yt-dlp's default options are different from those of youtube-dl and youtube-dlc:
* yt-dlp supports only [Python 3.8+](## "Windows 7"), and *may* remove support for more versions as they [become EOL](https://devguide.python.org/versions/#python-release-cycle); while [youtube-dl still supports Python 2.6+ and 3.2+](https://github.com/ytdl-org/youtube-dl/issues/30568#issue-1118238743)
* The options `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`), no longer work. See [removed options](#Removed) for details
* `avconv` is not supported as an alternative to `ffmpeg`
* yt-dlp stores config files in slightly different locations to youtube-dl. See [CONFIGURATION](#configuration) for a list of correct locations
* The default [output template](#output-template) is `%(title)s [%(id)s].%(ext)s`. There is no real reason for this change. This was changed before yt-dlp was ever made public and now there are no plans to change it back to `%(title)s-%(id)s.%(ext)s`. Instead, you may use `--compat-options filename`
* The default [format sorting](#sorting-formats) is different from youtube-dl and prefers higher resolution and better codecs rather than higher bitrates. You can use the `--format-sort` option to change this to any order you prefer, or use `--compat-options format-sort` to use youtube-dl's sorting order
* The default format selector is `bv*+ba/b`. This means that if a combined video + audio format that is better than the best video-only format is found, the former will be preferred. Use `-f bv+ba/b` or `--compat-options format-spec` to revert this
* Unlike youtube-dlc, yt-dlp does not allow merging multiple audio/video streams into one file by default (since this conflicts with the use of `-f bv*+ba`). If needed, this feature must be enabled using `--audio-multistreams` and `--video-multistreams`. You can also use `--compat-options multistreams` to enable both
* `--no-abort-on-error` is enabled by default. Use `--abort-on-error` or `--compat-options abort-on-error` to abort on errors instead
* When writing metadata files such as thumbnails, description or infojson, the same information (if available) is also written for playlists. Use `--no-write-playlist-metafiles` or `--compat-options no-playlist-metafiles` to not write these files
* `--add-metadata` attaches the `infojson` to `mkv` files in addition to writing the metadata when used with `--write-info-json`. Use `--no-embed-info-json` or `--compat-options no-attach-info-json` to revert this
* Some metadata are embedded into different fields when using `--add-metadata` as compared to youtube-dl. Most notably, `comment` field contains the `webpage_url` and `synopsis` contains the `description`. You can [use `--parse-metadata`](#modifying-metadata) to modify this to your liking or use `--compat-options embed-metadata` to revert this
* `playlist_index` behaves differently when used with options like `--playlist-reverse` and `--playlist-items`. See [#302](https://github.com/yt-dlp/yt-dlp/issues/302) for details. You can use `--compat-options playlist-index` if you want to keep the earlier behavior
* The output of `-F` is listed in a new format. Use `--compat-options list-formats` to revert this
* Live chats (if available) are considered as subtitles. Use `--sub-langs all,-live_chat` to download all subtitles except live chat. You can also use `--compat-options no-live-chat` to prevent any live chat/danmaku from downloading
* YouTube channel URLs download all uploads of the channel. To download only the videos in a specific tab, pass the tab's URL. If the channel does not show the requested tab, an error will be raised. Also, `/live` URLs raise an error if there are no live videos instead of silently downloading the entire channel. You may use `--compat-options no-youtube-channel-redirect` to revert all these redirections
* Unavailable videos are also listed for YouTube playlists. Use `--compat-options no-youtube-unavailable-videos` to remove this
* The upload dates extracted from YouTube are in UTC [when available](https://github.com/yt-dlp/yt-dlp/blob/89e4d86171c7b7c997c77d4714542e0383bf0db0/yt_dlp/extractor/youtube.py#L3898-L3900). Use `--compat-options no-youtube-prefer-utc-upload-date` to prefer the non-UTC upload date.
* If `ffmpeg` is used as the downloader, the downloading and merging of formats happen in a single step when possible. Use `--compat-options no-direct-merge` to revert this
* Thumbnail embedding in `mp4` is done with mutagen if possible. Use `--compat-options embed-thumbnail-atomicparsley` to force the use of AtomicParsley instead
* Some internal metadata such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
* `certifi` will be used for SSL root certificates, if installed. If you want to use system certificates (e.g. self-signed), use `--compat-options no-certifi`
* yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
* ~~yt-dlp tries to parse the external downloader outputs into the standard progress output if possible (Currently implemented: [aria2c](https://github.com/yt-dlp/yt-dlp/issues/5931)). You can use `--compat-options no-external-downloader-progress` to get the downloader output as-is~~
* yt-dlp versions between 2021.09.01 and 2023.01.02 applied `--match-filter` to nested playlists. This was an unintentional side-effect of [8f18ac](https://github.com/yt-dlp/yt-dlp/commit/8f18aca8717bb0dd49054555af8d386e5eda3a88) and is fixed in [d7b460](https://github.com/yt-dlp/yt-dlp/commit/d7b460d0e5fc710950582baed2e3fc616ed98a80). Use `--compat-options playlist-match-filter` to revert this
* yt-dlp versions between 2021.11.10 and 2023.06.21 estimated `filesize_approx` values for fragmented/manifest formats. This was added for convenience in [f2fe69](https://github.com/yt-dlp/yt-dlp/commit/f2fe69c7b0d208bdb1f6292b4ae92bc1e1a7444a), but was reverted in [0dff8e](https://github.com/yt-dlp/yt-dlp/commit/0dff8e4d1e6e9fb938f4256ea9af7d81f42fd54f) due to the potentially extreme inaccuracy of the estimated values. Use `--compat-options manifest-filesize-approx` to keep extracting the estimated values
* yt-dlp uses modern http client backends such as `requests`. Use `--compat-options prefer-legacy-http-handler` to have the legacy http handler (`urllib`) used for standard http requests
* The sub-modules `swfinterp` and `casefold` have been removed.
For ease of use, a few more compat options are available:
* `--compat-options all`: Use all compat options (**Do NOT use this!**)
* `--compat-options youtube-dl`: Same as `--compat-options all,-multistreams,-playlist-match-filter,-manifest-filesize-approx,-allow-unsafe-ext`
* `--compat-options youtube-dlc`: Same as `--compat-options all,-no-live-chat,-no-youtube-channel-redirect,-playlist-match-filter,-manifest-filesize-approx,-allow-unsafe-ext`
* `--compat-options 2021`: Same as `--compat-options 2022,no-certifi,filename-sanitization,no-youtube-prefer-utc-upload-date`
* `--compat-options 2022`: Same as `--compat-options 2023,playlist-match-filter,no-external-downloader-progress,prefer-legacy-http-handler,manifest-filesize-approx`
* `--compat-options 2023`: Currently does nothing. Use this to enable all future compat options
The following compat options restore vulnerable behavior from before security patches:
* `--compat-options allow-unsafe-ext`: Allow files with any extension (including unsafe ones) to be downloaded ([GHSA-79w7-vh3h-8g4j](<https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-79w7-vh3h-8g4j>))
> :warning: Only use if a valid file download is rejected because its extension is detected as uncommon
>
> **This option can enable remote code execution! Consider [opening an issue](<https://github.com/yt-dlp/yt-dlp/issues/new/choose>) instead!**
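When embedding, compat options can be passed programmatically through the `compat_opts` option (note that not every compat option takes effect via the API); a minimal sketch:

```python
import yt_dlp

# `compat_opts` accepts the same names as `--compat-options`; `format-spec`
# reverts the default format selector to youtube-dl's behavior
ydl_opts = {'compat_opts': ['format-spec']}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```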
### Deprecated options
These are all the deprecated options and the current alternative to achieve the same effect
@@ -2173,7 +2273,6 @@ #### Redundant options
--no-playlist-reverse Default
--no-colors --color no_color
#### Not recommended
While these options still work, their use is not recommended since there are other alternatives to achieve the same
@@ -2200,7 +2299,6 @@ #### Not recommended
--geo-bypass-country CODE --xff CODE
--geo-bypass-ip-block IP_BLOCK --xff IP_BLOCK
#### Developer options
These options are not intended to be used by the end-user
@@ -2210,7 +2308,6 @@ #### Developer options
--allow-unplayable-formats List unplayable formats also
--no-allow-unplayable-formats Default
#### Old aliases
These are aliases that are no longer documented for various reasons
@@ -2256,6 +2353,7 @@ #### No longer supported
--write-annotations No supported site has annotations now
--no-write-annotations Default
--compat-options seperate-video-versions No longer needed
--compat-options no-youtube-prefer-utc-upload-date No longer supported
#### Removed
These options were deprecated since 2014 and have now been entirely removed
@@ -2263,6 +2361,7 @@ #### Removed
-A, --auto-number -o "%(autonumber)s-%(id)s.%(ext)s"
-t, -l, --title, --literal -o "%(title)s-%(id)s.%(ext)s"
# CONTRIBUTING
See [CONTRIBUTING.md](CONTRIBUTING.md#contributing-to-yt-dlp) for instructions on [Opening an Issue](CONTRIBUTING.md#opening-an-issue) and [Contributing code to the project](CONTRIBUTING.md#developer-instructions)

bundle/__init__.py Normal file

bundle/docker/compose.yml Normal file

@@ -0,0 +1,10 @@
services:
static:
build: static
environment:
channel: ${channel}
origin: ${origin}
version: ${version}
volumes:
- ~/build:/build
- ../..:/yt-dlp

bundle/docker/static/Dockerfile

@@ -0,0 +1,21 @@
FROM alpine:3.19 as base
RUN apk --update add --no-cache \
build-base \
python3 \
pipx \
;
RUN pipx install pyinstaller
# Requires above step to prepare the shared venv
RUN ~/.local/share/pipx/shared/bin/python -m pip install -U wheel
RUN apk --update add --no-cache \
scons \
patchelf \
binutils \
;
RUN pipx install staticx
WORKDIR /yt-dlp
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh

bundle/docker/static/entrypoint.sh

@@ -0,0 +1,13 @@
#!/bin/ash
set -e
source ~/.local/share/pipx/venvs/pyinstaller/bin/activate
python -m devscripts.install_deps --include secretstorage
python -m devscripts.make_lazy_extractors
python devscripts/update-version.py -c "${channel}" -r "${origin}" "${version}"
python -m bundle.pyinstaller
deactivate
source ~/.local/share/pipx/venvs/staticx/bin/activate
staticx /yt-dlp/dist/yt-dlp_linux /build/yt-dlp_linux
deactivate

bundle/py2exe.py Executable file

@@ -0,0 +1,59 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import warnings
from py2exe import freeze
from devscripts.utils import read_version
VERSION = read_version()
def main():
warnings.warn(
'py2exe builds do not support pycryptodomex and need VC++14 to run. '
'It is recommended to run "pyinst.py" to build using pyinstaller instead')
freeze(
console=[{
'script': './yt_dlp/__main__.py',
'dest_base': 'yt-dlp',
'icon_resources': [(1, 'devscripts/logo.ico')],
}],
version_info={
'version': VERSION,
'description': 'A feature-rich command-line audio/video downloader',
'comments': 'Official repository: <https://github.com/yt-dlp/yt-dlp>',
'product_name': 'yt-dlp',
'product_version': VERSION,
},
options={
'bundle_files': 0,
'compressed': 1,
'optimize': 2,
'dist_dir': './dist',
'excludes': [
# py2exe cannot import Crypto
'Crypto',
'Cryptodome',
# requests >=2.32.0 breaks py2exe builds due to certifi dependency
'requests',
'urllib3',
],
'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
# Modules that are only imported dynamically must be added here
'includes': ['yt_dlp.compat._legacy', 'yt_dlp.compat._deprecated',
'yt_dlp.utils._legacy', 'yt_dlp.utils._deprecated'],
},
zipfile=None,
)
if __name__ == '__main__':
main()
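Both bundle scripts are intended to be run as modules from the repository root, i.e. `python -m bundle.py2exe` or `python -m bundle.pyinstaller`; the latter is what the Docker `entrypoint.sh` above invokes.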

pyinst.py → bundle/pyinstaller.py Normal file → Executable file

@@ -4,7 +4,7 @@
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import platform
@@ -68,7 +68,7 @@ def exe(onedir):
'dist/',
onedir and f'{name}/',
name,
OS_NAME == 'win32' and '.exe'
OS_NAME == 'win32' and '.exe',
)))
@@ -113,7 +113,7 @@ def windows_set_version(exe, version):
),
kids=[
StringFileInfo([StringTable('040904B0', [
StringStruct('Comments', 'yt-dlp%s Command Line Interface' % suffix),
StringStruct('Comments', f'yt-dlp{suffix} Command Line Interface'),
StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', 'yt-dlp%s' % (MACHINE and f' ({MACHINE})')),
StringStruct('FileVersion', version),
@@ -123,8 +123,8 @@ def windows_set_version(exe, version):
StringStruct('ProductName', f'yt-dlp{suffix}'),
StringStruct(
'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'),
])]), VarFileInfo([VarStruct('Translation', [0, 1200])])
]
])]), VarFileInfo([VarStruct('Translation', [0, 1200])]),
],
))


devscripts/__init__.py

@@ -1 +0,0 @@
# Empty file needed to make devscripts.utils properly importable from outside

devscripts/bash-completion.py

@@ -9,8 +9,8 @@
import yt_dlp
BASH_COMPLETION_FILE = "completions/bash/yt-dlp"
BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
BASH_COMPLETION_FILE = 'completions/bash/yt-dlp'
BASH_COMPLETION_TEMPLATE = 'devscripts/bash-completion.in'
def build_completion(opt_parser):
@@ -21,9 +21,9 @@ def build_completion(opt_parser):
opts_flag.append(option.get_opt_string())
with open(BASH_COMPLETION_TEMPLATE) as f:
template = f.read()
with open(BASH_COMPLETION_FILE, "w") as f:
with open(BASH_COMPLETION_FILE, 'w') as f:
# just using the special char
filled_template = template.replace("{{flags}}", " ".join(opts_flag))
filled_template = template.replace('{{flags}}', ' '.join(opts_flag))
f.write(filled_template)

devscripts/changelog_override.json

@@ -1,12 +1,12 @@
[
{
"action": "add",
"when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"when": "29cb20bd563c02671b31dd840139e93dd37150a1",
"short": "[priority] **A new release type has been added!**\n * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`"
},
{
"action": "add",
"when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"when": "5038f6d713303e0967d002216e7a88652401c22a",
"short": "[priority] **YouTube throttling fixes!**"
},
{
@@ -38,13 +38,15 @@
},
{
"action": "change",
"when": "7b37e8b23691613f331bd4ebc9d639dd6f93c972",
"short": "Improve `--download-sections`\n - Support negative time-ranges\n - Add `*from-url` to obey time-ranges in URL"
"when": "b4e0d75848e9447cee2cd3646ce54d4744a7ff56",
"short": "Improve `--download-sections`\n - Support negative time-ranges\n - Add `*from-url` to obey time-ranges in URL",
"authors": ["pukkandan"]
},
{
"action": "change",
"when": "1e75d97db21152acc764b30a688e516f04b8a142",
"short": "[extractor/youtube] Add `ios` to default clients used\n - IOS is affected neither by 403 nor by nsig so helps mitigate them preemptively\n - IOS also has higher bit-rate 'premium' formats though they are not labeled as such"
"short": "[extractor/youtube] Add `ios` to default clients used\n - IOS is affected neither by 403 nor by nsig so helps mitigate them preemptively\n - IOS also has higher bit-rate 'premium' formats though they are not labeled as such",
"authors": ["pukkandan"]
},
{
"action": "change",
@@ -55,6 +57,133 @@
{
"action": "change",
"when": "a4486bfc1dc7057efca9dd3fe70d7fa25c56f700",
"short": "[misc] Revert \"Add automatic duplicate issue detection\""
"short": "[misc] Revert \"Add automatic duplicate issue detection\"",
"authors": ["pukkandan"]
},
{
"action": "add",
"when": "1ceb657bdd254ad961489e5060f2ccc7d556b729",
"short": "[priority] Security: [[CVE-2023-35934](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-35934)] Fix [Cookie leak](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj)\n - `--add-header Cookie:` is deprecated and auto-scoped to input URL domains\n - Cookies are scoped when passed to external downloaders\n - Add `cookies` field to info.json and deprecate `http_headers.Cookie`"
},
{
"action": "change",
"when": "b03fa7834579a01cc5fba48c0e73488a16683d48",
"short": "[ie/twitter] Revert 92315c03774cfabb3a921884326beb4b981f786b",
"authors": ["pukkandan"]
},
{
"action": "change",
"when": "fcd6a76adc49d5cd8783985c7ce35384b72e545f",
"short": "[test] Add tests for socks proxies (#7908)",
"authors": ["coletdjnz"]
},
{
"action": "change",
"when": "4bf912282a34b58b6b35d8f7e6be535770c89c76",
"short": "[rh:urllib] Remove dot segments during URL normalization (#7662)",
"authors": ["coletdjnz"]
},
{
"action": "change",
"when": "59e92b1f1833440bb2190f847eb735cf0f90bc85",
"short": "[rh:urllib] Simplify gzip decoding (#7611)",
"authors": ["Grub4K"]
},
{
"action": "add",
"when": "c1d71d0d9f41db5e4306c86af232f5f6220a130b",
"short": "[priority] **The minimum *recommended* Python version has been raised to 3.8**\nSince Python 3.7 has reached end-of-life, support for it will be dropped soon. [Read more](https://github.com/yt-dlp/yt-dlp/issues/7803)"
},
{
"action": "add",
"when": "61bdf15fc7400601c3da1aa7a43917310a5bf391",
"short": "[priority] Security: [[CVE-2023-40581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40581)] [Prevent RCE when using `--exec` with `%q` on Windows](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-42h4-v29r-42qg)\n - The shell escape function is now using `\"\"` instead of `\\\"`.\n - `utils.Popen` has been patched to properly quote commands."
},
{
"action": "change",
"when": "8a8b54523addf46dfd50ef599761a81bc22362e6",
"short": "[rh:requests] Add handler for `requests` HTTP library (#3668)\n\n\tAdds support for HTTPS proxies and persistent connections (keep-alive)",
"authors": ["bashonly", "coletdjnz", "Grub4K"]
},
{
"action": "add",
"when": "1d03633c5a1621b9f3a756f0a4f9dc61fab3aeaa",
"short": "[priority] **The release channels have been adjusted!**\n\t* [`master`](https://github.com/yt-dlp/yt-dlp-master-builds) builds are made after each push, containing the latest fixes (but also possibly bugs). This was previously the `nightly` channel.\n\t* [`nightly`](https://github.com/yt-dlp/yt-dlp-nightly-builds) builds are now made once a day, if there were any changes."
},
{
"action": "add",
"when": "f04b5bedad7b281bee9814686bba1762bae092eb",
"short": "[priority] Security: [[CVE-2023-46121](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-46121)] Patch [Generic Extractor MITM Vulnerability via Arbitrary Proxy Injection](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-3ch3-jhc6-5r8x)\n\t- Disallow smuggling of arbitrary `http_headers`; extractors now only use specific headers"
},
{
"action": "change",
"when": "15f22b4880b6b3f71f350c64d70976ae65b9f1ca",
"short": "[webvtt] Allow spaces before newlines for CueBlock (#7681)",
"authors": ["TSRBerry"]
},
{
"action": "change",
"when": "4ce57d3b873c2887814cbec03d029533e82f7db5",
"short": "[ie] Support multi-period MPD streams (#6654)",
"authors": ["alard", "pukkandan"]
},
{
"action": "change",
"when": "aa7e9ae4f48276bd5d0173966c77db9484f65a0a",
"short": "[ie/xvideos] Support new URL format (#9502)",
"authors": ["sta1us"]
},
{
"action": "remove",
"when": "22e4dfacb61f62dfbb3eb41b31c7b69ba1059b80"
},
{
"action": "change",
"when": "e3a3ed8a981d9395c4859b6ef56cd02bc3148db2",
"short": "[cleanup:ie] No `from` stdlib imports in extractors",
"authors": ["pukkandan"]
},
{
"action": "add",
"when": "9590cc6b4768e190183d7d071a6c78170889116a",
"short": "[priority] Security: [[CVE-2024-22423](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-22423)] [Prevent RCE when using `--exec` with `%q` on Windows](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-hjq6-52gw-2g7p)\n - The shell escape function now properly escapes `%`, `\\` and `\\n`.\n - `utils.Popen` has been patched accordingly."
},
{
"action": "change",
"when": "41ba4a808b597a3afed78c89675a30deb6844450",
"short": "[ie/tiktok] Extract via mobile API only if extractor-arg is passed (#9938)",
"authors": ["bashonly"]
},
{
"action": "remove",
"when": "6e36d17f404556f0e3a43f441c477a71a91877d9"
},
{
"action": "change",
"when": "beaf832c7a9d57833f365ce18f6115b88071b296",
"short": "[ie/soundcloud] Add `formats` extractor-arg (#10004)",
"authors": ["bashonly", "Grub4K"]
},
{
"action": "change",
"when": "5c019f6328ad40d66561eac3c4de0b3cd070d0f6",
"short": "[cleanup] Misc (#9765)",
"authors": ["bashonly", "Grub4K", "seproDev"]
},
{
"action": "change",
"when": "e6a22834df1776ec4e486526f6df2bf53cb7e06f",
"short": "[ie/orf:on] Add `prefer_segments_playlist` extractor-arg (#10314)",
"authors": ["seproDev"]
},
{
"action": "add",
"when": "6aaf96a3d6e7d0d426e97e11a2fcf52fda00e733",
"short": "[priority] Security: [[CVE-2024-38519](https://nvd.nist.gov/vuln/detail/CVE-2024-38519)] [Properly sanitize file-extension to prevent file system modification and RCE](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-79w7-vh3h-8g4j)\n - Unsafe extensions are now blocked from being downloaded"
},
{
"action": "add",
"when": "6075a029dba70a89675ae1250e7cdfd91f0eba41",
"short": "[priority] Security: [[ie/douyutv] Do not use dangerous javascript source/URL](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-3v33-3wmw-3785)\n - A dependency on potentially malicious third-party JavaScript code has been removed from the Douyu extractors"
}
]

devscripts/cli_to_api.py Normal file → Executable file

@@ -1,3 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys

devscripts/install_deps.py Executable file

@@ -0,0 +1,81 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import argparse
import re
import subprocess
from pathlib import Path
from devscripts.tomlparse import parse_toml
from devscripts.utils import read_file
def parse_args():
parser = argparse.ArgumentParser(description='Install dependencies for yt-dlp')
parser.add_argument(
'input', nargs='?', metavar='TOMLFILE', default=Path(__file__).parent.parent / 'pyproject.toml',
help='input file (default: %(default)s)')
parser.add_argument(
'-e', '--exclude', metavar='DEPENDENCY', action='append',
help='exclude a dependency')
parser.add_argument(
'-i', '--include', metavar='GROUP', action='append',
help='include an optional dependency group')
parser.add_argument(
'-o', '--only-optional', action='store_true',
help='only install optional dependencies')
parser.add_argument(
'-p', '--print', action='store_true',
help='only print requirements to stdout')
parser.add_argument(
'-u', '--user', action='store_true',
help='install with pip as --user')
return parser.parse_args()
def main():
args = parse_args()
project_table = parse_toml(read_file(args.input))['project']
recursive_pattern = re.compile(rf'{project_table["name"]}\[(?P<group_name>[\w-]+)\]')
optional_groups = project_table['optional-dependencies']
excludes = args.exclude or []
def yield_deps(group):
for dep in group:
if mobj := recursive_pattern.fullmatch(dep):
yield from optional_groups.get(mobj.group('group_name'), [])
else:
yield dep
targets = []
if not args.only_optional: # `-o` should exclude 'dependencies' and the 'default' group
targets.extend(project_table['dependencies'])
if 'default' not in excludes: # `--exclude default` should exclude entire 'default' group
targets.extend(yield_deps(optional_groups['default']))
for include in filter(None, map(optional_groups.get, args.include or [])):
targets.extend(yield_deps(include))
targets = [t for t in targets if re.match(r'[\w-]+', t).group(0).lower() not in excludes]
if args.print:
for target in targets:
print(target)
return
pip_args = [sys.executable, '-m', 'pip', 'install', '-U']
if args.user:
pip_args.append('--user')
pip_args.extend(targets)
return subprocess.call(pip_args)
if __name__ == '__main__':
sys.exit(main())
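To see what the script above resolves without installing anything, its `--print` flag can be driven from Python as well; a small sketch (assumes the repository root as working directory):

```python
import subprocess
import sys

# Lists the resolved default requirements on stdout instead of invoking pip
subprocess.run(
    [sys.executable, 'devscripts/install_deps.py', '--print'], check=True)
```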

devscripts/make_changelog.py

@@ -31,57 +31,55 @@ class CommitGroup(enum.Enum):
EXTRACTOR = 'Extractor'
DOWNLOADER = 'Downloader'
POSTPROCESSOR = 'Postprocessor'
NETWORKING = 'Networking'
MISC = 'Misc.'
@classmethod
@property
def ignorable_prefixes(cls):
return ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream')
@classmethod
@lru_cache
def commit_lookup(cls):
def subgroup_lookup(cls):
return {
name: group
for group, names in {
cls.PRIORITY: {'priority'},
cls.CORE: {
'aes',
'cache',
'compat_utils',
'compat',
'cookies',
'core',
'dependencies',
'jsinterp',
'outtmpl',
'plugins',
'update',
'upstream',
'utils',
},
cls.MISC: {
'build',
'ci',
'cleanup',
'devscripts',
'docs',
'misc',
'test',
},
cls.EXTRACTOR: {'extractor'},
cls.DOWNLOADER: {'downloader'},
cls.POSTPROCESSOR: {'postprocessor'},
cls.NETWORKING: {
'rh',
},
}.items()
for name in names
}
@classmethod
def get(cls, value):
result = cls.commit_lookup().get(value)
if result:
logger.debug(f'Mapped {value!r} => {result.name}')
@lru_cache
def group_lookup(cls):
result = {
'fd': cls.DOWNLOADER,
'ie': cls.EXTRACTOR,
'pp': cls.POSTPROCESSOR,
'upstream': cls.CORE,
}
result.update({item.name.lower(): item for item in iter(cls)})
return result
@classmethod
def get(cls, value: str) -> tuple[CommitGroup | None, str | None]:
group, _, subgroup = (group.strip().lower() for group in value.partition('/'))
result = cls.group_lookup().get(group)
if not result:
if subgroup:
return None, value
subgroup = group
result = cls.subgroup_lookup().get(subgroup)
return result, subgroup or None
@dataclass
class Commit:
@@ -196,19 +194,23 @@ def _prepare_cleanup_misc_items(self, items):
for commit_infos in cleanup_misc_items.values():
sorted_items.append(CommitInfo(
'cleanup', ('Miscellaneous',), ', '.join(
self._format_message_link(None, info.commit.hash).strip()
self._format_message_link(None, info.commit.hash)
for info in sorted(commit_infos, key=lambda item: item.commit.hash or '')),
[], Commit(None, '', commit_infos[0].commit.authors), []))
return sorted_items
def format_single_change(self, info):
message = self._format_message_link(info.message, info.commit.hash)
def format_single_change(self, info: CommitInfo):
message, sep, rest = info.message.partition('\n')
if '[' not in message:
# If the message doesn't already contain markdown links, try to add a link to the commit
message = self._format_message_link(message, info.commit.hash)
if info.issues:
message = message.replace('\n', f' ({self._format_issues(info.issues)})\n', 1)
message = f'{message} ({self._format_issues(info.issues)})'
if info.commit.authors:
message = message.replace('\n', f' by {self._format_authors(info.commit.authors)}\n', 1)
message = f'{message} by {self._format_authors(info.commit.authors)}'
if info.fixes:
fix_message = ', '.join(f'{self._format_message_link(None, fix.hash)}' for fix in info.fixes)
@@ -217,16 +219,14 @@ def format_single_change(self, info):
if authors != info.commit.authors:
fix_message = f'{fix_message} by {self._format_authors(authors)}'
message = message.replace('\n', f' (With fixes in {fix_message})\n', 1)
message = f'{message} (With fixes in {fix_message})'
return message[:-1]
return message if not sep else f'{message}{sep}{rest}'
def _format_message_link(self, message, hash):
assert message or hash, 'Improperly defined commit message or override'
message = message if message else hash[:HASH_LENGTH]
if not hash:
return f'{message}\n'
return f'[{message}\n'.replace('\n', f']({self.repo_url}/commit/{hash})\n', 1)
def _format_message_link(self, message, commit_hash):
assert message or commit_hash, 'Improperly defined commit message or override'
message = message if message else commit_hash[:HASH_LENGTH]
return f'[{message}]({self.repo_url}/commit/{commit_hash})' if commit_hash else message
def _format_issues(self, issues):
return ', '.join(f'[#{issue}]({self.repo_url}/issues/{issue})' for issue in issues)
@@ -247,12 +247,13 @@ class CommitRange:
AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE)
MESSAGE_RE = re.compile(r'''
(?:\[(?P<prefix>[^\]]+)\]\ )?
(?:(?P<sub_details>`?[^:`]+`?): )?
(?:(?P<sub_details>`?[\w.-]+`?): )?
(?P<message>.+?)
(?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
''', re.VERBOSE | re.DOTALL)
EXTRACTOR_INDICATOR_RE = re.compile(r'(?:Fix|Add)\s+Extractors?', re.IGNORECASE)
FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert)\s+([\da-f]{40})')
REVERT_RE = re.compile(r'(?:\[[^\]]+\]\s+)?(?i:Revert)\s+([\da-f]{40})')
FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert|Improve)\s+([\da-f]{40})')
UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)')
def __init__(self, start, end, default_author=None):
@@ -279,7 +280,7 @@ def _get_commits_and_fixes(self, default_author):
self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}',
f'{self._start}..{self._end}' if self._start else self._end).stdout
commits = {}
commits, reverts = {}, {}
fixes = defaultdict(list)
lines = iter(result.splitlines(False))
for i, commit_hash in enumerate(lines):
@@ -300,6 +301,11 @@ def _get_commits_and_fixes(self, default_author):
logger.debug(f'Reached Release commit, breaking: {commit}')
break
revert_match = self.REVERT_RE.fullmatch(commit.short)
if revert_match:
reverts[revert_match.group(1)] = commit
continue
fix_match = self.FIXES_RE.search(commit.short)
if fix_match:
commitish = fix_match.group(1)
@@ -307,6 +313,13 @@ def _get_commits_and_fixes(self, default_author):
commits[commit.hash] = commit
for commitish, revert_commit in reverts.items():
reverted = commits.pop(commitish, None)
if reverted:
logger.debug(f'{commitish} fully reverted {reverted}')
else:
commits[revert_commit.hash] = revert_commit
for commitish, fix_commits in fixes.items():
if commitish in commits:
hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits)
@@ -322,7 +335,7 @@ def apply_overrides(self, overrides):
for override in overrides:
when = override.get('when')
if when and when not in self and when != self._start:
logger.debug(f'Ignored {when!r}, not in commits {self._start!r}')
logger.debug(f'Ignored {when!r} override')
continue
override_hash = override.get('hash') or when
@@ -343,14 +356,14 @@ def apply_overrides(self, overrides):
logger.info(f'CHANGE {self._commits[commit.hash]} -> {commit}')
self._commits[commit.hash] = commit
self._commits = {key: value for key, value in reversed(self._commits.items())}
self._commits = dict(reversed(self._commits.items()))
def groups(self):
group_dict = defaultdict(list)
for commit in self:
upstream_re = self.UPSTREAM_MERGE_RE.search(commit.short)
if upstream_re:
commit.short = f'[core/upstream] Merged with youtube-dl {upstream_re.group(1)}'
commit.short = f'[upstream] Merged with youtube-dl {upstream_re.group(1)}'
match = self.MESSAGE_RE.fullmatch(commit.short)
if not match:
@@ -377,9 +390,9 @@ def groups(self):
if not group:
if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
group = CommitGroup.EXTRACTOR
logger.error(f'Assuming [ie] group for {commit.short!r}')
else:
group = CommitGroup.POSTPROCESSOR
logger.warning(f'Failed to map {commit.short!r}, selected {group.name.lower()}')
group = CommitGroup.CORE
commit_info = CommitInfo(
details, sub_details, message.strip(),
@@ -395,25 +408,20 @@ def details_from_prefix(prefix):
if not prefix:
return CommitGroup.CORE, None, ()
prefix, _, details = prefix.partition('/')
prefix = prefix.strip()
details = details.strip()
prefix, *sub_details = prefix.split(':')
group = CommitGroup.get(prefix.lower())
if group is CommitGroup.PRIORITY:
prefix, _, details = details.partition('/')
group, details = CommitGroup.get(prefix)
if group is CommitGroup.PRIORITY and details:
details = details.partition('/')[2].strip()
if not details and prefix and prefix not in CommitGroup.ignorable_prefixes:
logger.debug(f'Replaced details with {prefix!r}')
details = prefix or None
if details and '/' in details:
logger.error(f'Prefix is overnested, using first part: {prefix}')
details = details.partition('/')[0].strip()
if details == 'common':
details = None
if details:
details, *sub_details = details.split(':')
else:
sub_details = []
elif group is CommitGroup.NETWORKING and details == 'rh':
details = 'Request Handler'
return group, details, sub_details
@@ -437,7 +445,32 @@ def get_new_contributors(contributors_path, commits):
return sorted(new_contributors, key=str.casefold)
if __name__ == '__main__':
def create_changelog(args):
logging.basicConfig(
datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)
commits = CommitRange(None, args.commitish, args.default_author)
if not args.no_override:
if args.override_path.exists():
overrides = json.loads(read_file(args.override_path))
commits.apply_overrides(overrides)
else:
logger.warning(f'File {args.override_path.as_posix()} does not exist')
logger.info(f'Loaded {len(commits)} commits')
new_contributors = get_new_contributors(args.contributors_path, commits)
if new_contributors:
if args.contributors:
write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
logger.info(f'New contributors: {", ".join(new_contributors)}')
return Changelog(commits.groups(), args.repo, args.collapsible)
def create_parser():
import argparse
parser = argparse.ArgumentParser(
@@ -469,27 +502,9 @@ def get_new_contributors(contributors_path, commits):
parser.add_argument(
'--collapsible', action='store_true',
help='make changelog collapsible (default: %(default)s)')
args = parser.parse_args()
logging.basicConfig(
datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)
return parser
commits = CommitRange(None, args.commitish, args.default_author)
if not args.no_override:
if args.override_path.exists():
overrides = json.loads(read_file(args.override_path))
commits.apply_overrides(overrides)
else:
logger.warning(f'File {args.override_path.as_posix()} does not exist')
logger.info(f'Loaded {len(commits)} commits')
new_contributors = get_new_contributors(args.contributors_path, commits)
if new_contributors:
if args.contributors:
write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
logger.info(f'New contributors: {", ".join(new_contributors)}')
print(Changelog(commits.groups(), args.repo, args.collapsible))
if __name__ == '__main__':
print(create_changelog(create_parser().parse_args()))

devscripts/make_issue_template.py

@@ -9,12 +9,7 @@
import re
from devscripts.utils import (
get_filename_args,
read_file,
read_version,
write_file,
)
from devscripts.utils import get_filename_args, read_file, write_file
VERBOSE_TMPL = '''
- type: checkboxes
@@ -35,19 +30,18 @@
description: |
It should start like this:
placeholder: |
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version %(version)s [9d339c4] (win32_exe)
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: %(version)s, Current version: %(version)s
yt-dlp is up to date (%(version)s)
[debug] Request Handlers: urllib, requests
[debug] Loaded 1893 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines>
render: shell
validations:
@@ -66,7 +60,7 @@
def main():
fields = {'version': read_version(), 'no_skip': NO_SKIP}
fields = {'no_skip': NO_SKIP}
fields['verbose'] = VERBOSE_TMPL % fields
fields['verbose_optional'] = re.sub(r'(\n\s+validations:)?\n\s+required: true', '', fields['verbose'])

devscripts/make_readme.py

@@ -51,7 +51,7 @@ def apply_patch(text, patch):
),
( # Headings
r'(?m)^ (\w.+\n)( (?=\w))?',
r'## \1'
r'## \1',
),
( # Fixup `--date` formatting
rf'(?m)( --date DATE.+({delim}[^\[]+)*)\[.+({delim}.+)*$',
@@ -61,26 +61,26 @@ def apply_patch(text, patch):
),
( # Do not split URLs
rf'({delim[:-1]})? (?P<label>\[\S+\] )?(?P<url>https?({delim})?:({delim})?/({delim})?/(({delim})?\S+)+)\s',
lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n'))
lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n')),
),
( # Do not split "words"
rf'(?m)({delim}\S+)+$',
lambda mobj: ''.join((delim, mobj.group(0).replace(delim, '')))
lambda mobj: ''.join((delim, mobj.group(0).replace(delim, ''))),
),
( # Allow overshooting last line
rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
lambda mobj: (mobj.group().replace(delim, ' ')
if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
else mobj.group())
else mobj.group()),
),
( # Avoid newline when a space is available b/w switch and description
DISABLE_PATCH, # This creates issues with prepare_manpage
r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim),
r'\1 '
r'\1 ',
),
( # Replace brackets with a Markdown link
r'SponsorBlock API \((http.+)\)',
r'[SponsorBlock API](\1)'
r'[SponsorBlock API](\1)',
),
)

devscripts/prepare_manpage.py

@@ -24,7 +24,7 @@
# NAME
yt\-dlp \- A youtube-dl fork with additional features and patches
yt\-dlp \- A feature\-rich command\-line audio/video downloader
# SYNOPSIS
@@ -43,6 +43,27 @@ def filter_excluded_sections(readme):
'', readme)
def _convert_code_blocks(readme):
current_code_block = None
for line in readme.splitlines(True):
if current_code_block:
if line == current_code_block:
current_code_block = None
yield '\n'
else:
yield f' {line}'
elif line.startswith('```'):
current_code_block = line.count('`') * '`' + '\n'
yield '\n'
else:
yield line
def convert_code_blocks(readme):
return ''.join(_convert_code_blocks(readme))
def move_sections(readme):
MOVE_TAG_TEMPLATE = '<!-- MANPAGE: MOVE "%s" SECTION HERE -->'
sections = re.findall(r'(?m)^%s$' % (
@@ -65,8 +86,10 @@ def move_sections(readme):
def filter_options(readme):
section = re.search(r'(?sm)^# USAGE AND OPTIONS\n.+?(?=^# )', readme).group(0)
section_new = section.replace('*', R'\*')
options = '# OPTIONS\n'
for line in section.split('\n')[1:]:
for line in section_new.split('\n')[1:]:
mobj = re.fullmatch(r'''(?x)
\s{4}(?P<opt>-(?:,\s|[^\s])+)
(?:\s(?P<meta>(?:[^\s]|\s(?!\s))+))?
@@ -86,7 +109,7 @@ def filter_options(readme):
return readme.replace(section, options, 1)
TRANSFORM = compose_functions(filter_excluded_sections, move_sections, filter_options)
TRANSFORM = compose_functions(filter_excluded_sections, convert_code_blocks, move_sections, filter_options)
def main():

devscripts/run_tests.bat

@@ -1,17 +0,0 @@
@setlocal
@echo off
cd /d %~dp0..
if ["%~1"]==[""] (
set "test_set="test""
) else if ["%~1"]==["core"] (
set "test_set="-m not download""
) else if ["%~1"]==["download"] (
set "test_set="-m "download""
) else (
echo.Invalid test type "%~1". Use "core" ^| "download"
exit /b 1
)
set PYTHONWARNINGS=error
pytest %test_set%

devscripts/run_tests.py Executable file

@@ -0,0 +1,75 @@
#!/usr/bin/env python3
import argparse
import functools
import os
import re
import shlex
import subprocess
import sys
from pathlib import Path
fix_test_name = functools.partial(re.compile(r'IE(_all|_\d+)?$').sub, r'\1')
def parse_args():
parser = argparse.ArgumentParser(description='Run selected yt-dlp tests')
parser.add_argument(
'test', help='an extractor test, or one of "core" or "download"', nargs='*')
parser.add_argument(
'-k', help='run a test matching EXPRESSION. Same as "pytest -k"', metavar='EXPRESSION')
parser.add_argument(
'--pytest-args', help='arguments to passthrough to pytest')
return parser.parse_args()
def run_tests(*tests, pattern=None, ci=False):
run_core = 'core' in tests or (not pattern and not tests)
run_download = 'download' in tests
tests = list(map(fix_test_name, tests))
pytest_args = args.pytest_args or os.getenv('HATCH_TEST_ARGS', '')
arguments = ['pytest', '-Werror', '--tb=short', *shlex.split(pytest_args)]
if ci:
arguments.append('--color=yes')
if pattern:
arguments.extend(['-k', pattern])
if run_core:
arguments.extend(['-m', 'not download'])
elif run_download:
arguments.extend(['-m', 'download'])
else:
arguments.extend(
f'test/test_download.py::TestDownload::test_{test}' for test in tests)
print(f'Running {arguments}', flush=True)
try:
return subprocess.call(arguments)
except FileNotFoundError:
pass
arguments = [sys.executable, '-Werror', '-m', 'unittest']
if pattern:
arguments.extend(['-k', pattern])
if run_core:
print('"pytest" needs to be installed to run core tests', file=sys.stderr, flush=True)
return 1
elif run_download:
arguments.append('test.test_download')
else:
arguments.extend(
f'test.test_download.TestDownload.test_{test}' for test in tests)
print(f'Running {arguments}', flush=True)
return subprocess.call(arguments)
if __name__ == '__main__':
try:
args = parse_args()
os.chdir(Path(__file__).parent.parent)
sys.exit(run_tests(*args.test, pattern=args.k, ci=bool(os.getenv('CI'))))
except KeyboardInterrupt:
pass

devscripts/run_tests.sh

@@ -1,14 +0,0 @@
#!/usr/bin/env sh
if [ -z "$1" ]; then
test_set='test'
elif [ "$1" = 'core' ]; then
test_set="-m not download"
elif [ "$1" = 'download' ]; then
test_set="-m download"
else
echo 'Invalid test type "'"$1"'". Use "core" | "download"'
exit 1
fi
python3 -bb -Werror -m pytest "$test_set"

devscripts/set-variant.py

@@ -30,7 +30,7 @@ def property_setter(name, value):
opts = parse_options()
transform = compose_functions(
property_setter('VARIANT', opts.variant),
property_setter('UPDATE_HINT', opts.update_message)
property_setter('UPDATE_HINT', opts.update_message),
)
write_file(VERSION_FILE, transform(read_file(VERSION_FILE)))

devscripts/tomlparse.py Executable file

@@ -0,0 +1,189 @@
#!/usr/bin/env python3
"""
Simple parser for spec compliant toml files
A simple toml parser for files that comply with the spec.
Should only be used to parse `pyproject.toml` for `install_deps.py`.
IMPORTANT: INVALID FILES OR MULTILINE STRINGS ARE NOT SUPPORTED!
"""
from __future__ import annotations
import datetime as dt
import json
import re
WS = r'(?:[\ \t]*)'
STRING_RE = re.compile(r'"(?:\\.|[^\\"\n])*"|\'[^\'\n]*\'')
SINGLE_KEY_RE = re.compile(rf'{STRING_RE.pattern}|[A-Za-z0-9_-]+')
KEY_RE = re.compile(rf'{WS}(?:{SINGLE_KEY_RE.pattern}){WS}(?:\.{WS}(?:{SINGLE_KEY_RE.pattern}){WS})*')
EQUALS_RE = re.compile(rf'={WS}')
WS_RE = re.compile(WS)
_SUBTABLE = rf'(?P<subtable>^\[(?P<is_list>\[)?(?P<path>{KEY_RE.pattern})\]\]?)'
EXPRESSION_RE = re.compile(rf'^(?:{_SUBTABLE}|{KEY_RE.pattern}=)', re.MULTILINE)
LIST_WS_RE = re.compile(rf'{WS}((#[^\n]*)?\n{WS})*')
LEFTOVER_VALUE_RE = re.compile(r'[^,}\]\t\n#]+')
def parse_key(value: str):
for match in SINGLE_KEY_RE.finditer(value):
if match[0][0] == '"':
yield json.loads(match[0])
elif match[0][0] == '\'':
yield match[0][1:-1]
else:
yield match[0]
def get_target(root: dict, paths: list[str], is_list=False):
target = root
for index, key in enumerate(paths, 1):
use_list = is_list and index == len(paths)
result = target.get(key)
if result is None:
result = [] if use_list else {}
target[key] = result
if isinstance(result, dict):
target = result
elif use_list:
target = {}
result.append(target)
else:
target = result[-1]
assert isinstance(target, dict)
return target
def parse_enclosed(data: str, index: int, end: str, ws_re: re.Pattern):
index += 1
if match := ws_re.match(data, index):
index = match.end()
while data[index] != end:
index = yield True, index
if match := ws_re.match(data, index):
index = match.end()
if data[index] == ',':
index += 1
if match := ws_re.match(data, index):
index = match.end()
assert data[index] == end
yield False, index + 1
def parse_value(data: str, index: int):
if data[index] == '[':
result = []
indices = parse_enclosed(data, index, ']', LIST_WS_RE)
valid, index = next(indices)
while valid:
index, value = parse_value(data, index)
result.append(value)
valid, index = indices.send(index)
return index, result
if data[index] == '{':
result = {}
indices = parse_enclosed(data, index, '}', WS_RE)
valid, index = next(indices)
while valid:
valid, index = indices.send(parse_kv_pair(data, index, result))
return index, result
if match := STRING_RE.match(data, index):
return match.end(), json.loads(match[0]) if match[0][0] == '"' else match[0][1:-1]
match = LEFTOVER_VALUE_RE.match(data, index)
assert match
value = match[0].strip()
for func in [
int,
float,
dt.time.fromisoformat,
dt.date.fromisoformat,
dt.datetime.fromisoformat,
{'true': True, 'false': False}.get,
]:
try:
value = func(value)
break
except Exception:
pass
return match.end(), value
def parse_kv_pair(data: str, index: int, target: dict):
match = KEY_RE.match(data, index)
if not match:
return None
*keys, key = parse_key(match[0])
match = EQUALS_RE.match(data, match.end())
assert match
index = match.end()
index, value = parse_value(data, index)
get_target(target, keys)[key] = value
return index
def parse_toml(data: str):
root = {}
target = root
index = 0
while True:
match = EXPRESSION_RE.search(data, index)
if not match:
break
if match.group('subtable'):
index = match.end()
path, is_list = match.group('path', 'is_list')
target = get_target(root, list(parse_key(path)), bool(is_list))
continue
index = parse_kv_pair(data, match.start(), target)
assert index is not None
return root
def main():
import argparse
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument('infile', type=Path, help='The TOML file to read as input')
args = parser.parse_args()
with args.infile.open('r', encoding='utf-8') as file:
data = file.read()
def default(obj):
if isinstance(obj, (dt.date, dt.time, dt.datetime)):
return obj.isoformat()
print(json.dumps(parse_toml(data), default=default))
if __name__ == '__main__':
main()
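A quick usage sketch for the parser above (assumes `devscripts` is importable, i.e. running from the repository root):

```python
from devscripts.tomlparse import parse_toml

# Inline tables and basic strings are within the supported subset
doc = 'project = { name = "yt-dlp", requires-python = ">=3.8" }'
print(parse_toml(doc)['project']['name'])  # -> yt-dlp
```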

devscripts/update-formulae.py

@@ -1,39 +0,0 @@
#!/usr/bin/env python3
"""
Usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
version can be either 0-aligned (yt-dlp version) or normalized (PyPi version)
"""
# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
import re
import urllib.request
from devscripts.utils import read_file, write_file
filename, version = sys.argv[1:]
normalized_version = '.'.join(str(int(x)) for x in version.split('.'))
pypi_release = json.loads(urllib.request.urlopen(
'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
).read().decode())
tarball_file = next(x for x in pypi_release['urls'] if x['filename'].endswith('.tar.gz'))
sha256sum = tarball_file['digests']['sha256']
url = tarball_file['url']
formulae_text = read_file(filename)
formulae_text = re.sub(r'sha256 "[0-9a-f]*?"', 'sha256 "%s"' % sha256sum, formulae_text, count=1)
formulae_text = re.sub(r'url "[^"]*?"', 'url "%s"' % url, formulae_text, count=1)
write_file(filename, formulae_text)

devscripts/update-version.py

@@ -9,22 +9,22 @@
import argparse
import contextlib
import datetime as dt
import sys
from datetime import datetime
from devscripts.utils import read_version, run_process, write_file
def get_new_version(version, revision):
if not version:
version = datetime.utcnow().strftime('%Y.%m.%d')
version = dt.datetime.now(dt.timezone.utc).strftime('%Y.%m.%d')
if revision:
assert revision.isdigit(), 'Revision must be a number'
assert revision.isdecimal(), 'Revision must be a number'
else:
old_version = read_version().split('.')
if version.split('.') == old_version[:3]:
revision = str(int((old_version + [0])[3]) + 1)
revision = str(int(([*old_version, 0])[3]) + 1)
return f'{version}.{revision}' if revision else version
@@ -46,6 +46,10 @@ def get_git_head():
UPDATE_HINT = None
CHANNEL = {channel!r}
ORIGIN = {origin!r}
_pkg_version = {package_version!r}
'''
if __name__ == '__main__':
@@ -53,6 +57,12 @@ def get_git_head():
parser.add_argument(
'-c', '--channel', default='stable',
help='Select update channel (default: %(default)s)')
parser.add_argument(
'-r', '--origin', default='local',
help='Select origin/repository (default: %(default)s)')
parser.add_argument(
'-s', '--suffix', default='',
help='Add an alphanumeric suffix to the package version, e.g. "dev"')
parser.add_argument(
'-o', '--output', default='yt_dlp/version.py',
help='The output file to write to (default: %(default)s)')
@@ -66,6 +76,7 @@ def get_git_head():
args.version if args.version and '.' in args.version
else get_new_version(None, args.version))
write_file(args.output, VERSION_TEMPLATE.format(
version=version, git_head=git_head, channel=args.channel))
version=version, git_head=git_head, channel=args.channel, origin=args.origin,
package_version=f'{version}{args.suffix}'))
print(f'version={version} ({args.channel}), head={git_head}')
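To illustrate the revision logic in `get_new_version()` above with assumed values: if a release was already made today, a fourth version field is appended and bumped:

```python
# Assumed values for illustration only
old_version = '2024.07.08'.split('.')  # what read_version() would return
version = '2024.07.08'                 # today's date, identical to the release
if version.split('.') == old_version[:3]:
    revision = str(int(([*old_version, 0])[3]) + 1)  # no 4th field yet -> '1'
    print(f'{version}.{revision}')     # 2024.07.08.1
```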

devscripts/update_changelog.py Executable file

@@ -0,0 +1,26 @@
#!/usr/bin/env python3

# Allow direct execution
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from pathlib import Path

from devscripts.make_changelog import create_changelog, create_parser
from devscripts.utils import read_file, read_version, write_file

# Always run after devscripts/update-version.py, and run before `make doc|pypi-files|tar|all`

if __name__ == '__main__':
    parser = create_parser()
    parser.description = 'Update an existing changelog file with an entry for a new release'
    parser.add_argument(
        '--changelog-path', type=Path, default=Path(__file__).parent.parent / 'Changelog.md',
        help='path to the Changelog file')
    args = parser.parse_args()
    new_entry = create_changelog(args)

    header, sep, changelog = read_file(args.changelog_path).partition('\n### ')
    write_file(args.changelog_path, f'{header}{sep}{read_version()}\n{new_entry}\n{sep}{changelog}')

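The splice on the last two lines relies on `str.partition` keeping the separator, so the new release entry lands between the file header and the previous `### ` heading. A toy run of the same idea (the sample changelog text is invented):

    header, sep, changelog = 'Intro\n### 2024.07.07\nold notes'.partition('\n### ')
    print(f'{header}{sep}2024.07.08\n- new entry\n{sep}{changelog}')
    # Intro
    # ### 2024.07.08
    # - new entry
    #
    # ### 2024.07.07
    # old notes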
View File

@@ -13,10 +13,11 @@ def write_file(fname, content, mode='w'):
return f.write(content)
def read_version(fname='yt_dlp/version.py'):
def read_version(fname='yt_dlp/version.py', varname='__version__'):
"""Get the version without importing the package"""
exec(compile(read_file(fname), fname, 'exec'))
return locals()['__version__']
items = {}
exec(compile(read_file(fname), fname, 'exec'), items)
return items[varname]
def get_filename_args(has_infile=False, default_outfile=None):

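With the new `varname` parameter, the helper can read either module-level variable that `yt_dlp/version.py` now carries; a usage sketch, assuming it runs from a repository checkout:

    from devscripts.utils import read_version

    version = read_version()                            # reads __version__
    pkg_version = read_version(varname='_pkg_version')  # reads the PyPI-style version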
View File

@@ -9,15 +9,15 @@
import yt_dlp
ZSH_COMPLETION_FILE = "completions/zsh/_yt-dlp"
ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"
ZSH_COMPLETION_FILE = 'completions/zsh/_yt-dlp'
ZSH_COMPLETION_TEMPLATE = 'devscripts/zsh-completion.in'
def build_completion(opt_parser):
opts = [opt for group in opt_parser.option_groups
for opt in group.option_list]
opts_file = [opt for opt in opts if opt.metavar == "FILE"]
opts_dir = [opt for opt in opts if opt.metavar == "DIR"]
opts_file = [opt for opt in opts if opt.metavar == 'FILE']
opts_dir = [opt for opt in opts if opt.metavar == 'DIR']
fileopts = []
for opt in opts_file:
@@ -38,11 +38,11 @@ def build_completion(opt_parser):
with open(ZSH_COMPLETION_TEMPLATE) as f:
template = f.read()
template = template.replace("{{fileopts}}", "|".join(fileopts))
template = template.replace("{{diropts}}", "|".join(diropts))
template = template.replace("{{flags}}", " ".join(flags))
template = template.replace('{{fileopts}}', '|'.join(fileopts))
template = template.replace('{{diropts}}', '|'.join(diropts))
template = template.replace('{{flags}}', ' '.join(flags))
with open(ZSH_COMPLETION_FILE, "w") as f:
with open(ZSH_COMPLETION_FILE, 'w') as f:
f.write(template)

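The completion file itself is produced by plain placeholder substitution on the template; a toy reduction of that step (the template string and option lists here are invented):

    template = 'fileopts=({{fileopts}}) diropts=({{diropts}}) flags=({{flags}})'
    template = template.replace('{{fileopts}}', '|'.join(['--batch-file', '--cookies']))
    template = template.replace('{{diropts}}', '|'.join(['--paths']))
    template = template.replace('{{flags}}', ' '.join(['--help', '--version']))
    print(template)
    # fileopts=(--batch-file|--cookies) diropts=(--paths) flags=(--help --version)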
View File

@@ -1,5 +1,384 @@
[build-system]
build-backend = 'setuptools.build_meta'
# https://github.com/yt-dlp/yt-dlp/issues/5941
# https://github.com/pypa/distutils/issues/17
requires = ['setuptools > 50']
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "yt-dlp"
maintainers = [
{name = "pukkandan", email = "pukkandan.ytdlp@gmail.com"},
{name = "Grub4K", email = "contact@grub4k.xyz"},
{name = "bashonly", email = "bashonly@protonmail.com"},
{name = "coletdjnz", email = "coletdjnz@protonmail.com"},
]
description = "A feature-rich command-line audio/video downloader"
readme = "README.md"
requires-python = ">=3.8"
keywords = [
"youtube-dl",
"video-downloader",
"youtube-downloader",
"sponsorblock",
"youtube-dlc",
"yt-dlp",
]
license = {file = "LICENSE"}
classifiers = [
"Topic :: Multimedia :: Video",
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"License :: OSI Approved :: The Unlicense (Unlicense)",
"Operating System :: OS Independent",
]
dynamic = ["version"]
dependencies = [
"brotli; implementation_name=='cpython'",
"brotlicffi; implementation_name!='cpython'",
"certifi",
"mutagen",
"pycryptodomex",
"requests>=2.32.2,<3",
"urllib3>=1.26.17,<3",
"websockets>=12.0",
]
[project.optional-dependencies]
default = []
curl-cffi = ["curl-cffi==0.5.10; implementation_name=='cpython'"]
secretstorage = [
"cffi",
"secretstorage",
]
build = [
"build",
"hatchling",
"pip",
"setuptools",
"wheel",
]
dev = [
"pre-commit",
"yt-dlp[static-analysis]",
"yt-dlp[test]",
]
static-analysis = [
"autopep8~=2.0",
"ruff~=0.5.0",
]
test = [
"pytest~=8.1",
]
pyinstaller = [
"pyinstaller>=6.7.0", # for compat with setuptools>=70
]
py2exe = [
"py2exe>=0.12",
]
[project.urls]
Documentation = "https://github.com/yt-dlp/yt-dlp#readme"
Repository = "https://github.com/yt-dlp/yt-dlp"
Tracker = "https://github.com/yt-dlp/yt-dlp/issues"
Funding = "https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators"
[project.scripts]
yt-dlp = "yt_dlp:main"
[project.entry-points.pyinstaller40]
hook-dirs = "yt_dlp.__pyinstaller:get_hook_dirs"
[tool.hatch.build.targets.sdist]
include = [
"/yt_dlp",
"/devscripts",
"/test",
"/.gitignore", # included by default, needed for auto-excludes
"/Changelog.md",
"/LICENSE", # included as license
"/pyproject.toml", # included by default
"/README.md", # included as readme
"/setup.cfg",
"/supportedsites.md",
]
artifacts = [
"/yt_dlp/extractor/lazy_extractors.py",
"/completions",
"/AUTHORS", # included by default
"/README.txt",
"/yt-dlp.1",
]
[tool.hatch.build.targets.wheel]
packages = ["yt_dlp"]
artifacts = ["/yt_dlp/extractor/lazy_extractors.py"]
[tool.hatch.build.targets.wheel.shared-data]
"completions/bash/yt-dlp" = "share/bash-completion/completions/yt-dlp"
"completions/zsh/_yt-dlp" = "share/zsh/site-functions/_yt-dlp"
"completions/fish/yt-dlp.fish" = "share/fish/vendor_completions.d/yt-dlp.fish"
"README.txt" = "share/doc/yt_dlp/README.txt"
"yt-dlp.1" = "share/man/man1/yt-dlp.1"
[tool.hatch.version]
path = "yt_dlp/version.py"
pattern = "_pkg_version = '(?P<version>[^']+)'"
[tool.hatch.envs.default]
features = ["curl-cffi", "default"]
dependencies = ["pre-commit"]
path = ".venv"
installer = "uv"
[tool.hatch.envs.default.scripts]
setup = "pre-commit install --config .pre-commit-hatch.yaml"
yt-dlp = "python -Werror -Xdev -m yt_dlp {args}"
[tool.hatch.envs.hatch-static-analysis]
detached = true
features = ["static-analysis"]
dependencies = [] # override hatch ruff version
config-path = "pyproject.toml"
[tool.hatch.envs.hatch-static-analysis.scripts]
format-check = "autopep8 --diff {args:.}"
format-fix = "autopep8 --in-place {args:.}"
lint-check = "ruff check {args:.}"
lint-fix = "ruff check --fix {args:.}"
[tool.hatch.envs.hatch-test]
features = ["test"]
dependencies = [
"pytest-randomly~=3.15",
"pytest-rerunfailures~=14.0",
"pytest-xdist[psutil]~=3.5",
]
[tool.hatch.envs.hatch-test.scripts]
run = "python -m devscripts.run_tests {args}"
run-cov = "echo Code coverage not implemented && exit 1"
[[tool.hatch.envs.hatch-test.matrix]]
python = [
"3.8",
"3.9",
"3.10",
"3.11",
"3.12",
"pypy3.8",
"pypy3.9",
"pypy3.10",
]
[tool.ruff]
line-length = 120
[tool.ruff.lint]
ignore = [
"E402", # module-import-not-at-top-of-file
"E501", # line-too-long
"E731", # lambda-assignment
"E741", # ambiguous-variable-name
"UP036", # outdated-version-block
"B006", # mutable-argument-default
"B008", # function-call-in-default-argument
"B011", # assert-false
"B017", # assert-raises-exception
"B023", # function-uses-loop-variable (false positives)
"B028", # no-explicit-stacklevel
"B904", # raise-without-from-inside-except
"C401", # unnecessary-generator-set
"C402", # unnecessary-generator-dict
"PIE790", # unnecessary-placeholder
"SIM102", # collapsible-if
"SIM108", # if-else-block-instead-of-if-exp
"SIM112", # uncapitalized-environment-variables
"SIM113", # enumerate-for-loop
"SIM114", # if-with-same-arms
"SIM115", # open-file-with-context-handler
"SIM117", # multiple-with-statements
"SIM223", # expr-and-false
"SIM300", # yoda-conditions
"TD001", # invalid-todo-tag
"TD002", # missing-todo-author
"TD003", # missing-todo-link
"PLE0604", # invalid-all-object (false positives)
"PLE0643", # potential-index-error (false positives)
"PLW0603", # global-statement
"PLW1510", # subprocess-run-without-check
"PLW2901", # redefined-loop-name
"RUF001", # ambiguous-unicode-character-string
"RUF012", # mutable-class-default
"RUF100", # unused-noqa (flake8 has slightly different behavior)
]
select = [
"E", # pycodestyle Error
"W", # pycodestyle Warning
"F", # Pyflakes
"I", # isort
"Q", # flake8-quotes
"N803", # invalid-argument-name
"N804", # invalid-first-argument-name-for-class-method
"UP", # pyupgrade
"B", # flake8-bugbear
"A", # flake8-builtins
"COM", # flake8-commas
"C4", # flake8-comprehensions
"FA", # flake8-future-annotations
"ISC", # flake8-implicit-str-concat
"ICN003", # banned-import-from
"PIE", # flake8-pie
"T20", # flake8-print
"RSE", # flake8-raise
"RET504", # unnecessary-assign
"SIM", # flake8-simplify
"TID251", # banned-api
"TD", # flake8-todos
"PLC", # Pylint Convention
"PLE", # Pylint Error
"PLW", # Pylint Warning
"RUF", # Ruff-specific rules
]
[tool.ruff.lint.per-file-ignores]
"devscripts/lazy_load_template.py" = [
"F401", # unused-import
]
"!yt_dlp/extractor/**.py" = [
"I", # isort
"ICN003", # banned-import-from
"T20", # flake8-print
"A002", # builtin-argument-shadowing
"C408", # unnecessary-collection-call
]
"yt_dlp/jsinterp.py" = [
"UP031", # printf-string-formatting
]
[tool.ruff.lint.isort]
known-first-party = [
"bundle",
"devscripts",
"test",
]
relative-imports-order = "closest-to-furthest"
[tool.ruff.lint.flake8-quotes]
docstring-quotes = "double"
multiline-quotes = "single"
inline-quotes = "single"
avoid-escape = false
[tool.ruff.lint.pep8-naming]
classmethod-decorators = [
"yt_dlp.utils.classproperty",
]
[tool.ruff.lint.flake8-import-conventions]
banned-from = [
"base64",
"datetime",
"functools",
"glob",
"hashlib",
"itertools",
"json",
"math",
"os",
"pathlib",
"random",
"re",
"string",
"sys",
"time",
"urllib.parse",
"uuid",
"xml",
]
[tool.ruff.lint.flake8-tidy-imports.banned-api]
"yt_dlp.compat.compat_str".msg = "Use `str` instead."
"yt_dlp.compat.compat_b64decode".msg = "Use `base64.b64decode` instead."
"yt_dlp.compat.compat_urlparse".msg = "Use `urllib.parse` instead."
"yt_dlp.compat.compat_parse_qs".msg = "Use `urllib.parse.parse_qs` instead."
"yt_dlp.compat.compat_urllib_parse_unquote".msg = "Use `urllib.parse.unquote` instead."
"yt_dlp.compat.compat_urllib_parse_urlencode".msg = "Use `urllib.parse.urlencode` instead."
"yt_dlp.compat.compat_urllib_parse_urlparse".msg = "Use `urllib.parse.urlparse` instead."
"yt_dlp.compat.compat_shlex_quote".msg = "Use `yt_dlp.utils.shell_quote` instead."
"yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
[tool.autopep8]
max_line_length = 120
recursive = true
exit-code = true
jobs = 0
select = [
"E101",
"E112",
"E113",
"E115",
"E116",
"E117",
"E121",
"E122",
"E123",
"E124",
"E125",
"E126",
"E127",
"E128",
"E129",
"E131",
"E201",
"E202",
"E203",
"E211",
"E221",
"E222",
"E223",
"E224",
"E225",
"E226",
"E227",
"E228",
"E231",
"E241",
"E242",
"E251",
"E252",
"E261",
"E262",
"E265",
"E266",
"E271",
"E272",
"E273",
"E274",
"E275",
"E301",
"E302",
"E303",
"E304",
"E305",
"E306",
"E502",
"E701",
"E702",
"E704",
"W391",
"W504",
]
[tool.pytest.ini_options]
addopts = "-ra -v --strict-markers"
markers = [
"download",
]

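A notable detail in the new configuration is `[tool.hatch.version]`, which scrapes the version out of `yt_dlp/version.py` with a regex instead of importing the package at build time. A quick check of what that pattern matches (the sample line mirrors what devscripts/update-version.py writes):

    import re

    pattern = r"_pkg_version = '(?P<version>[^']+)'"
    source = "_pkg_version = '2024.07.08'\n"
    print(re.search(pattern, source)['version'])  # 2024.07.08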
View File

@@ -1,6 +0,0 @@
mutagen
pycryptodomex
websockets
brotli; platform_python_implementation=='CPython'
brotlicffi; platform_python_implementation!='CPython'
certifi

View File

@@ -1,14 +1,9 @@
[wheel]
universal = true
[flake8]
exclude = build,venv,.tox,.git,.pytest_cache
ignore = E402,E501,E731,E741,W503
max_line_length = 120
per_file_ignores =
devscripts/lazy_load_template.py: F401
yt_dlp/utils/__init__.py: F401, F403
[autoflake]
@@ -19,15 +14,9 @@ remove-duplicate-keys = true
remove-unused-variables = true
[tool:pytest]
addopts = -ra -v --strict-markers
markers =
download
[tox:tox]
skipsdist = true
envlist = py{36,37,38,39,310,311},pypy{36,37,38,39}
envlist = py{38,39,310,311,312},pypy{38,39,310}
skip_missing_interpreters = true
[testenv] # tox
@@ -40,7 +29,7 @@ setenv =
[isort]
py_version = 37
py_version = 38
multi_line_output = VERTICAL_HANGING_INDENT
line_length = 80
reverse_relative = true

175
setup.py
View File

@@ -1,175 +0,0 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import subprocess
import warnings
try:
from setuptools import Command, find_packages, setup
setuptools_available = True
except ImportError:
from distutils.core import Command, setup
setuptools_available = False
from devscripts.utils import read_file, read_version
VERSION = read_version()
DESCRIPTION = 'A youtube-dl fork with additional features and patches'
LONG_DESCRIPTION = '\n\n'.join((
'Official repository: <https://github.com/yt-dlp/yt-dlp>',
'**PS**: Some links in this document will not work since this is a copy of the README.md from Github',
read_file('README.md')))
REQUIREMENTS = read_file('requirements.txt').splitlines()
def packages():
if setuptools_available:
return find_packages(exclude=('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts'))
return [
'yt_dlp', 'yt_dlp.extractor', 'yt_dlp.downloader', 'yt_dlp.postprocessor', 'yt_dlp.compat',
]
def py2exe_params():
warnings.warn(
'py2exe builds do not support pycryptodomex and need VC++14 to run. '
'It is recommended to run "pyinst.py" to build using pyinstaller instead')
return {
'console': [{
'script': './yt_dlp/__main__.py',
'dest_base': 'yt-dlp',
'icon_resources': [(1, 'devscripts/logo.ico')],
}],
'version_info': {
'version': VERSION,
'description': DESCRIPTION,
'comments': LONG_DESCRIPTION.split('\n')[0],
'product_name': 'yt-dlp',
'product_version': VERSION,
},
'options': {
'bundle_files': 0,
'compressed': 1,
'optimize': 2,
'dist_dir': './dist',
'excludes': ['Crypto', 'Cryptodome'], # py2exe cannot import Crypto
'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
# Modules that are only imported dynamically must be added here
'includes': ['yt_dlp.compat._legacy'],
},
'zipfile': None,
}
def build_params():
files_spec = [
('share/bash-completion/completions', ['completions/bash/yt-dlp']),
('share/zsh/site-functions', ['completions/zsh/_yt-dlp']),
('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
('share/doc/yt_dlp', ['README.txt']),
('share/man/man1', ['yt-dlp.1'])
]
data_files = []
for dirname, files in files_spec:
resfiles = []
for fn in files:
if not os.path.exists(fn):
warnings.warn(f'Skipping file {fn} since it is not present. Try running " make pypi-files " first')
else:
resfiles.append(fn)
data_files.append((dirname, resfiles))
params = {'data_files': data_files}
if setuptools_available:
params['entry_points'] = {
'console_scripts': ['yt-dlp = yt_dlp:main'],
'pyinstaller40': ['hook-dirs = yt_dlp.__pyinstaller:get_hook_dirs'],
}
else:
params['scripts'] = ['yt-dlp']
return params
class build_lazy_extractors(Command):
description = 'Build the extractor lazy loading module'
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
if self.dry_run:
print('Skipping build of lazy extractors in dry run mode')
return
subprocess.run([sys.executable, 'devscripts/make_lazy_extractors.py'])
def main():
if sys.argv[1:2] == ['py2exe']:
params = py2exe_params()
try:
from py2exe import freeze
except ImportError:
import py2exe # noqa: F401
warnings.warn('You are using an outdated version of py2exe. Support for this version will be removed in the future')
params['console'][0].update(params.pop('version_info'))
params['options'] = {'py2exe': params.pop('options')}
else:
return freeze(**params)
else:
params = build_params()
setup(
name='yt-dlp',
version=VERSION,
maintainer='pukkandan',
maintainer_email='pukkandan.ytdlp@gmail.com',
description=DESCRIPTION,
long_description=LONG_DESCRIPTION,
long_description_content_type='text/markdown',
url='https://github.com/yt-dlp/yt-dlp',
packages=packages(),
install_requires=REQUIREMENTS,
python_requires='>=3.7',
project_urls={
'Documentation': 'https://github.com/yt-dlp/yt-dlp#readme',
'Source': 'https://github.com/yt-dlp/yt-dlp',
'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues',
'Funding': 'https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators',
},
classifiers=[
'Topic :: Multimedia :: Video',
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Programming Language :: Python',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: Implementation',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'License :: Public Domain',
'Operating System :: OS Independent',
],
cmdclass={'build_lazy_extractors': build_lazy_extractors},
**params
)
main()

File diff suppressed because it is too large

64
test/conftest.py Normal file
View File

@@ -0,0 +1,64 @@
import inspect

import pytest

from yt_dlp.networking import RequestHandler
from yt_dlp.networking.common import _REQUEST_HANDLERS
from yt_dlp.utils._utils import _YDLLogger as FakeLogger


@pytest.fixture
def handler(request):
    RH_KEY = getattr(request, 'param', None)
    if not RH_KEY:
        return
    if inspect.isclass(RH_KEY) and issubclass(RH_KEY, RequestHandler):
        handler = RH_KEY
    elif RH_KEY in _REQUEST_HANDLERS:
        handler = _REQUEST_HANDLERS[RH_KEY]
    else:
        pytest.skip(f'{RH_KEY} request handler is not available')

    class HandlerWrapper(handler):
        RH_KEY = handler.RH_KEY

        def __init__(self, **kwargs):
            super().__init__(logger=FakeLogger, **kwargs)

    return HandlerWrapper


@pytest.fixture(autouse=True)
def skip_handler(request, handler):
    """usage: pytest.mark.skip_handler('my_handler', 'reason')"""
    for marker in request.node.iter_markers('skip_handler'):
        if marker.args[0] == handler.RH_KEY:
            pytest.skip(marker.args[1] if len(marker.args) > 1 else '')


@pytest.fixture(autouse=True)
def skip_handler_if(request, handler):
    """usage: pytest.mark.skip_handler_if('my_handler', lambda request: True, 'reason')"""
    for marker in request.node.iter_markers('skip_handler_if'):
        if marker.args[0] == handler.RH_KEY and marker.args[1](request):
            pytest.skip(marker.args[2] if len(marker.args) > 2 else '')


@pytest.fixture(autouse=True)
def skip_handlers_if(request, handler):
    """usage: pytest.mark.skip_handlers_if(lambda request, handler: True, 'reason')"""
    for marker in request.node.iter_markers('skip_handlers_if'):
        if handler and marker.args[0](request, handler):
            pytest.skip(marker.args[1] if len(marker.args) > 1 else '')


def pytest_configure(config):
    config.addinivalue_line(
        'markers', 'skip_handler(handler): skip test for the given handler',
    )
    config.addinivalue_line(
        'markers', 'skip_handler_if(handler): skip test for the given handler if condition is true',
    )
    config.addinivalue_line(
        'markers', 'skip_handlers_if(handler): skip test for handlers when the condition is true',
    )

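A hedged sketch of how these fixtures are typically driven: pytest's indirect parametrization delivers the handler key through `request.param`, and the autouse fixtures react to the skip markers (the test body below is illustrative only):

    import pytest

    @pytest.mark.parametrize('handler', ['Urllib'], indirect=True)
    @pytest.mark.skip_handler('Urllib', 'demonstration skip')
    def test_send(handler):
        with handler() as rh:
            assert rh.RH_KEY == 'Urllib'  # never reached: the marker above skips it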
View File

@@ -10,14 +10,14 @@
import yt_dlp.extractor
from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name
from yt_dlp.utils import preferredencoding, write_string
from yt_dlp.utils import preferredencoding, try_call, write_string, find_available_port
if 'pytest' in sys.modules:
import pytest
is_download_test = pytest.mark.download
else:
def is_download_test(testClass):
return testClass
def is_download_test(test_class):
return test_class
def get_params(override=None):
@@ -45,10 +45,10 @@ def try_rm(filename):
def report_warning(message, *args, **kwargs):
'''
"""
Print the message to stderr, it will be prefixed with 'WARNING:'
If stderr is a tty file the 'WARNING:' will be colored
'''
"""
if sys.stderr.isatty() and compat_os_name != 'nt':
_msg_header = '\033[0;33mWARNING:\033[0m'
else:
@@ -138,15 +138,14 @@ def expect_value(self, got, expected, field):
elif isinstance(expected, list) and isinstance(got, list):
self.assertEqual(
len(expected), len(got),
'Expect a list of length %d, but got a list of length %d for field %s' % (
len(expected), len(got), field))
f'Expect a list of length {len(expected)}, but got a list of length {len(got)} for field {field}')
for index, (item_got, item_expected) in enumerate(zip(got, expected)):
type_got = type(item_got)
type_expected = type(item_expected)
self.assertEqual(
type_expected, type_got,
'Type mismatch for list item at index %d for field %s, expected %r, got %r' % (
index, field, type_expected, type_got))
f'Type mismatch for list item at index {index} for field {field}, '
f'expected {type_expected!r}, got {type_got!r}')
expect_value(self, item_got, item_expected, field)
else:
if isinstance(expected, str) and expected.startswith('md5:'):
@@ -214,14 +213,23 @@ def sanitize(key, value):
test_info_dict = {
key: sanitize(key, value) for key, value in got_dict.items()
if value is not None and key not in IGNORED_FIELDS and not any(
key.startswith(f'{prefix}_') for prefix in IGNORED_PREFIXES)
if value is not None and key not in IGNORED_FIELDS and (
not any(key.startswith(f'{prefix}_') for prefix in IGNORED_PREFIXES)
or key == '_old_archive_ids')
}
# display_id may be generated from id
if test_info_dict.get('display_id') == test_info_dict.get('id'):
test_info_dict.pop('display_id')
# Remove deprecated fields
for old in YoutubeDL._deprecated_multivalue_fields:
test_info_dict.pop(old, None)
# release_year may be generated from release_date
if try_call(lambda: test_info_dict['release_year'] == int(test_info_dict['release_date'][:4])):
test_info_dict.pop('release_year')
# Check url for flat entries
if got_dict.get('_type', 'video') != 'video' and got_dict.get('url'):
test_info_dict['url'] = got_dict['url']
@@ -237,11 +245,11 @@ def expect_info_dict(self, got_dict, expected_dict):
if expected_dict.get('ext'):
mandatory_fields.extend(('url', 'ext'))
for key in mandatory_fields:
self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
self.assertTrue(got_dict.get(key), f'Missing mandatory field {key}')
# Check for mandatory fields that are automatically set by YoutubeDL
if got_dict.get('_type', 'video') == 'video':
for key in ['webpage_url', 'extractor', 'extractor_key']:
self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
self.assertTrue(got_dict.get(key), f'Missing field: {key}')
test_info_dict = sanitize_got_info_dict(got_dict)
@@ -249,7 +257,7 @@ def expect_info_dict(self, got_dict, expected_dict):
if missing_keys:
def _repr(v):
if isinstance(v, str):
return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')
return "'{}'".format(v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n'))
elif isinstance(v, type):
return v.__name__
else:
@@ -266,8 +274,7 @@ def _repr(v):
write_string(info_dict_str.replace('\n', '\n '), out=sys.stderr)
self.assertFalse(
missing_keys,
'Missing keys in test definition: %s' % (
', '.join(sorted(missing_keys))))
'Missing keys in test definition: {}'.format(', '.join(sorted(missing_keys))))
def assertRegexpMatches(self, text, regexp, msg=None):
@@ -276,9 +283,9 @@ def assertRegexpMatches(self, text, regexp, msg=None):
else:
m = re.match(regexp, text)
if not m:
note = 'Regexp didn\'t match: %r not found' % (regexp)
note = f'Regexp didn\'t match: {regexp!r} not found'
if len(text) < 1000:
note += ' in %r' % text
note += f' in {text!r}'
if msg is None:
msg = note
else:
@@ -301,7 +308,7 @@ def assertLessEqual(self, got, expected, msg=None):
def assertEqual(self, got, expected, msg=None):
if not (got == expected):
if got != expected:
if msg is None:
msg = f'{got!r} not equal to {expected!r}'
self.assertTrue(got == expected, msg)
@@ -324,3 +331,13 @@ def http_server_port(httpd):
else:
sock = httpd.socket
return sock.getsockname()[1]
def verify_address_availability(address):
    if find_available_port(address) is None:
        pytest.skip(f'Unable to bind to source address {address} (address may not exist)')


def validate_and_send(rh, req):
    rh.validate(req)
    return rh.send(req)

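`verify_address_availability` leans on `yt_dlp.utils.find_available_port`, which returns None when the given source address cannot be bound on the current host; for example:

    from yt_dlp.utils import find_available_port

    # An ephemeral port number if the address is bindable, else None
    if find_available_port('127.0.0.1') is None:
        print('127.0.0.1 is not usable as a source address here')
    else:
        print('127.0.0.1 can be used as a source address')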
View File

@@ -262,19 +262,19 @@ def test_search_json_ld_realworld(self):
''',
{
'chapters': [
{"title": "Explosie Turnhout", "start_time": 70, "end_time": 440},
{"title": "Jaarwisseling", "start_time": 440, "end_time": 1179},
{"title": "Natuurbranden Colorado", "start_time": 1179, "end_time": 1263},
{"title": "Klimaatverandering", "start_time": 1263, "end_time": 1367},
{"title": "Zacht weer", "start_time": 1367, "end_time": 1383},
{"title": "Financiële balans", "start_time": 1383, "end_time": 1484},
{"title": "Club Brugge", "start_time": 1484, "end_time": 1575},
{"title": "Mentale gezondheid bij topsporters", "start_time": 1575, "end_time": 1728},
{"title": "Olympische Winterspelen", "start_time": 1728, "end_time": 1873},
{"title": "Sober oudjaar in Nederland", "start_time": 1873, "end_time": 2079.23}
{'title': 'Explosie Turnhout', 'start_time': 70, 'end_time': 440},
{'title': 'Jaarwisseling', 'start_time': 440, 'end_time': 1179},
{'title': 'Natuurbranden Colorado', 'start_time': 1179, 'end_time': 1263},
{'title': 'Klimaatverandering', 'start_time': 1263, 'end_time': 1367},
{'title': 'Zacht weer', 'start_time': 1367, 'end_time': 1383},
{'title': 'Financiële balans', 'start_time': 1383, 'end_time': 1484},
{'title': 'Club Brugge', 'start_time': 1484, 'end_time': 1575},
{'title': 'Mentale gezondheid bij topsporters', 'start_time': 1575, 'end_time': 1728},
{'title': 'Olympische Winterspelen', 'start_time': 1728, 'end_time': 1873},
{'title': 'Sober oudjaar in Nederland', 'start_time': 1873, 'end_time': 2079.23},
],
'title': 'Het journaal - Aflevering 365 (Seizoen 2021)'
}, {}
'title': 'Het journaal - Aflevering 365 (Seizoen 2021)',
}, {},
),
(
# test multiple thumbnails in a list
@@ -301,13 +301,13 @@ def test_search_json_ld_realworld(self):
'thumbnails': [{'url': 'https://www.rainews.it/cropgd/640x360/dl/img/2021/12/30/1640886376927_GettyImages.jpg'}],
},
{},
)
),
]
for html, expected_dict, search_json_ld_kwargs in _TESTS:
expect_dict(
self,
self.ie._search_json_ld(html, None, **search_json_ld_kwargs),
expected_dict
expected_dict,
)
def test_download_json(self):
@@ -366,7 +366,7 @@ def test_parse_html5_media_entries(self):
'height': 740,
'tbr': 1500,
}],
'thumbnail': '//pics.r18.com/digital/amateur/mgmr105/mgmr105jp.jpg'
'thumbnail': '//pics.r18.com/digital/amateur/mgmr105/mgmr105jp.jpg',
})
# from https://www.csfd.cz/
@@ -419,9 +419,9 @@ def test_parse_html5_media_entries(self):
'height': 1080,
}],
'subtitles': {
'cs': [{'url': 'https://video.csfd.cz/files/subtitles/163/344/163344115_4c388b.srt'}]
'cs': [{'url': 'https://video.csfd.cz/files/subtitles/163/344/163344115_4c388b.srt'}],
},
'thumbnail': 'https://img.csfd.cz/files/images/film/video/preview/163/344/163344118_748d20.png?h360'
'thumbnail': 'https://img.csfd.cz/files/images/film/video/preview/163/344/163344118_748d20.png?h360',
})
# from https://tamasha.com/v/Kkdjw
@@ -452,7 +452,7 @@ def test_parse_html5_media_entries(self):
'ext': 'mp4',
'format_id': '144p',
'height': 144,
}]
}],
})
# from https://www.directvnow.com
@@ -470,7 +470,7 @@ def test_parse_html5_media_entries(self):
'formats': [{
'ext': 'mp4',
'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4',
}]
}],
})
# from https://www.directvnow.com
@@ -488,7 +488,7 @@ def test_parse_html5_media_entries(self):
'formats': [{
'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4',
'ext': 'mp4',
}]
}],
})
# from https://www.klarna.com/uk/
@@ -547,8 +547,8 @@ def test_extract_jwplayer_data_realworld(self):
'id': 'XEgvuql4',
'formats': [{
'url': 'rtmp://192.138.214.154/live/sjclive',
'ext': 'flv'
}]
'ext': 'flv',
}],
})
# from https://www.pornoxo.com/videos/7564/striptease-from-sexy-secretary/
@@ -588,8 +588,8 @@ def test_extract_jwplayer_data_realworld(self):
'thumbnail': 'https://t03.vipstreamservice.com/thumbs/pxo-full/2009-12/14/a4b2157147afe5efa93ce1978e0265289c193874e02597.flv-full-13.jpg',
'formats': [{
'url': 'https://cdn.pornoxo.com/key=MF+oEbaxqTKb50P-w9G3nA,end=1489689259,ip=104.199.146.27/ip=104.199.146.27/speed=6573765/buffer=3.0/2009-12/4b2157147afe5efa93ce1978e0265289c193874e02597.flv',
'ext': 'flv'
}]
'ext': 'flv',
}],
})
# from http://www.indiedb.com/games/king-machine/videos
@@ -610,12 +610,12 @@ def test_extract_jwplayer_data_realworld(self):
'formats': [{
'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode_mp4/king-machine-trailer.mp4',
'height': 360,
'ext': 'mp4'
'ext': 'mp4',
}, {
'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode720p_mp4/king-machine-trailer.mp4',
'height': 720,
'ext': 'mp4'
}]
'ext': 'mp4',
}],
})
def test_parse_m3u8_formats(self):
@@ -866,7 +866,7 @@ def test_parse_m3u8_formats(self):
'height': 1080,
'vcodec': 'avc1.64002a',
}],
{}
{},
),
(
'bipbop_16x9',
@@ -990,45 +990,45 @@ def test_parse_m3u8_formats(self):
'en': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng_forced/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}],
'fr': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra_forced/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}],
'es': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa_forced/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}],
'ja': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn_forced/prog_index.m3u8',
'ext': 'vtt',
'protocol': 'm3u8_native'
'protocol': 'm3u8_native',
}],
}
},
),
]
for m3u8_file, m3u8_url, expected_formats, expected_subs in _TEST_CASES:
with open('./test/testdata/m3u8/%s.m3u8' % m3u8_file, encoding='utf-8') as f:
with open(f'./test/testdata/m3u8/{m3u8_file}.m3u8', encoding='utf-8') as f:
formats, subs = self.ie._parse_m3u8_formats_and_subtitles(
f.read(), m3u8_url, ext='mp4')
self.ie._sort_formats(formats)
@@ -1366,14 +1366,14 @@ def test_parse_mpd_formats(self):
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments',
}
]
},
)
],
},
),
]
for mpd_file, mpd_url, mpd_base_url, expected_formats, expected_subtitles in _TEST_CASES:
with open('./test/testdata/mpd/%s.mpd' % mpd_file, encoding='utf-8') as f:
with open(f'./test/testdata/mpd/{mpd_file}.mpd', encoding='utf-8') as f:
formats, subtitles = self.ie._parse_mpd_formats_and_subtitles(
compat_etree_fromstring(f.read().encode()),
mpd_base_url=mpd_base_url, mpd_url=mpd_url)
@@ -1408,7 +1408,7 @@ def test_parse_ism_formats(self):
'sampling_rate': 48000,
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video-100',
@@ -1431,7 +1431,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401FDA0544EFFC2D002CBC40000003004000000C03C60CA80000000168EF32C8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video-326',
@@ -1454,7 +1454,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401FDA0241FE23FFC3BC83BA44000003000400000300C03C60CA800000000168EF32C8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video-698',
@@ -1477,7 +1477,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401FDA0350BFB97FF06AF06AD1000003000100000300300F1832A00000000168EF32C8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video-1493',
@@ -1500,7 +1500,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401FDA011C3DE6FFF0D890D871000003000100000300300F1832A00000000168EF32C8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video-4482',
@@ -1523,7 +1523,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401FDA01A816F97FFC1ABC1AB440000003004000000C03C60CA80000000168EF32C8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}],
{
@@ -1538,10 +1538,10 @@ def test_parse_ism_formats(self):
'duration': 8880746666,
'timescale': 10000000,
'fourcc': 'TTML',
'codec_private_data': ''
}
}
]
'codec_private_data': '',
},
},
],
},
),
(
@@ -1571,7 +1571,7 @@ def test_parse_ism_formats(self):
'sampling_rate': 48000,
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'audio_deu_1-224',
@@ -1597,7 +1597,7 @@ def test_parse_ism_formats(self):
'sampling_rate': 48000,
'channels': 6,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-23',
@@ -1622,7 +1622,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '000000016742C00CDB06077E5C05A808080A00000300020000030009C0C02EE0177CC6300F142AE00000000168CA8DC8',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-403',
@@ -1647,7 +1647,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D4014E98323B602D4040405000003000100000300320F1429380000000168EAECF2',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-680',
@@ -1672,7 +1672,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-1253',
@@ -1698,7 +1698,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-2121',
@@ -1723,7 +1723,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D401EECA0601BD80B50101014000003000400000300C83C58B6580000000168E93B3C80',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-3275',
@@ -1748,7 +1748,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D4020ECA02802DD80B501010140000003004000000C83C60C65800000000168E93B3C80',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-5300',
@@ -1773,7 +1773,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}, {
'format_id': 'video_deu-8079',
@@ -1798,7 +1798,7 @@ def test_parse_ism_formats(self):
'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80',
'channels': 2,
'bits_per_sample': 16,
'nal_unit_length_field': 4
'nal_unit_length_field': 4,
},
}],
{},
@@ -1806,7 +1806,7 @@ def test_parse_ism_formats(self):
]
for ism_file, ism_url, expected_formats, expected_subtitles in _TEST_CASES:
with open('./test/testdata/ism/%s.Manifest' % ism_file, encoding='utf-8') as f:
with open(f'./test/testdata/ism/{ism_file}.Manifest', encoding='utf-8') as f:
formats, subtitles = self.ie._parse_ism_formats_and_subtitles(
compat_etree_fromstring(f.read().encode()), ism_url=ism_url)
self.ie._sort_formats(formats)
@@ -1827,12 +1827,12 @@ def test_parse_f4m_formats(self):
'tbr': 2148,
'width': 1280,
'height': 720,
}]
}],
),
]
for f4m_file, f4m_url, expected_formats in _TEST_CASES:
with open('./test/testdata/f4m/%s.f4m' % f4m_file, encoding='utf-8') as f:
with open(f'./test/testdata/f4m/{f4m_file}.f4m', encoding='utf-8') as f:
formats = self.ie._parse_f4m_formats(
compat_etree_fromstring(f.read().encode()),
f4m_url, None)
@@ -1873,13 +1873,13 @@ def test_parse_xspf(self):
}, {
'manifest_url': 'https://example.org/src/foo_xspf.xspf',
'url': 'https://example.com/track3.mp3',
}]
}]
}],
}],
),
]
for xspf_file, xspf_url, expected_entries in _TEST_CASES:
with open('./test/testdata/xspf/%s.xspf' % xspf_file, encoding='utf-8') as f:
with open(f'./test/testdata/xspf/{xspf_file}.xspf', encoding='utf-8') as f:
entries = self.ie._parse_xspf(
compat_etree_fromstring(f.read().encode()),
xspf_file, xspf_url=xspf_url, xspf_base_url=xspf_url)
@@ -1902,10 +1902,19 @@ def test_response_with_expected_status_returns_content(self):
server_thread.start()
(content, urlh) = self.ie._download_webpage_handle(
'http://127.0.0.1:%d/teapot' % port, None,
f'http://127.0.0.1:{port}/teapot', None,
expected_status=TEAPOT_RESPONSE_STATUS)
self.assertEqual(content, TEAPOT_RESPONSE_BODY)
def test_search_nextjs_data(self):
data = '<script id="__NEXT_DATA__" type="application/json">{"props":{}}</script>'
self.assertEqual(self.ie._search_nextjs_data(data, None), {'props': {}})
self.assertEqual(self.ie._search_nextjs_data('', None, fatal=False), {})
self.assertEqual(self.ie._search_nextjs_data('', None, default=None), None)
self.assertEqual(self.ie._search_nextjs_data('', None, default={}), {})
with self.assertWarns(DeprecationWarning):
self.assertEqual(self.ie._search_nextjs_data('', None, default='{}'), {})
if __name__ == '__main__':
unittest.main()

View File

@@ -8,10 +8,11 @@
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import contextlib
import copy
import json
from test.helper import FakeYDL, assertRegexpMatches
from test.helper import FakeYDL, assertRegexpMatches, try_rm
from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name
from yt_dlp.extractor import YoutubeIE
@@ -24,6 +25,7 @@
int_or_none,
match_filter_func,
)
from yt_dlp.utils.traversal import traverse_obj
TEST_URL = 'http://localhost/sample.mp4'
@@ -128,8 +130,8 @@ def test(inp, *expected, multi=False):
'allow_multiple_audio_streams': multi,
})
ydl.process_ie_result(info_dict.copy())
downloaded = map(lambda x: x['format_id'], ydl.downloaded_info_dicts)
self.assertEqual(list(downloaded), list(expected))
downloaded = [x['format_id'] for x in ydl.downloaded_info_dicts]
self.assertEqual(downloaded, list(expected))
test('20/47', '47')
test('20/71/worst', '35')
@@ -139,6 +141,8 @@ def test(inp, *expected, multi=False):
test('example-with-dashes', 'example-with-dashes')
test('all', '2', '47', '45', 'example-with-dashes', '35')
test('mergeall', '2+47+45+example-with-dashes+35', multi=True)
# See: https://github.com/yt-dlp/yt-dlp/pulls/8797
test('7_a/worst', '35')
def test_format_selection_audio(self):
formats = [
@@ -180,7 +184,7 @@ def test_format_selection_audio_exts(self):
]
info_dict = _make_result(formats)
ydl = YDL({'format': 'best'})
ydl = YDL({'format': 'best', 'format_sort': ['abr', 'ext']})
ydl.sort_formats(info_dict)
ydl.process_ie_result(copy.deepcopy(info_dict))
downloaded = ydl.downloaded_info_dicts[0]
@@ -192,7 +196,7 @@ def test_format_selection_audio_exts(self):
downloaded = ydl.downloaded_info_dicts[0]
self.assertEqual(downloaded['format_id'], 'mp3-64')
ydl = YDL({'prefer_free_formats': True})
ydl = YDL({'prefer_free_formats': True, 'format_sort': ['abr', 'ext']})
ydl.sort_formats(info_dict)
ydl.process_ie_result(copy.deepcopy(info_dict))
downloaded = ydl.downloaded_info_dicts[0]
@@ -512,10 +516,8 @@ def test_format_filtering(self):
self.assertEqual(downloaded_ids, ['D', 'C', 'B'])
ydl = YDL({'format': 'best[height<40]'})
try:
with contextlib.suppress(ExtractorError):
ydl.process_ie_result(info_dict)
except ExtractorError:
pass
self.assertEqual(ydl.downloaded_info_dicts, [])
def test_default_format_spec(self):
@@ -630,7 +632,6 @@ def test_add_extra_info(self):
self.assertEqual(test_dict['playlist'], 'funny videos')
outtmpl_info = {
'id': '1234',
'id': '1234',
'ext': 'mp4',
'width': None,
@@ -650,8 +651,8 @@ def test_add_extra_info(self):
'formats': [
{'id': 'id 1', 'height': 1080, 'width': 1920},
{'id': 'id 2', 'height': 720},
{'id': 'id 3'}
]
{'id': 'id 3'},
],
}
def test_prepare_outtmpl_and_filename(self):
@@ -684,7 +685,8 @@ def test(tmpl, expected, *, info=None, **params):
test('%(id)s.%(ext)s', '1234.mp4')
test('%(duration_string)s', ('27:46:40', '27-46-40'))
test('%(resolution)s', '1080p')
test('%(playlist_index)s', '001')
test('%(playlist_index|)s', '001')
test('%(playlist_index&{}!)s', '1!')
test('%(playlist_autonumber)s', '02')
test('%(autonumber)s', '00001')
test('%(autonumber+2)03d', '005', autonumber_start=3)
@@ -727,7 +729,7 @@ def expect_same_infodict(out):
self.assertEqual(got_dict.get(info_field), expected, info_field)
return True
test('%()j', (expect_same_infodict, str))
test('%()j', (expect_same_infodict, None))
# NA placeholder
NA_TEST_OUTTMPL = '%(uploader_date)s-%(width)d-%(x|def)s-%(id)s.%(ext)s'
@@ -770,7 +772,7 @@ def expect_same_infodict(out):
test('%(formats)j', (json.dumps(FORMATS), None))
test('%(formats)#j', (
json.dumps(FORMATS, indent=4),
json.dumps(FORMATS, indent=4).replace(':', '').replace('"', "").replace('\n', ' ')
json.dumps(FORMATS, indent=4).replace(':', '').replace('"', '').replace('\n', ' '),
))
test('%(title5).3B', 'á')
test('%(title5)U', 'áéí 𝐀')
@@ -783,9 +785,9 @@ def expect_same_infodict(out):
test('%(title4)#S', 'foo_bar_test')
test('%(title4).10S', ('foo bar ', 'foo bar' + ('#' if compat_os_name == 'nt' else ' ')))
if compat_os_name == 'nt':
test('%(title4)q', ('"foo \\"bar\\" test"', "foo bar test"))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', 'id 1 id 2 id 3'))
test('%(formats.0.id)#q', ('"id 1"', 'id 1'))
test('%(title4)q', ('"foo ""bar"" test"', None))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', None))
test('%(formats.0.id)#q', ('"id 1"', None))
else:
test('%(title4)q', ('\'foo "bar" test\'', '\'foo bar test\''))
test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'")
@@ -796,6 +798,7 @@ def expect_same_infodict(out):
test('%(title|%)s %(title|%%)s', '% %%')
test('%(id+1-height+3)05d', '00158')
test('%(width+100)05d', 'NA')
test('%(filesize*8)d', '8192')
test('%(formats.0) 15s', ('% 15s' % FORMATS[0], None))
test('%(formats.0)r', (repr(FORMATS[0]), None))
test('%(height.0)03d', '001')
@@ -829,6 +832,7 @@ def expect_same_infodict(out):
test('%(id&hi {:>10} {}|)s', 'hi 1234 1234')
test(R'%(id&{0} {}|)s', 'NA')
test(R'%(id&{0.1}|)s', 'NA')
test('%(height&{:,d})S', '1,080')
# Laziness
def gen():
@@ -838,8 +842,8 @@ def gen():
# Empty filename
test('%(foo|)s-%(bar|)s.%(ext)s', '-.mp4')
# test('%(foo|)s.%(ext)s', ('.mp4', '_.mp4')) # fixme
# test('%(foo|)s', ('', '_')) # fixme
# test('%(foo|)s.%(ext)s', ('.mp4', '_.mp4')) # FIXME: ?
# test('%(foo|)s', ('', '_')) # FIXME: ?
# Environment variable expansion for prepare_filename
os.environ['__yt_dlp_var'] = 'expanded'
@@ -856,7 +860,7 @@ def gen():
test('Hello %(title1)s', 'Hello $PATH')
test('Hello %(title2)s', 'Hello %PATH%')
test('%(title3)s', ('foo/bar\\test', 'foobartest'))
test('folder/%(title3)s', ('folder/foo/bar\\test', 'folder%sfoobartest' % os.path.sep))
test('folder/%(title3)s', ('folder/foo/bar\\test', f'folder{os.path.sep}foobartest'))
def test_format_note(self):
ydl = YoutubeDL()
@@ -878,22 +882,22 @@ def run(self, info):
f.write('EXAMPLE')
return [info['filepath']], info
def run_pp(params, PP):
def run_pp(params, pp):
with open(filename, 'w') as f:
f.write('EXAMPLE')
ydl = YoutubeDL(params)
ydl.add_post_processor(PP())
ydl.add_post_processor(pp())
ydl.post_process(filename, {'filepath': filename})
run_pp({'keepvideo': True}, SimplePP)
self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
self.assertTrue(os.path.exists(filename), f'{filename} doesn\'t exist')
self.assertTrue(os.path.exists(audiofile), f'{audiofile} doesn\'t exist')
os.unlink(filename)
os.unlink(audiofile)
run_pp({'keepvideo': False}, SimplePP)
self.assertFalse(os.path.exists(filename), '%s exists' % filename)
self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
self.assertFalse(os.path.exists(filename), f'{filename} exists')
self.assertTrue(os.path.exists(audiofile), f'{audiofile} doesn\'t exist')
os.unlink(audiofile)
class ModifierPP(PostProcessor):
@@ -903,7 +907,7 @@ def run(self, info):
return [], info
run_pp({'keepvideo': False}, ModifierPP)
self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
self.assertTrue(os.path.exists(filename), f'{filename} doesn\'t exist')
os.unlink(filename)
def test_match_filter(self):
@@ -915,7 +919,7 @@ def test_match_filter(self):
'duration': 30,
'filesize': 10 * 1024,
'playlist_id': '42',
'uploader': "變態妍字幕版 太妍 тест",
'uploader': '變態妍字幕版 太妍 тест',
'creator': "тест ' 123 ' тест--",
'webpage_url': 'http://example.com/watch?v=shenanigans',
}
@@ -928,7 +932,7 @@ def test_match_filter(self):
'description': 'foo',
'filesize': 5 * 1024,
'playlist_id': '43',
'uploader': "тест 123",
'uploader': 'тест 123',
'webpage_url': 'http://example.com/watch?v=SHENANIGANS',
}
videos = [first, second]
@@ -936,7 +940,7 @@ def test_match_filter(self):
def get_videos(filter_=None):
ydl = YDL({'match_filter': filter_, 'simulate': True})
for v in videos:
ydl.process_ie_result(v, download=True)
ydl.process_ie_result(v.copy(), download=True)
return [v['id'] for v in ydl.downloaded_info_dicts]
res = get_videos()
@@ -1175,7 +1179,7 @@ def _real_extract(self, url):
})
return {
'id': video_id,
'title': 'Video %s' % video_id,
'title': f'Video {video_id}',
'formats': formats,
}
@@ -1189,8 +1193,8 @@ def _entries(self):
'_type': 'url_transparent',
'ie_key': VideoIE.ie_key(),
'id': video_id,
'url': 'video:%s' % video_id,
'title': 'Video Transparent %s' % video_id,
'url': f'video:{video_id}',
'title': f'Video Transparent {video_id}',
}
def _real_extract(self, url):
@@ -1213,6 +1217,129 @@ def _real_extract(self, url):
self.assertEqual(downloaded['extractor'], 'Video')
self.assertEqual(downloaded['extractor_key'], 'Video')
def test_header_cookies(self):
from http.cookiejar import Cookie
ydl = FakeYDL()
ydl.report_warning = lambda *_, **__: None
def cookie(name, value, version=None, domain='', path='', secure=False, expires=None):
return Cookie(
version or 0, name, value, None, False,
domain, bool(domain), bool(domain), path, bool(path),
secure, expires, False, None, None, rest={})
_test_url = 'https://yt.dlp/test'
def test(encoded_cookies, cookies, *, headers=False, round_trip=None, error_re=None):
def _test():
ydl.cookiejar.clear()
ydl._load_cookies(encoded_cookies, autoscope=headers)
if headers:
ydl._apply_header_cookies(_test_url)
data = {'url': _test_url}
ydl._calc_headers(data)
self.assertCountEqual(
map(vars, ydl.cookiejar), map(vars, cookies),
'Extracted cookiejar.Cookie is not the same')
if not headers:
self.assertEqual(
data.get('cookies'), round_trip or encoded_cookies,
'Cookie is not the same as round trip')
ydl.__dict__['_YoutubeDL__header_cookies'] = []
with self.subTest(msg=encoded_cookies):
if not error_re:
_test()
return
with self.assertRaisesRegex(Exception, error_re):
_test()
test('test=value; Domain=.yt.dlp', [cookie('test', 'value', domain='.yt.dlp')])
test('test=value', [cookie('test', 'value')], error_re=r'Unscoped cookies are not allowed')
test('cookie1=value1; Domain=.yt.dlp; Path=/test; cookie2=value2; Domain=.yt.dlp; Path=/', [
cookie('cookie1', 'value1', domain='.yt.dlp', path='/test'),
cookie('cookie2', 'value2', domain='.yt.dlp', path='/')])
test('test=value; Domain=.yt.dlp; Path=/test; Secure; Expires=9999999999', [
cookie('test', 'value', domain='.yt.dlp', path='/test', secure=True, expires=9999999999)])
test('test="value; "; path=/test; domain=.yt.dlp', [
cookie('test', 'value; ', domain='.yt.dlp', path='/test')],
round_trip='test="value\\073 "; Domain=.yt.dlp; Path=/test')
test('name=; Domain=.yt.dlp', [cookie('name', '', domain='.yt.dlp')],
round_trip='name=""; Domain=.yt.dlp')
test('test=value', [cookie('test', 'value', domain='.yt.dlp')], headers=True)
test('cookie1=value; Domain=.yt.dlp; cookie2=value', [], headers=True, error_re=r'Invalid syntax')
ydl.deprecated_feature = ydl.report_error
test('test=value', [], headers=True, error_re=r'Passing cookies as a header is a potential security risk')
def test_infojson_cookies(self):
TEST_FILE = 'test_infojson_cookies.info.json'
TEST_URL = 'https://example.com/example.mp4'
COOKIES = 'a=b; Domain=.example.com; c=d; Domain=.example.com'
COOKIE_HEADER = {'Cookie': 'a=b; c=d'}
ydl = FakeYDL()
ydl.process_info = lambda x: ydl._write_info_json('test', x, TEST_FILE)
def make_info(info_header_cookies=False, fmts_header_cookies=False, cookies_field=False):
fmt = {'url': TEST_URL}
if fmts_header_cookies:
fmt['http_headers'] = COOKIE_HEADER
if cookies_field:
fmt['cookies'] = COOKIES
return _make_result([fmt], http_headers=COOKIE_HEADER if info_header_cookies else None)
def test(initial_info, note):
result = {}
result['processed'] = ydl.process_ie_result(initial_info)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=f'No cookies set in cookiejar after initial process when {note}')
ydl.cookiejar.clear()
with open(TEST_FILE) as infojson:
result['loaded'] = ydl.sanitize_info(json.load(infojson), True)
result['final'] = ydl.process_ie_result(result['loaded'].copy(), download=False)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=f'No cookies set in cookiejar after final process when {note}')
ydl.cookiejar.clear()
for key in ('processed', 'loaded', 'final'):
info = result[key]
self.assertIsNone(
traverse_obj(info, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False),
msg=f'Cookie header not removed in {key} result when {note}')
self.assertEqual(
traverse_obj(info, ((None, ('formats', 0)), 'cookies'), get_all=False), COOKIES,
msg=f'No cookies field found in {key} result when {note}')
test({'url': TEST_URL, 'http_headers': COOKIE_HEADER, 'id': '1', 'title': 'x'}, 'no formats field')
test(make_info(info_header_cookies=True), 'info_dict header cookies')
test(make_info(fmts_header_cookies=True), 'format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True), 'info_dict and format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True, cookies_field=True), 'all cookies fields')
test(make_info(cookies_field=True), 'cookies format field')
test({'url': TEST_URL, 'cookies': COOKIES, 'id': '1', 'title': 'x'}, 'info_dict cookies field only')
try_rm(TEST_FILE)
def test_add_headers_cookie(self):
def check_for_cookie_header(result):
return traverse_obj(result, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False)
ydl = FakeYDL({'http_headers': {'Cookie': 'a=b'}})
ydl._apply_header_cookies(_make_result([])['webpage_url']) # Scope to input webpage URL: .example.com
fmt = {'url': 'https://example.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies in result info_dict')
self.assertEqual(result.get('cookies'), 'a=b; Domain=.example.com', msg='No cookies were set in cookies field')
self.assertIn('a=b', ydl.cookiejar.get_cookie_header(fmt['url']), msg='No cookies were set in cookiejar')
fmt = {'url': 'https://wrong.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies for wrong domain')
self.assertFalse(result.get('cookies'), msg='Cookies set in cookies field for wrong domain')
self.assertFalse(ydl.cookiejar.get_cookie_header(fmt['url']), msg='Cookies set in cookiejar for wrong domain')
if __name__ == '__main__':
unittest.main()

View File

@@ -17,10 +17,10 @@
class TestYoutubeDLCookieJar(unittest.TestCase):
def test_keep_session_cookies(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
cookiejar.load()
tf = tempfile.NamedTemporaryFile(delete=False)
try:
cookiejar.save(filename=tf.name, ignore_discard=True, ignore_expires=True)
cookiejar.save(filename=tf.name)
temp = tf.read().decode()
self.assertTrue(re.search(
r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpiresEmpty\s+YoutubeDLExpiresEmptyValue', temp))
@@ -32,7 +32,7 @@ def test_keep_session_cookies(self):
def test_strip_httponly_prefix(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
cookiejar.load()
def assert_cookie_has_value(key):
self.assertEqual(cookiejar._cookies['www.foobar.foobar']['/'][key].value, key + '_VALUE')
@@ -42,17 +42,25 @@ def assert_cookie_has_value(key):
def test_malformed_cookies(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/malformed_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
cookiejar.load()
# Cookies should be empty since all malformed cookie file entries
# will be ignored
self.assertFalse(cookiejar._cookies)
def test_get_cookie_header(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
cookiejar.load(ignore_discard=True, ignore_expires=True)
cookiejar.load()
header = cookiejar.get_cookie_header('https://www.foobar.foobar')
self.assertIn('HTTPONLY_COOKIE', header)
def test_get_cookies_for_url(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
cookiejar.load()
cookies = cookiejar.get_cookies_for_url('https://www.foobar.foobar/')
self.assertEqual(len(cookies), 2)
cookies = cookiejar.get_cookies_for_url('https://foobar.foobar/')
self.assertFalse(cookies)
if __name__ == '__main__':
unittest.main()

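The dropped flags reflect that `YoutubeDLCookieJar.load()` and `.save()` now keep session and expired cookies by default; a minimal sketch under that assumption:

    from yt_dlp.cookies import YoutubeDLCookieJar

    jar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
    jar.load()  # previously: jar.load(ignore_discard=True, ignore_expires=True)
    print(jar.get_cookie_header('https://www.foobar.foobar'))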
View File

@@ -87,7 +87,7 @@ def test_decrypt_text(self):
password = intlist_to_bytes(self.key).decode()
encrypted = base64.b64encode(
intlist_to_bytes(self.iv[:8])
+ b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae'
+ b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae',
).decode()
decrypted = (aes_decrypt_text(encrypted, password, 16))
self.assertEqual(decrypted, self.secret_msg)
@@ -95,7 +95,7 @@ def test_decrypt_text(self):
password = intlist_to_bytes(self.key).decode()
encrypted = base64.b64encode(
intlist_to_bytes(self.iv[:8])
+ b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83'
+ b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83',
).decode()
decrypted = (aes_decrypt_text(encrypted, password, 32))
self.assertEqual(decrypted, self.secret_msg)
@@ -132,16 +132,16 @@ def test_pad_block(self):
block = [0x21, 0xA0, 0x43, 0xFF]
self.assertEqual(pad_block(block, 'pkcs7'),
block + [0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C])
[*block, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C])
self.assertEqual(pad_block(block, 'iso7816'),
block + [0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
[*block, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
self.assertEqual(pad_block(block, 'whitespace'),
block + [0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20])
[*block, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20])
self.assertEqual(pad_block(block, 'zero'),
block + [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
[*block, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
block = list(range(16))
for mode in ('pkcs7', 'iso7816', 'whitespace', 'zero'):

View File

@@ -9,30 +9,30 @@
import struct
import urllib.parse
from yt_dlp import compat
from yt_dlp.compat import urllib # isort: split
from yt_dlp.compat import (
compat_etree_fromstring,
compat_expanduser,
compat_urllib_parse_unquote,
compat_urllib_parse_urlencode,
compat_urllib_parse_unquote, # noqa: TID251
compat_urllib_parse_urlencode, # noqa: TID251
)
from yt_dlp.compat.urllib.request import getproxies
class TestCompat(unittest.TestCase):
def test_compat_passthrough(self):
with self.assertWarns(DeprecationWarning):
compat.compat_basestring
_ = compat.compat_basestring
with self.assertWarns(DeprecationWarning):
compat.WINDOWS_VT_MODE
_ = compat.WINDOWS_VT_MODE
# TODO: Test submodule
# compat.asyncio.events # Must not raise error
self.assertEqual(urllib.request.getproxies, getproxies)
with self.assertWarns(DeprecationWarning):
compat.compat_pycrypto_AES # Must not raise error
_ = compat.compat_pycrypto_AES # Must not raise error
def test_compat_expanduser(self):
old_home = os.environ.get('HOME')
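The `_ =` assignments keep the attribute access (which is what raises the warning) while silencing useless-expression lint. A toy sketch of the passthrough pattern under test, assuming a PEP 562 module-level `__getattr__`:

    import warnings

    _DEPRECATED = {'compat_basestring': str}

    def __getattr__(name):  # module-level __getattr__ (PEP 562)
        if name in _DEPRECATED:
            warnings.warn(f'{name} is deprecated', DeprecationWarning, stacklevel=2)
            return _DEPRECATED[name]
        raise AttributeError(name)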


@@ -71,7 +71,7 @@ def _generate_expected_groups():
Path('/etc/yt-dlp.conf'),
Path('/etc/yt-dlp/config'),
Path('/etc/yt-dlp/config.txt'),
]
],
}


@@ -1,5 +1,5 @@
import datetime as dt
import unittest
from datetime import datetime, timezone
from yt_dlp import cookies
from yt_dlp.cookies import (
@@ -67,6 +67,7 @@ def test_get_desktop_environment(self):
({'XDG_CURRENT_DESKTOP': 'GNOME'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME:GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME : GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'Unity', 'DESKTOP_SESSION': 'gnome-fallback'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE5),
@@ -106,7 +107,7 @@ def test_chrome_cookie_decryptor_linux_v11(self):
def test_chrome_cookie_decryptor_windows_v10(self):
with MonkeyPatch(cookies, {
'_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&'
'_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&',
}):
encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad='
value = '32101439'
@@ -121,24 +122,24 @@ def test_chrome_cookie_decryptor_mac_v10(self):
self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_safari_cookie_parsing(self):
cookies = \
b'cook\x00\x00\x00\x01\x00\x00\x00i\x00\x00\x01\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00Y' \
b'\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x008\x00\x00\x00B\x00\x00\x00F\x00\x00\x00H' \
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x03\xa5>\xc3A\x00\x00\x80\xc3\x07:\xc3A' \
b'localhost\x00foo\x00/\x00test%20%3Bcookie\x00\x00\x00\x054\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01' \
b'\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00' \
b'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00('
cookies = (
b'cook\x00\x00\x00\x01\x00\x00\x00i\x00\x00\x01\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00Y'
b'\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x008\x00\x00\x00B\x00\x00\x00F\x00\x00\x00H'
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x03\xa5>\xc3A\x00\x00\x80\xc3\x07:\xc3A'
b'localhost\x00foo\x00/\x00test%20%3Bcookie\x00\x00\x00\x054\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01'
b'\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00'
b'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00(')
jar = parse_safari_cookies(cookies)
self.assertEqual(len(jar), 1)
cookie = list(jar)[0]
cookie = next(iter(jar))
self.assertEqual(cookie.domain, 'localhost')
self.assertEqual(cookie.port, None)
self.assertEqual(cookie.path, '/')
self.assertEqual(cookie.name, 'foo')
self.assertEqual(cookie.value, 'test%20%3Bcookie')
self.assertFalse(cookie.secure)
expected_expiration = datetime(2021, 6, 18, 21, 39, 19, tzinfo=timezone.utc)
expected_expiration = dt.datetime(2021, 6, 18, 21, 39, 19, tzinfo=dt.timezone.utc)
self.assertEqual(cookie.expires, int(expected_expiration.timestamp()))
def test_pbkdf2_sha1(self):
@@ -164,7 +165,7 @@ def _run_tests(self, *cases):
attributes = {
key: value
for key, value in dict(morsel).items()
if value != ""
if value != ''
}
self.assertEqual(attributes, expected_attributes, message)
@@ -174,133 +175,133 @@ def test_parsing(self):
self._run_tests(
# Copied from https://github.com/python/cpython/blob/v3.10.7/Lib/test/test_http_cookies.py
(
"Test basic cookie",
"chips=ahoy; vienna=finger",
{"chips": "ahoy", "vienna": "finger"},
'Test basic cookie',
'chips=ahoy; vienna=finger',
{'chips': 'ahoy', 'vienna': 'finger'},
),
(
"Test quoted cookie",
'Test quoted cookie',
'keebler="E=mc2; L=\\"Loves\\"; fudge=\\012;"',
{"keebler": 'E=mc2; L="Loves"; fudge=\012;'},
{'keebler': 'E=mc2; L="Loves"; fudge=\012;'},
),
(
"Allow '=' in an unquoted value",
"keebler=E=mc2",
{"keebler": "E=mc2"},
'keebler=E=mc2',
{'keebler': 'E=mc2'},
),
(
"Allow cookies with ':' in their name",
"key:term=value:term",
{"key:term": "value:term"},
'key:term=value:term',
{'key:term': 'value:term'},
),
(
"Allow '[' and ']' in cookie values",
"a=b; c=[; d=r; f=h",
{"a": "b", "c": "[", "d": "r", "f": "h"},
'a=b; c=[; d=r; f=h',
{'a': 'b', 'c': '[', 'd': 'r', 'f': 'h'},
),
(
"Test basic cookie attributes",
'Test basic cookie attributes',
'Customer="WILE_E_COYOTE"; Version=1; Path=/acme',
{"Customer": ("WILE_E_COYOTE", {"version": "1", "path": "/acme"})},
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'path': '/acme'})},
),
(
"Test flag only cookie attributes",
'Test flag only cookie attributes',
'Customer="WILE_E_COYOTE"; HttpOnly; Secure',
{"Customer": ("WILE_E_COYOTE", {"httponly": True, "secure": True})},
{'Customer': ('WILE_E_COYOTE', {'httponly': True, 'secure': True})},
),
(
"Test flag only attribute with values",
"eggs=scrambled; httponly=foo; secure=bar; Path=/bacon",
{"eggs": ("scrambled", {"httponly": "foo", "secure": "bar", "path": "/bacon"})},
'Test flag only attribute with values',
'eggs=scrambled; httponly=foo; secure=bar; Path=/bacon',
{'eggs': ('scrambled', {'httponly': 'foo', 'secure': 'bar', 'path': '/bacon'})},
),
(
"Test special case for 'expires' attribute, 4 digit year",
'Customer="W"; expires=Wed, 01 Jan 2010 00:00:00 GMT',
{"Customer": ("W", {"expires": "Wed, 01 Jan 2010 00:00:00 GMT"})},
{'Customer': ('W', {'expires': 'Wed, 01 Jan 2010 00:00:00 GMT'})},
),
(
"Test special case for 'expires' attribute, 2 digit year",
'Customer="W"; expires=Wed, 01 Jan 98 00:00:00 GMT',
{"Customer": ("W", {"expires": "Wed, 01 Jan 98 00:00:00 GMT"})},
{'Customer': ('W', {'expires': 'Wed, 01 Jan 98 00:00:00 GMT'})},
),
(
"Test extra spaces in keys and values",
"eggs = scrambled ; secure ; path = bar ; foo=foo ",
{"eggs": ("scrambled", {"secure": True, "path": "bar"}), "foo": "foo"},
'Test extra spaces in keys and values',
'eggs = scrambled ; secure ; path = bar ; foo=foo ',
{'eggs': ('scrambled', {'secure': True, 'path': 'bar'}), 'foo': 'foo'},
),
(
"Test quoted attributes",
'Test quoted attributes',
'Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"',
{"Customer": ("WILE_E_COYOTE", {"version": "1", "path": "/acme"})}
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'path': '/acme'})},
),
# Our own tests that CPython passes
(
"Allow ';' in quoted value",
'chips="a;hoy"; vienna=finger',
{"chips": "a;hoy", "vienna": "finger"},
{'chips': 'a;hoy', 'vienna': 'finger'},
),
(
"Keep only the last set value",
"a=c; a=b",
{"a": "b"},
'Keep only the last set value',
'a=c; a=b',
{'a': 'b'},
),
)
def test_lenient_parsing(self):
self._run_tests(
(
"Ignore and try to skip invalid cookies",
'Ignore and try to skip invalid cookies',
'chips={"ahoy;": 1}; vienna="finger;"',
{"vienna": "finger;"},
{'vienna': 'finger;'},
),
(
"Ignore cookies without a name",
"a=b; unnamed; c=d",
{"a": "b", "c": "d"},
'Ignore cookies without a name',
'a=b; unnamed; c=d',
{'a': 'b', 'c': 'd'},
),
(
"Ignore '\"' cookie without name",
'a=b; "; c=d',
{"a": "b", "c": "d"},
{'a': 'b', 'c': 'd'},
),
(
"Skip all space separated values",
"x a=b c=d x; e=f",
{"a": "b", "c": "d", "e": "f"},
'Skip all space separated values',
'x a=b c=d x; e=f',
{'a': 'b', 'c': 'd', 'e': 'f'},
),
(
"Skip all space separated values",
'Skip all space separated values',
'x a=b; data={"complex": "json", "with": "key=value"}; x c=d x',
{"a": "b", "c": "d"},
{'a': 'b', 'c': 'd'},
),
(
"Expect quote mending",
'Expect quote mending',
'a=b; invalid="; c=d',
{"a": "b", "c": "d"},
{'a': 'b', 'c': 'd'},
),
(
"Reset morsel after invalid to not capture attributes",
"a=b; invalid; Version=1; c=d",
{"a": "b", "c": "d"},
'Reset morsel after invalid to not capture attributes',
'a=b; invalid; Version=1; c=d',
{'a': 'b', 'c': 'd'},
),
(
"Reset morsel after invalid to not capture attributes",
"a=b; $invalid; $Version=1; c=d",
{"a": "b", "c": "d"},
'Reset morsel after invalid to not capture attributes',
'a=b; $invalid; $Version=1; c=d',
{'a': 'b', 'c': 'd'},
),
(
"Continue after non-flag attribute without value",
"a=b; path; Version=1; c=d",
{"a": "b", "c": "d"},
'Continue after non-flag attribute without value',
'a=b; path; Version=1; c=d',
{'a': 'b', 'c': 'd'},
),
(
"Allow cookie attributes with `$` prefix",
'Allow cookie attributes with `$` prefix',
'Customer="WILE_E_COYOTE"; $Version=1; $Secure; $Path=/acme',
{"Customer": ("WILE_E_COYOTE", {"version": "1", "secure": True, "path": "/acme"})},
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'secure': True, 'path': '/acme'})},
),
(
"Invalid Morsel keys should not result in an error",
"Key=Value; [Invalid]=Value; Another=Value",
{"Key": "Value", "Another": "Value"},
'Invalid Morsel keys should not result in an error',
'Key=Value; [Invalid]=Value; Another=Value',
{'Key': 'Value', 'Another': 'Value'},
),
)
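These cases are easy to reproduce interactively; a small comparison against the stdlib parser, assuming `LenientSimpleCookie` from `yt_dlp.cookies` is the class under test:

    from http.cookies import SimpleCookie

    from yt_dlp.cookies import LenientSimpleCookie  # assumed to be the class under test

    header = 'a=b; invalid="; c=d'  # the 'Expect quote mending' case above
    strict, lenient = SimpleCookie(), LenientSimpleCookie()
    strict.load(header)
    lenient.load(header)
    print(sorted(strict))   # stdlib's stricter parser gives up on the stray quote
    print(sorted(lenient))  # ['a', 'c'], matching the expected dict above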


@@ -10,10 +10,7 @@
import collections
import hashlib
import http.client
import json
import socket
import urllib.error
from test.helper import (
assertGreaterEqual,
@@ -23,16 +20,17 @@
gettestcases,
getwebpagetestcases,
is_download_test,
report_warning,
try_rm,
)
import yt_dlp.YoutubeDL # isort: split
from yt_dlp.extractor import get_info_extractor
from yt_dlp.networking.exceptions import HTTPError, TransportError
from yt_dlp.utils import (
DownloadError,
ExtractorError,
UnavailableVideoError,
YoutubeDLError,
format_bytes,
join_nonempty,
)
@@ -95,13 +93,15 @@ def test_template(self):
'playlist', [] if is_playlist else [test_case])
def print_skipping(reason):
print('Skipping %s: %s' % (test_case['name'], reason))
print('Skipping {}: {}'.format(test_case['name'], reason))
self.skipTest(reason)
if not ie.working():
print_skipping('IE marked as not _WORKING')
for tc in test_cases:
if tc.get('expected_exception'):
continue
info_dict = tc.get('info_dict', {})
params = tc.get('params', {})
if not info_dict.get('id'):
@@ -116,7 +116,7 @@ def print_skipping(reason):
for other_ie in other_ies:
if not other_ie.working():
print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
print_skipping(f'test depends on {other_ie.ie_key()}IE, marked as not WORKING')
params = get_params(test_case.get('params', {}))
params['outtmpl'] = tname + '_' + params['outtmpl']
@@ -141,6 +141,14 @@ def get_tc_filename(tc):
res_dict = None
def match_exception(err):
expected_exception = test_case.get('expected_exception')
if not expected_exception:
return False
if err.__class__.__name__ == expected_exception:
return True
return any(exc.__class__.__name__ == expected_exception for exc in err.exc_info)
def try_rm_tcs_files(tcs=None):
if tcs is None:
tcs = test_cases
@@ -162,18 +170,22 @@ def try_rm_tcs_files(tcs=None):
force_generic_extractor=params.get('force_generic_extractor', False))
except (DownloadError, ExtractorError) as err:
# Check if the exception is not a network related one
if (err.exc_info[0] not in (urllib.error.URLError, socket.timeout, UnavailableVideoError, http.client.BadStatusLine)
or (err.exc_info[0] == urllib.error.HTTPError and err.exc_info[1].code == 503)):
if not isinstance(err.exc_info[1], (TransportError, UnavailableVideoError)) or (isinstance(err.exc_info[1], HTTPError) and err.exc_info[1].status == 503):
if match_exception(err):
return
err.msg = f'{getattr(err, "msg", err)} ({tname})'
raise
if try_num == RETRIES:
report_warning('%s failed due to network errors, skipping...' % tname)
return
raise
print(f'Retrying: {try_num} failed tries\n\n##########\n\n')
try_num += 1
except YoutubeDLError as err:
if match_exception(err):
return
raise
else:
break
@@ -227,9 +239,8 @@ def try_rm_tcs_files(tcs=None):
got_fsize = os.path.getsize(tc_filename)
assertGreaterEqual(
self, got_fsize, expected_minsize,
'Expected %s to be at least %s, but it\'s only %s ' %
(tc_filename, format_bytes(expected_minsize),
format_bytes(got_fsize)))
f'Expected {tc_filename} to be at least {format_bytes(expected_minsize)}, '
f'but it\'s only {format_bytes(got_fsize)} ')
if 'md5' in tc:
md5_for_file = _file_md5(tc_filename)
self.assertEqual(tc['md5'], md5_for_file)
@@ -238,7 +249,7 @@ def try_rm_tcs_files(tcs=None):
info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json'
self.assertTrue(
os.path.exists(info_json_fn),
'Missing info file %s' % info_json_fn)
f'Missing info file {info_json_fn}')
with open(info_json_fn, encoding='utf-8') as infof:
info_dict = json.load(infof)
expect_info_dict(self, info_dict, tc.get('info_dict', {}))
@@ -249,7 +260,7 @@ def try_rm_tcs_files(tcs=None):
# extractor returns full results even with extract_flat
res_tcs = [{'info_dict': e} for e in res_dict['entries']]
try_rm_tcs_files(res_tcs)
ydl.close()
return test_template
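The retry logic above only re-runs transport-level failures; condensed into a standalone sketch (names here are illustrative, not yt-dlp's):

    RETRIES = 3  # stands in for the module-level constant

    def run_with_retries(run, is_network_error):
        for try_num in range(1, RETRIES + 1):
            try:
                return run()
            except Exception as err:
                if not is_network_error(err):
                    raise  # non-network errors propagate immediately
                if try_num == RETRIES:
                    print('failed due to network errors, skipping...')
                    return None
                print(f'Retrying: {try_num} failed tries')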


@@ -0,0 +1,139 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import http.cookiejar
from test.helper import FakeYDL
from yt_dlp.downloader.external import (
Aria2cFD,
AxelFD,
CurlFD,
FFmpegFD,
HttpieFD,
WgetFD,
)
TEST_COOKIE = {
'version': 0,
'name': 'test',
'value': 'ytdlp',
'port': None,
'port_specified': False,
'domain': '.example.com',
'domain_specified': True,
'domain_initial_dot': False,
'path': '/',
'path_specified': True,
'secure': False,
'expires': None,
'discard': False,
'comment': None,
'comment_url': None,
'rest': {},
}
TEST_INFO = {'url': 'http://www.example.com/'}
class TestHttpieFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = HttpieFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test', 'http://www.example.com/', 'Cookie:test=ytdlp'])
class TestAxelFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = AxelFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '--', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '-H', 'Cookie: test=ytdlp', '--max-redirect=0', '--', 'http://www.example.com/'])
class TestWgetFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = WgetFD(ydl, {})
self.assertNotIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
class TestCurlFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = CurlFD(ydl, {})
self.assertNotIn('--cookie', downloader._make_cmd('test', TEST_INFO))
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertIn('--cookie', downloader._make_cmd('test', TEST_INFO))
self.assertIn('test=ytdlp', downloader._make_cmd('test', TEST_INFO))
class TestAria2cFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = Aria2cFD(ydl, {})
downloader._make_cmd('test', TEST_INFO)
self.assertFalse(hasattr(downloader, '_cookies_tempfile'))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
cmd = downloader._make_cmd('test', TEST_INFO)
self.assertIn(f'--load-cookies={downloader._cookies_tempfile}', cmd)
@unittest.skipUnless(FFmpegFD.available(), 'ffmpeg not found')
class TestFFmpegFD(unittest.TestCase):
_args = []
def _test_cmd(self, args):
self._args = args
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = FFmpegFD(ydl, {})
downloader._debug_cmd = self._test_cmd
downloader._call_downloader('test', {**TEST_INFO, 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-i', 'http://www.example.com/',
'-c', 'copy', '-f', 'mp4', 'file:test'])
# Test cookies arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
downloader._call_downloader('test', {**TEST_INFO, 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-cookies', 'test=ytdlp; path=/; domain=.example.com;\r\n',
'-i', 'http://www.example.com/', '-c', 'copy', '-f', 'mp4', 'file:test'])
# Test with non-url input (ffmpeg reads from stdin '-' for websockets)
downloader._call_downloader('test', {'url': 'x', 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-i', 'x', '-c', 'copy', '-f', 'mp4', 'file:test'])
if __name__ == '__main__':
unittest.main()
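All of these cookie assertions reduce to the same primitive: look up the jar's cookies for the target URL and render a single header value. A self-contained sketch using the same cookie as TEST_COOKIE:

    import http.cookiejar

    from yt_dlp.cookies import YoutubeDLCookieJar

    jar = YoutubeDLCookieJar()
    jar.set_cookie(http.cookiejar.Cookie(  # TEST_COOKIE's fields, passed positionally
        0, 'test', 'ytdlp', None, False, '.example.com', True, False, '/', True,
        False, None, False, None, None, {}))
    print(jar.get_cookie_header('http://www.example.com/'))  # test=ytdlp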


@@ -16,6 +16,7 @@
from yt_dlp import YoutubeDL
from yt_dlp.downloader.http import HttpFD
from yt_dlp.utils import encodeFilename
from yt_dlp.utils._utils import _YDLLogger as FakeLogger
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -37,9 +38,9 @@ def send_content_range(self, total=None):
end = int(mobj.group(2))
valid_range = start is not None and end is not None
if valid_range:
content_range = 'bytes %d-%d' % (start, end)
content_range = f'bytes {start}-{end}'
if total:
content_range += '/%d' % total
content_range += f'/{total}'
self.send_header('Content-Range', content_range)
return (end - start + 1) if valid_range else total
@@ -67,17 +68,6 @@ def do_GET(self):
assert False
class FakeLogger:
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
class TestHttpFD(unittest.TestCase):
def setUp(self):
self.httpd = http.server.HTTPServer(
@@ -94,7 +84,7 @@ def download(self, params, ep):
filename = 'testfile.mp4'
try_rm(encodeFilename(filename))
self.assertTrue(downloader.real_download(filename, {
'url': 'http://127.0.0.1:%d/%s' % (self.port, ep),
'url': f'http://127.0.0.1:{self.port}/{ep}',
}), ep)
self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep)
try_rm(encodeFilename(filename))
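The reformatted f-strings build standard RFC 7233 Content-Range values; for reference:

    def content_range(start, end, total=None):
        value = f'bytes {start}-{end}'  # e.g. 'bytes 0-99' for the first 100 bytes
        if total:
            value += f'/{total}'  # complete representation length, if known
        return value

    assert content_range(0, 99, 1024) == 'bytes 0-99/1024'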


@@ -45,6 +45,9 @@ def test_lazy_extractors(self):
self.assertTrue(os.path.exists(LAZY_EXTRACTORS))
_, stderr = self.run_yt_dlp(opts=('-s', 'test:'))
# `MIN_RECOMMENDED` emits a deprecated feature warning for deprecated Python versions
if stderr and stderr.startswith('Deprecated Feature: Support for Python'):
stderr = ''
self.assertFalse(stderr)
subprocess.check_call([sys.executable, 'test/test_all_urls.py'], cwd=rootDir, stdout=subprocess.DEVNULL)


@@ -1,500 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import gzip
import http.cookiejar
import http.server
import io
import pathlib
import ssl
import tempfile
import threading
import urllib.error
import urllib.request
import zlib
from test.helper import http_server_port
from yt_dlp import YoutubeDL
from yt_dlp.dependencies import brotli
from yt_dlp.utils import sanitized_Request, urlencode_postdata
from .helper import FakeYDL
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
protocol_version = 'HTTP/1.1'
def log_message(self, format, *args):
pass
def _headers(self):
payload = str(self.headers).encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _redirect(self):
self.send_response(int(self.path[len('/redirect_'):]))
self.send_header('Location', '/method')
self.send_header('Content-Length', '0')
self.end_headers()
def _method(self, method, payload=None):
self.send_response(200)
self.send_header('Content-Length', str(len(payload or '')))
self.send_header('Method', method)
self.end_headers()
if payload:
self.wfile.write(payload)
def _status(self, status):
payload = f'<html>{status} NOT FOUND</html>'.encode()
self.send_response(int(status))
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _read_data(self):
if 'Content-Length' in self.headers:
return self.rfile.read(int(self.headers['Content-Length']))
def do_POST(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('POST', data)
elif self.path.startswith('/headers'):
self._headers()
else:
self._status(404)
def do_HEAD(self):
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('HEAD')
else:
self._status(404)
def do_PUT(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('PUT', data)
else:
self._status(404)
def do_GET(self):
if self.path == '/video.html':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload))) # required for persistent connections
self.end_headers()
self.wfile.write(payload)
elif self.path == '/vid.mp4':
payload = b'\x00\x00\x00\x00\x20\x66\x74[video]'
self.send_response(200)
self.send_header('Content-Type', 'video/mp4')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path == '/%E4%B8%AD%E6%96%87.html':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path == '/%c7%9f':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('GET')
elif self.path.startswith('/headers'):
self._headers()
elif self.path == '/trailing_garbage':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Encoding', 'gzip')
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
f.write(payload)
compressed = buf.getvalue() + b'trailing garbage'
self.send_header('Content-Length', str(len(compressed)))
self.end_headers()
self.wfile.write(compressed)
elif self.path == '/302-non-ascii-redirect':
new_url = f'http://127.0.0.1:{http_server_port(self.server)}/中文.html'
self.send_response(301)
self.send_header('Location', new_url)
self.send_header('Content-Length', '0')
self.end_headers()
elif self.path == '/content-encoding':
encodings = self.headers.get('ytdl-encoding', '')
payload = b'<html><video src="/vid.mp4" /></html>'
for encoding in filter(None, (e.strip() for e in encodings.split(','))):
if encoding == 'br' and brotli:
payload = brotli.compress(payload)
elif encoding == 'gzip':
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
f.write(payload)
payload = buf.getvalue()
elif encoding == 'deflate':
payload = zlib.compress(payload)
elif encoding == 'unsupported':
payload = b'raw'
break
else:
self._status(415)
return
self.send_response(200)
self.send_header('Content-Encoding', encodings)
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
else:
self._status(404)
def send_header(self, keyword, value):
"""
Forcibly allow the HTTP server to send non-percent-encoded, non-ASCII characters in headers.
This goes against what is defined in RFC 3986; however, we need to test that we support it,
since some sites incorrectly do this.
"""
if keyword.lower() == 'connection':
return super().send_header(keyword, value)
if not hasattr(self, '_headers_buffer'):
self._headers_buffer = []
self._headers_buffer.append(f'{keyword}: {value}\r\n'.encode())
class FakeLogger:
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
class TestHTTP(unittest.TestCase):
def setUp(self):
# HTTP server
self.http_httpd = http.server.ThreadingHTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.http_port = http_server_port(self.http_httpd)
self.http_server_thread = threading.Thread(target=self.http_httpd.serve_forever)
# FIXME: we should probably stop the http server thread after each test
# See: https://github.com/yt-dlp/yt-dlp/pull/7094#discussion_r1199746041
self.http_server_thread.daemon = True
self.http_server_thread.start()
# HTTPS server
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.https_httpd = http.server.ThreadingHTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
self.https_httpd.socket = sslctx.wrap_socket(self.https_httpd.socket, server_side=True)
self.https_port = http_server_port(self.https_httpd)
self.https_server_thread = threading.Thread(target=self.https_httpd.serve_forever)
self.https_server_thread.daemon = True
self.https_server_thread.start()
def test_nocheckcertificate(self):
with FakeYDL({'logger': FakeLogger()}) as ydl:
with self.assertRaises(urllib.error.URLError):
ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
with FakeYDL({'logger': FakeLogger(), 'nocheckcertificate': True}) as ydl:
r = ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
self.assertEqual(r.status, 200)
r.close()
def test_percent_encode(self):
with FakeYDL() as ydl:
# Unicode characters should be encoded with uppercase percent-encoding
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/中文.html'))
self.assertEqual(res.status, 200)
res.close()
# don't normalize existing percent encodings
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/%c7%9f'))
self.assertEqual(res.status, 200)
res.close()
def test_unicode_path_redirection(self):
with FakeYDL() as ydl:
r = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/302-non-ascii-redirect'))
self.assertEqual(r.url, f'http://127.0.0.1:{self.http_port}/%E4%B8%AD%E6%96%87.html')
r.close()
def test_redirect(self):
with FakeYDL() as ydl:
def do_req(redirect_status, method):
data = b'testdata' if method in ('POST', 'PUT') else None
res = ydl.urlopen(sanitized_Request(
f'http://127.0.0.1:{self.http_port}/redirect_{redirect_status}', method=method, data=data))
return res.read().decode('utf-8'), res.headers.get('method', '')
# A 303 must use either GET or HEAD for the subsequent request
self.assertEqual(do_req(303, 'POST'), ('', 'GET'))
self.assertEqual(do_req(303, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(303, 'PUT'), ('', 'GET'))
# 301 and 302 turn only POST into a GET; other methods are preserved
self.assertEqual(do_req(301, 'POST'), ('', 'GET'))
self.assertEqual(do_req(301, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(302, 'POST'), ('', 'GET'))
self.assertEqual(do_req(302, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(301, 'PUT'), ('testdata', 'PUT'))
self.assertEqual(do_req(302, 'PUT'), ('testdata', 'PUT'))
# 307 and 308 should not change method
for m in ('POST', 'PUT'):
self.assertEqual(do_req(307, m), ('testdata', m))
self.assertEqual(do_req(308, m), ('testdata', m))
self.assertEqual(do_req(307, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(308, 'HEAD'), ('', 'HEAD'))
# These should not redirect and instead raise an HTTPError
for code in (300, 304, 305, 306):
with self.assertRaises(urllib.error.HTTPError):
do_req(code, 'GET')
def test_content_type(self):
# https://github.com/yt-dlp/yt-dlp/commit/379a4f161d4ad3e40932dcf5aca6e6fb9715ab28
with FakeYDL({'nocheckcertificate': True}) as ydl:
# method should be auto-detected as POST
r = sanitized_Request(f'https://localhost:{self.https_port}/headers', data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
# test http
r = sanitized_Request(f'http://localhost:{self.http_port}/headers', data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
def test_cookiejar(self):
with FakeYDL() as ydl:
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(
0, 'test', 'ytdlp', None, False, '127.0.0.1', True,
False, '/headers', True, False, None, False, None, None, {}))
data = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/headers')).read()
self.assertIn(b'Cookie: test=ytdlp', data)
def test_no_compression_compat_header(self):
with FakeYDL() as ydl:
data = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/headers',
headers={'Youtubedl-no-compression': True})).read()
self.assertIn(b'Accept-Encoding: identity', data)
self.assertNotIn(b'youtubedl-no-compression', data.lower())
def test_gzip_trailing_garbage(self):
# https://github.com/ytdl-org/youtube-dl/commit/aa3e950764337ef9800c936f4de89b31c00dfcf5
# https://github.com/ytdl-org/youtube-dl/commit/6f2ec15cee79d35dba065677cad9da7491ec6e6f
with FakeYDL() as ydl:
data = ydl.urlopen(sanitized_Request(f'http://localhost:{self.http_port}/trailing_garbage')).read().decode('utf-8')
self.assertEqual(data, '<html><video src="/vid.mp4" /></html>')
@unittest.skipUnless(brotli, 'brotli support is not installed')
def test_brotli(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'br'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'br')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_deflate(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'deflate'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'deflate')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_gzip(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'gzip'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'gzip')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_multiple_encodings(self):
# https://www.rfc-editor.org/rfc/rfc9110.html#section-8.4
with FakeYDL() as ydl:
for pair in ('gzip,deflate', 'deflate, gzip', 'gzip, gzip', 'deflate, deflate'):
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': pair}))
self.assertEqual(res.headers.get('Content-Encoding'), pair)
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_unsupported_encoding(self):
# it should return the raw content
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'unsupported'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'unsupported')
self.assertEqual(res.read(), b'raw')
class TestClientCert(unittest.TestCase):
def setUp(self):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.certdir = os.path.join(TEST_DIR, 'testdata', 'certificate')
cacertfn = os.path.join(self.certdir, 'ca.crt')
self.httpd = http.server.HTTPServer(('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.verify_mode = ssl.CERT_REQUIRED
sslctx.load_verify_locations(cafile=cacertfn)
sslctx.load_cert_chain(certfn, None)
self.httpd.socket = sslctx.wrap_socket(self.httpd.socket, server_side=True)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
def _run_test(self, **params):
ydl = YoutubeDL({
'logger': FakeLogger(),
# Disable client-side validation of the unacceptable self-signed testcert.pem
# The test exercises a server-side check, so it is unaffected
'nocheckcertificate': True,
**params,
})
r = ydl.extract_info(f'https://127.0.0.1:{self.port}/video.html')
self.assertEqual(r['url'], f'https://127.0.0.1:{self.port}/vid.mp4')
def test_certificate_combined_nopass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'clientwithkey.crt'))
def test_certificate_nocombined_nopass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'client.crt'),
client_certificate_key=os.path.join(self.certdir, 'client.key'))
def test_certificate_combined_pass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'clientwithencryptedkey.crt'),
client_certificate_password='foobar')
def test_certificate_nocombined_pass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'client.crt'),
client_certificate_key=os.path.join(self.certdir, 'clientencrypted.key'),
client_certificate_password='foobar')
def _build_proxy_handler(name):
class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
proxy_name = name
def log_message(self, format, *args):
pass
def do_GET(self):
self.send_response(200)
self.send_header('Content-Type', 'text/plain; charset=utf-8')
self.end_headers()
self.wfile.write(f'{self.proxy_name}: {self.path}'.encode())
return HTTPTestRequestHandler
class TestProxy(unittest.TestCase):
def setUp(self):
self.proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('normal'))
self.port = http_server_port(self.proxy)
self.proxy_thread = threading.Thread(target=self.proxy.serve_forever)
self.proxy_thread.daemon = True
self.proxy_thread.start()
self.geo_proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('geo'))
self.geo_port = http_server_port(self.geo_proxy)
self.geo_proxy_thread = threading.Thread(target=self.geo_proxy.serve_forever)
self.geo_proxy_thread.daemon = True
self.geo_proxy_thread.start()
def test_proxy(self):
geo_proxy = f'127.0.0.1:{self.geo_port}'
ydl = YoutubeDL({
'proxy': f'127.0.0.1:{self.port}',
'geo_verification_proxy': geo_proxy,
})
url = 'http://foo.com/bar'
response = ydl.urlopen(url).read().decode()
self.assertEqual(response, f'normal: {url}')
req = urllib.request.Request(url)
req.add_header('Ytdl-request-proxy', geo_proxy)
response = ydl.urlopen(req).read().decode()
self.assertEqual(response, f'geo: {url}')
def test_proxy_with_idn(self):
ydl = YoutubeDL({
'proxy': f'127.0.0.1:{self.port}',
})
url = 'http://中文.tw/'
response = ydl.urlopen(url).read().decode()
# b'xn--fiq228c' is '中文'.encode('idna')
self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')
class TestFileURL(unittest.TestCase):
# See https://github.com/ytdl-org/youtube-dl/issues/8227
def test_file_urls(self):
tf = tempfile.NamedTemporaryFile(delete=False)
tf.write(b'foobar')
tf.close()
url = pathlib.Path(tf.name).as_uri()
with FakeYDL() as ydl:
self.assertRaisesRegex(
urllib.error.URLError, 'file:// URLs are explicitly disabled in yt-dlp for security reasons', ydl.urlopen, url)
with FakeYDL({'enable_file_urls': True}) as ydl:
res = ydl.urlopen(url)
self.assertEqual(res.read(), b'foobar')
res.close()
os.unlink(tf.name)
if __name__ == '__main__':
unittest.main()
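The redirect expectations in this removed file match what test_networking_utils now checks through `get_redirect_method`; as a plain function they amount to (a sketch, not yt-dlp's implementation):

    def redirect_method(method, status):
        if status == 303 and method != 'HEAD':
            return 'GET'  # 303 forces GET for everything except HEAD
        if status in (301, 302) and method == 'POST':
            return 'GET'  # historical POST-to-GET demotion; PUT etc. pass through
        return method  # 307/308 always preserve the method

    assert redirect_method('PUT', 303) == 'GET'
    assert redirect_method('PUT', 302) == 'PUT'
    assert redirect_method('POST', 308) == 'POST'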

test/test_http_proxy.py Normal file

@@ -0,0 +1,380 @@
import abc
import base64
import contextlib
import functools
import json
import os
import random
import ssl
import threading
from http.server import BaseHTTPRequestHandler
from socketserver import ThreadingTCPServer
import pytest
from test.helper import http_server_port, verify_address_availability
from test.test_networking import TEST_DIR
from test.test_socks import IPv6ThreadingTCPServer
from yt_dlp.dependencies import urllib3
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import HTTPError, ProxyError, SSLError
class HTTPProxyAuthMixin:
def proxy_auth_error(self):
self.send_response(407)
self.send_header('Proxy-Authenticate', 'Basic realm="test http proxy"')
self.end_headers()
return False
def do_proxy_auth(self, username, password):
if username is None and password is None:
return True
proxy_auth_header = self.headers.get('Proxy-Authorization', None)
if proxy_auth_header is None:
return self.proxy_auth_error()
if not proxy_auth_header.startswith('Basic '):
return self.proxy_auth_error()
auth = proxy_auth_header[6:]
try:
auth_username, auth_password = base64.b64decode(auth).decode().split(':', 1)
except Exception:
return self.proxy_auth_error()
if auth_username != (username or '') or auth_password != (password or ''):
return self.proxy_auth_error()
return True
class HTTPProxyHandler(BaseHTTPRequestHandler, HTTPProxyAuthMixin):
def __init__(self, *args, proxy_info=None, username=None, password=None, request_handler=None, **kwargs):
self.username = username
self.password = password
self.proxy_info = proxy_info
super().__init__(*args, **kwargs)
def do_GET(self):
if not self.do_proxy_auth(self.username, self.password):
self.server.close_request(self.request)
return
if self.path.endswith('/proxy_info'):
payload = json.dumps(self.proxy_info or {
'client_address': self.client_address,
'connect': False,
'connect_host': None,
'connect_port': None,
'headers': dict(self.headers),
'path': self.path,
'proxy': ':'.join(str(y) for y in self.connection.getsockname()),
})
self.send_response(200)
self.send_header('Content-Type', 'application/json; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload.encode())
else:
self.send_response(404)
self.end_headers()
self.server.close_request(self.request)
if urllib3:
import urllib3.util.ssltransport
class SSLTransport(urllib3.util.ssltransport.SSLTransport):
"""
Modified version of urllib3's SSLTransport that adds server-side SSL support.
This allows us to chain multiple TLS connections.
"""
def __init__(self, socket, ssl_context, server_hostname=None, suppress_ragged_eofs=True, server_side=False):
self.incoming = ssl.MemoryBIO()
self.outgoing = ssl.MemoryBIO()
self.suppress_ragged_eofs = suppress_ragged_eofs
self.socket = socket
self.sslobj = ssl_context.wrap_bio(
self.incoming,
self.outgoing,
server_hostname=server_hostname,
server_side=server_side,
)
self._ssl_io_loop(self.sslobj.do_handshake)
@property
def _io_refs(self):
return self.socket._io_refs
@_io_refs.setter
def _io_refs(self, value):
self.socket._io_refs = value
def shutdown(self, *args, **kwargs):
self.socket.shutdown(*args, **kwargs)
else:
SSLTransport = None
class HTTPSProxyHandler(HTTPProxyHandler):
def __init__(self, request, *args, **kwargs):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
if isinstance(request, ssl.SSLSocket):
request = SSLTransport(request, ssl_context=sslctx, server_side=True)
else:
request = sslctx.wrap_socket(request, server_side=True)
super().__init__(request, *args, **kwargs)
class HTTPConnectProxyHandler(BaseHTTPRequestHandler, HTTPProxyAuthMixin):
protocol_version = 'HTTP/1.1'
default_request_version = 'HTTP/1.1'
def __init__(self, *args, username=None, password=None, request_handler=None, **kwargs):
self.username = username
self.password = password
self.request_handler = request_handler
super().__init__(*args, **kwargs)
def do_CONNECT(self):
if not self.do_proxy_auth(self.username, self.password):
self.server.close_request(self.request)
return
self.send_response(200)
self.end_headers()
proxy_info = {
'client_address': self.client_address,
'connect': True,
'connect_host': self.path.split(':')[0],
'connect_port': int(self.path.split(':')[1]),
'headers': dict(self.headers),
'path': self.path,
'proxy': ':'.join(str(y) for y in self.connection.getsockname()),
}
self.request_handler(self.request, self.client_address, self.server, proxy_info=proxy_info)
self.server.close_request(self.request)
class HTTPSConnectProxyHandler(HTTPConnectProxyHandler):
def __init__(self, request, *args, **kwargs):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
request = sslctx.wrap_socket(request, server_side=True)
self._original_request = request
super().__init__(request, *args, **kwargs)
def do_CONNECT(self):
super().do_CONNECT()
self.server.close_request(self._original_request)
@contextlib.contextmanager
def proxy_server(proxy_server_class, request_handler, bind_ip=None, **proxy_server_kwargs):
server = server_thread = None
try:
bind_address = bind_ip or '127.0.0.1'
server_type = ThreadingTCPServer if '.' in bind_address else IPv6ThreadingTCPServer
server = server_type(
(bind_address, 0), functools.partial(proxy_server_class, request_handler=request_handler, **proxy_server_kwargs))
server_port = http_server_port(server)
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
if '.' not in bind_address:
yield f'[{bind_address}]:{server_port}'
else:
yield f'{bind_address}:{server_port}'
finally:
server.shutdown()
server.server_close()
server_thread.join(2.0)
class HTTPProxyTestContext(abc.ABC):
REQUEST_HANDLER_CLASS = None
REQUEST_PROTO = None
def http_server(self, server_class, *args, **kwargs):
return proxy_server(server_class, self.REQUEST_HANDLER_CLASS, *args, **kwargs)
@abc.abstractmethod
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs) -> dict:
"""return a dict of proxy_info"""
class HTTPProxyHTTPTestContext(HTTPProxyTestContext):
# Standard HTTP Proxy for http requests
REQUEST_HANDLER_CLASS = HTTPProxyHandler
REQUEST_PROTO = 'http'
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'http://{target_domain or "127.0.0.1"}:{target_port or "40000"}/proxy_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
class HTTPProxyHTTPSTestContext(HTTPProxyTestContext):
# HTTP Connect proxy, for https requests
REQUEST_HANDLER_CLASS = HTTPSProxyHandler
REQUEST_PROTO = 'https'
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'https://{target_domain or "127.0.0.1"}:{target_port or "40000"}/proxy_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
CTX_MAP = {
'http': HTTPProxyHTTPTestContext,
'https': HTTPProxyHTTPSTestContext,
}
@pytest.fixture(scope='module')
def ctx(request):
return CTX_MAP[request.param]()
@pytest.mark.parametrize(
'handler', ['Urllib', 'Requests', 'CurlCFFI'], indirect=True)
@pytest.mark.parametrize('ctx', ['http'], indirect=True) # pure http proxy can only support http
class TestHTTPProxy:
def test_http_no_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is False
assert 'Proxy-Authorization' not in proxy_info['headers']
def test_http_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler, username='test', password='test') as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']
def test_http_bad_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler, username='test', password='test') as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://test:bad@{server_address}'}) as rh:
with pytest.raises(HTTPError) as exc_info:
ctx.proxy_info_request(rh)
assert exc_info.value.response.status == 407
exc_info.value.response.close()
def test_http_source_address(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'},
source_address=source_address) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['client_address'][0] == source_address
@pytest.mark.skip_handler('Urllib', 'urllib does not support https proxies')
def test_https(self, handler, ctx):
with ctx.http_server(HTTPSProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is False
assert 'Proxy-Authorization' not in proxy_info['headers']
@pytest.mark.skip_handler('Urllib', 'urllib does not support https proxies')
def test_https_verify_failed(self, handler, ctx):
with ctx.http_server(HTTPSProxyHandler) as server_address:
with handler(verify=True, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
# Accept SSLError, as it may not be feasible to tell whether it is a proxy or a request error.
# Note: if the request protocol also does SSL verification, the error may come from the request itself.
# Until we can support passing custom cacerts to handlers, we cannot properly test this for all cases.
with pytest.raises((ProxyError, SSLError)):
ctx.proxy_info_request(rh)
def test_http_with_idn(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh, target_domain='中文.tw')
assert proxy_info['proxy'] == server_address
assert proxy_info['path'].startswith('http://xn--fiq228c.tw')
assert proxy_info['headers']['Host'].split(':', 1)[0] == 'xn--fiq228c.tw'
@pytest.mark.parametrize(
'handler,ctx', [
('Requests', 'https'),
('CurlCFFI', 'https'),
], indirect=True)
class TestHTTPConnectProxy:
def test_http_connect_no_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is True
assert 'Proxy-Authorization' not in proxy_info['headers']
def test_http_connect_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']
@pytest.mark.skip_handler(
'Requests',
'bug in urllib3 causes unclosed socket: https://github.com/urllib3/urllib3/issues/3374',
)
def test_http_connect_bad_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://test:bad@{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.proxy_info_request(rh)
def test_http_connect_source_address(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'},
source_address=source_address,
verify=False) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['client_address'][0] == source_address
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_proxy(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is True
assert 'Proxy-Authorization' not in proxy_info['headers']
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_verify_failed(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler) as server_address:
with handler(verify=True, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
# Accept SSLError, as it may not be feasible to tell whether it is a proxy or a request error.
# Note: if the request protocol also does SSL verification, the error may come from the request itself.
# Until we can support passing custom cacerts to handlers, we cannot properly test this for all cases.
with pytest.raises((ProxyError, SSLError)):
ctx.proxy_info_request(rh)
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_proxy_auth(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']
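The client side of what `do_CONNECT` services above can be exercised with the stdlib alone; a sketch with placeholder addresses:

    import http.client

    # Tunnel an HTTPS request through an HTTP CONNECT proxy (addresses are placeholders)
    conn = http.client.HTTPSConnection('proxy.example', 3128, timeout=10)
    conn.set_tunnel('target.example', 443)  # issues 'CONNECT target.example:443' first
    conn.request('GET', '/proxy_info')
    print(conn.getresponse().status)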


@@ -29,11 +29,11 @@ def error(self, msg):
@is_download_test
class TestIqiyiSDKInterpreter(unittest.TestCase):
def test_iqiyi_sdk_interpreter(self):
'''
"""
Test the functionality of IqiyiSDKInterpreter by trying to log in
If `sign` is incorrect, the /validate call throws an HTTP 556 error
'''
"""
logger = WarningLogger()
ie = IqiyiIE(FakeYDL({'logger': logger}))
ie._perform_login('foo', 'bar')


@@ -92,6 +92,7 @@ def test_operators(self):
self._test('function f(){return 0 && 1 || 2;}', 2)
self._test('function f(){return 0 ?? 42;}', 0)
self._test('function f(){return "life, the universe and everything" < 42;}', False)
self._test('function f(){return 0 - 7 * - 6;}', 42)
def test_array_access(self):
self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7])
@@ -375,6 +376,33 @@ def test_packed(self):
jsi = JSInterpreter('''function f(p,a,c,k,e,d){while(c--)if(k[c])p=p.replace(new RegExp('\\b'+c.toString(a)+'\\b','g'),k[c]);return p}''')
self.assertEqual(jsi.call_function('f', '''h 7=g("1j");7.7h({7g:[{33:"w://7f-7e-7d-7c.v.7b/7a/79/78/77/76.74?t=73&s=2s&e=72&f=2t&71=70.0.0.1&6z=6y&6x=6w"}],6v:"w://32.v.u/6u.31",16:"r%",15:"r%",6t:"6s",6r:"",6q:"l",6p:"l",6o:"6n",6m:\'6l\',6k:"6j",9:[{33:"/2u?b=6i&n=50&6h=w://32.v.u/6g.31",6f:"6e"}],1y:{6d:1,6c:\'#6b\',6a:\'#69\',68:"67",66:30,65:r,},"64":{63:"%62 2m%m%61%5z%5y%5x.u%5w%5v%5u.2y%22 2k%m%1o%22 5t%m%1o%22 5s%m%1o%22 2j%m%5r%22 16%m%5q%22 15%m%5p%22 5o%2z%5n%5m%2z",5l:"w://v.u/d/1k/5k.2y",5j:[]},\'5i\':{"5h":"5g"},5f:"5e",5d:"w://v.u",5c:{},5b:l,1x:[0.25,0.50,0.75,1,1.25,1.5,2]});h 1m,1n,5a;h 59=0,58=0;h 7=g("1j");h 2x=0,57=0,56=0;$.55({54:{\'53-52\':\'2i-51\'}});7.j(\'4z\',6(x){c(5>0&&x.1l>=5&&1n!=1){1n=1;$(\'q.4y\').4x(\'4w\')}});7.j(\'13\',6(x){2x=x.1l});7.j(\'2g\',6(x){2w(x)});7.j(\'4v\',6(){$(\'q.2v\').4u()});6 2w(x){$(\'q.2v\').4t();c(1m)19;1m=1;17=0;c(4s.4r===l){17=1}$.4q(\'/2u?b=4p&2l=1k&4o=2t-4n-4m-2s-4l&4k=&4j=&4i=&17=\'+17,6(2r){$(\'#4h\').4g(2r)});$(\'.3-8-4f-4e:4d("4c")\').2h(6(e){2q();g().4b(0);g().4a(l)});6 2q(){h $14=$("<q />").2p({1l:"49",16:"r%",15:"r%",48:0,2n:0,2o:47,46:"45(10%, 10%, 10%, 0.4)","44-43":"42"});$("<41 />").2p({16:"60%",15:"60%",2o:40,"3z-2n":"3y"}).3x({\'2m\':\'/?b=3w&2l=1k\',\'2k\':\'0\',\'2j\':\'2i\'}).2f($14);$14.2h(6(){$(3v).3u();g().2g()});$14.2f($(\'#1j\'))}g().13(0);}6 3t(){h 9=7.1b(2e);2d.2c(9);c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==2e){2d.2c(\'!!=\'+i);7.1p(i)}}}}7.j(\'3s\',6(){g().1h("/2a/3r.29","3q 10 28",6(){g().13(g().27()+10)},"2b");$("q[26=2b]").23().21(\'.3-20-1z\');g().1h("/2a/3p.29","3o 10 28",6(){h 12=g().27()-10;c(12<0)12=0;g().13(12)},"24");$("q[26=24]").23().21(\'.3-20-1z\');});6 1i(){}7.j(\'3n\',6(){1i()});7.j(\'3m\',6(){1i()});7.j("k",6(y){h 9=7.1b();c(9.n<2)19;$(\'.3-8-3l-3k\').3j(6(){$(\'#3-8-a-k\').1e(\'3-8-a-z\');$(\'.3-a-k\').p(\'o-1f\',\'11\')});7.1h("/3i/3h.3g","3f 3e",6(){$(\'.3-1w\').3d(\'3-8-1v\');$(\'.3-8-1y, .3-8-1x\').p(\'o-1g\',\'11\');c($(\'.3-1w\').3c(\'3-8-1v\')){$(\'.3-a-k\').p(\'o-1g\',\'l\');$(\'.3-a-k\').p(\'o-1f\',\'l\');$(\'.3-8-a\').1e(\'3-8-a-z\');$(\'.3-8-a:1u\').3b(\'3-8-a-z\')}3a{$(\'.3-a-k\').p(\'o-1g\',\'11\');$(\'.3-a-k\').p(\'o-1f\',\'11\');$(\'.3-8-a:1u\').1e(\'3-8-a-z\')}},"39");7.j("38",6(y){1d.37(\'1c\',y.9[y.36].1a)});c(1d.1t(\'1c\')){35("1s(1d.1t(\'1c\'));",34)}});h 18;6 1s(1q){h 
9=7.1b();c(9.n>1){1r(i=0;i<9.n;i++){c(9[i].1a==1q){c(i==18){19}18=i;7.1p(i)}}}}',36,270,'|||jw|||function|player|settings|tracks|submenu||if||||jwplayer|var||on|audioTracks|true|3D|length|aria|attr|div|100|||sx|filemoon|https||event|active||false|tt|seek|dd|height|width|adb|current_audio|return|name|getAudioTracks|default_audio|localStorage|removeClass|expanded|checked|addButton|callMeMaybe|vplayer|0fxcyc2ajhp1|position|vvplay|vvad|220|setCurrentAudioTrack|audio_name|for|audio_set|getItem|last|open|controls|playbackRates|captions|rewind|icon|insertAfter||detach|ff00||button|getPosition|sec|png|player8|ff11|log|console|track_name|appendTo|play|click|no|scrolling|frameborder|file_code|src|top|zIndex|css|showCCform|data|1662367683|383371|dl|video_ad|doPlay|prevt|mp4|3E||jpg|thumbs|file|300|setTimeout|currentTrack|setItem|audioTrackChanged|dualSound|else|addClass|hasClass|toggleClass|Track|Audio|svg|dualy|images|mousedown|buttons|topbar|playAttemptFailed|beforePlay|Rewind|fr|Forward|ff|ready|set_audio_track|remove|this|upload_srt|prop|50px|margin|1000001|iframe|center|align|text|rgba|background|1000000|left|absolute|pause|setCurrentCaptions|Upload|contains|item|content|html|fviews|referer|prem|embed|3e57249ef633e0d03bf76ceb8d8a4b65|216|83|hash|view|get|TokenZir|window|hide|show|complete|slow|fadeIn|video_ad_fadein|time||cache|Cache|Content|headers|ajaxSetup|v2done|tott|vastdone2|vastdone1|vvbefore|playbackRateControls|cast|aboutlink|FileMoon|abouttext|UHD|1870|qualityLabels|sites|GNOME_POWER|link|2Fiframe|3C|allowfullscreen|22360|22640|22no|marginheight|marginwidth|2FGNOME_POWER|2F0fxcyc2ajhp1|2Fe|2Ffilemoon|2F|3A||22https|3Ciframe|code|sharing|fontOpacity|backgroundOpacity|Tahoma|fontFamily|303030|backgroundColor|FFFFFF|color|userFontScale|thumbnails|kind|0fxcyc2ajhp10000|url|get_slides|start|startparam|none|preload|html5|primary|hlshtml|androidhls|duration|uniform|stretching|0fxcyc2ajhp1_xt|image|2048|sp|6871|asn|127|srv|43200|_g3XlBcu2lmD9oDexD2NLWSmah2Nu3XcDrl93m9PwXY|m3u8||master|0fxcyc2ajhp1_x|00076|01|hls2|to|s01|delivery|storage|moon|sources|setup'''.split('|')))
def test_join(self):
test_input = list('test')
tests = [
'function f(a, b){return a.join(b)}',
'function f(a, b){return Array.prototype.join.call(a, b)}',
'function f(a, b){return Array.prototype.join.apply(a, [b])}',
]
for test in tests:
jsi = JSInterpreter(test)
self._test(jsi, 'test', args=[test_input, ''])
self._test(jsi, 't-e-s-t', args=[test_input, '-'])
self._test(jsi, '', args=[[], '-'])
def test_split(self):
test_result = list('test')
tests = [
'function f(a, b){return a.split(b)}',
'function f(a, b){return String.prototype.split.call(a, b)}',
'function f(a, b){return String.prototype.split.apply(a, [b])}',
]
for test in tests:
jsi = JSInterpreter(test)
self._test(jsi, test_result, args=['test', ''])
self._test(jsi, test_result, args=['t-e-s-t', '-'])
self._test(jsi, [''], args=['', '-'])
self._test(jsi, [], args=['', ''])
if __name__ == '__main__':
unittest.main()
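Both new test groups drive the interpreter through the same entry points; minimal usage:

    from yt_dlp.jsinterp import JSInterpreter

    jsi = JSInterpreter('function f(a, b){return String.prototype.split.apply(a, [b])}')
    print(jsi.call_function('f', 't-e-s-t', '-'))  # ['t', 'e', 's', 't']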


@@ -21,7 +21,7 @@ def test_netrc_present(self):
continue
self.assertTrue(
ie._NETRC_MACHINE,
'Extractor %s supports login, but is missing a _NETRC_MACHINE property' % ie.IE_NAME)
f'Extractor {ie.IE_NAME} supports login, but is missing a _NETRC_MACHINE property')
if __name__ == '__main__':

test/test_networking.py Normal file

File diff suppressed because it is too large

@@ -0,0 +1,208 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import pytest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import io
import random
import ssl
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.dependencies import certifi
from yt_dlp.networking import Response
from yt_dlp.networking._helper import (
InstanceStoreMixin,
add_accept_encoding_header,
get_redirect_method,
make_socks_proxy_opts,
select_proxy,
ssl_load_certs,
)
from yt_dlp.networking.exceptions import (
HTTPError,
IncompleteRead,
)
from yt_dlp.socks import ProxyType
from yt_dlp.utils.networking import HTTPHeaderDict
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
class TestNetworkingUtils:
def test_select_proxy(self):
proxies = {
'all': 'socks5://example.com',
'http': 'http://example.com:1080',
'no': 'bypass.example.com,yt-dl.org',
}
assert select_proxy('https://example.com', proxies) == proxies['all']
assert select_proxy('http://example.com', proxies) == proxies['http']
assert select_proxy('http://bypass.example.com', proxies) is None
assert select_proxy('https://yt-dl.org', proxies) is None
@pytest.mark.parametrize('socks_proxy,expected', [
('socks5h://example.com', {
'proxytype': ProxyType.SOCKS5,
'addr': 'example.com',
'port': 1080,
'rdns': True,
'username': None,
'password': None,
}),
('socks5://user:@example.com:5555', {
'proxytype': ProxyType.SOCKS5,
'addr': 'example.com',
'port': 5555,
'rdns': False,
'username': 'user',
'password': '',
}),
('socks4://u%40ser:pa%20ss@127.0.0.1:1080', {
'proxytype': ProxyType.SOCKS4,
'addr': '127.0.0.1',
'port': 1080,
'rdns': False,
'username': 'u@ser',
'password': 'pa ss',
}),
('socks4a://:pa%20ss@127.0.0.1', {
'proxytype': ProxyType.SOCKS4A,
'addr': '127.0.0.1',
'port': 1080,
'rdns': True,
'username': '',
'password': 'pa ss',
}),
])
def test_make_socks_proxy_opts(self, socks_proxy, expected):
assert make_socks_proxy_opts(socks_proxy) == expected
def test_make_socks_proxy_unknown(self):
with pytest.raises(ValueError, match='Unknown SOCKS proxy version: socks'):
make_socks_proxy_opts('socks://127.0.0.1')
@pytest.mark.skipif(not certifi, reason='certifi is not installed')
def test_load_certifi(self):
context_certifi = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context_certifi.load_verify_locations(cafile=certifi.where())
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_load_certs(context, use_certifi=True)
assert context.get_ca_certs() == context_certifi.get_ca_certs()
context_default = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context_default.load_default_certs()
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_load_certs(context, use_certifi=False)
assert context.get_ca_certs() == context_default.get_ca_certs()
if context_default.get_ca_certs() == context_certifi.get_ca_certs():
pytest.skip('System uses certifi as default. The test is not valid')
@pytest.mark.parametrize('method,status,expected', [
('GET', 303, 'GET'),
('HEAD', 303, 'HEAD'),
('PUT', 303, 'GET'),
('POST', 301, 'GET'),
('HEAD', 301, 'HEAD'),
('POST', 302, 'GET'),
('HEAD', 302, 'HEAD'),
('PUT', 302, 'PUT'),
('POST', 308, 'POST'),
('POST', 307, 'POST'),
('HEAD', 308, 'HEAD'),
('HEAD', 307, 'HEAD'),
])
def test_get_redirect_method(self, method, status, expected):
assert get_redirect_method(method, status) == expected
@pytest.mark.parametrize('headers,supported_encodings,expected', [
({'Accept-Encoding': 'br'}, ['gzip', 'br'], {'Accept-Encoding': 'br'}),
({}, ['gzip', 'br'], {'Accept-Encoding': 'gzip, br'}),
({'Content-type': 'application/json'}, [], {'Content-type': 'application/json', 'Accept-Encoding': 'identity'}),
])
def test_add_accept_encoding_header(self, headers, supported_encodings, expected):
headers = HTTPHeaderDict(headers)
add_accept_encoding_header(headers, supported_encodings)
assert headers == HTTPHeaderDict(expected)
class TestInstanceStoreMixin:
class FakeInstanceStoreMixin(InstanceStoreMixin):
def _create_instance(self, **kwargs):
return random.randint(0, 1000000)
def _close_instance(self, instance):
pass
def test_mixin(self):
mixin = self.FakeInstanceStoreMixin()
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}}) == mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}})
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'e', 4}}) != mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}})
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}}) != mixin._get_instance(d={'a': 1, 'b': 2, 'g': {'d', 4}})
assert mixin._get_instance(d={'a': 1}, e=[1, 2, 3]) == mixin._get_instance(d={'a': 1}, e=[1, 2, 3])
assert mixin._get_instance(d={'a': 1}, e=[1, 2, 3]) != mixin._get_instance(d={'a': 1}, e=[1, 2, 3, 4])
cookiejar = YoutubeDLCookieJar()
assert mixin._get_instance(b=[1, 2], c=cookiejar) == mixin._get_instance(b=[1, 2], c=cookiejar)
assert mixin._get_instance(b=[1, 2], c=cookiejar) != mixin._get_instance(b=[1, 2], c=YoutubeDLCookieJar())
# Different order
assert mixin._get_instance(c=cookiejar, b=[1, 2]) == mixin._get_instance(b=[1, 2], c=cookiejar)
m = mixin._get_instance(t=1234)
assert mixin._get_instance(t=1234) == m
mixin._clear_instances()
assert mixin._get_instance(t=1234) != m
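For orientation, the contract these assertions exercise, as a minimal sketch (`InstanceStoreMixin` is internal API; the subclass here is hypothetical):

from yt_dlp.networking._helper import InstanceStoreMixin

class ConnectionPool(InstanceStoreMixin):  # hypothetical subclass
    def _create_instance(self, **kwargs):
        return object()  # created once per distinct set of kwargs

    def _close_instance(self, instance):
        pass  # invoked when _clear_instances() drops cached instances

pool = ConnectionPool()
# equal kwargs hit the cache and yield the same instance
assert pool._get_instance(host='a') is pool._get_instance(host='a')
assert pool._get_instance(host='a') is not pool._get_instance(host='b')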
class TestNetworkingExceptions:
@staticmethod
def create_response(status):
return Response(fp=io.BytesIO(b'test'), url='http://example.com', headers={'tesT': 'test'}, status=status)
def test_http_error(self):
response = self.create_response(403)
error = HTTPError(response)
assert error.status == 403
assert str(error) == error.msg == 'HTTP Error 403: Forbidden'
assert error.reason == response.reason
assert error.response is response
data = error.response.read()
assert data == b'test'
assert repr(error) == '<HTTPError 403: Forbidden>'
def test_redirect_http_error(self):
response = self.create_response(301)
error = HTTPError(response, redirect_loop=True)
assert str(error) == error.msg == 'HTTP Error 301: Moved Permanently (redirect loop detected)'
assert error.reason == 'Moved Permanently'
def test_incomplete_read_error(self):
error = IncompleteRead(4, 3, cause='test')
assert isinstance(error, IncompleteRead)
assert repr(error) == '<IncompleteRead: 4 bytes read, 3 more expected>'
assert str(error) == error.msg == '4 bytes read, 3 more expected'
assert error.partial == 4
assert error.expected == 3
assert error.cause == 'test'
error = IncompleteRead(3)
assert repr(error) == '<IncompleteRead: 3 bytes read>'
assert str(error) == '3 bytes read'
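A sketch of how calling code might consume these exception attributes (it relies only on the fields asserted above; the helper name is hypothetical):

from yt_dlp.networking.exceptions import HTTPError, IncompleteRead

def describe_network_error(err):  # hypothetical helper
    if isinstance(err, HTTPError):
        return f'HTTP {err.status}: {err.reason}'
    if isinstance(err, IncompleteRead):
        return f'{err.partial} bytes read, {err.expected} more expected'
    return str(err)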


@@ -27,7 +27,7 @@ def test_default_overwrites(self):
[
sys.executable, 'yt_dlp/__main__.py',
'-o', 'test.webm',
- 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
+ 'https://www.youtube.com/watch?v=jNQXAC9IVRw',
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' in sout)
@@ -39,7 +39,7 @@ def test_yes_overwrites(self):
[
sys.executable, 'yt_dlp/__main__.py', '--yes-overwrites',
'-o', 'test.webm',
- 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
+ 'https://www.youtube.com/watch?v=jNQXAC9IVRw',
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' not in sout)


@@ -31,7 +31,7 @@ def test_extractor_classes(self):
# don't load modules with underscore prefix
self.assertFalse(
- f'{PACKAGE_NAME}.extractor._ignore' in sys.modules.keys(),
+ f'{PACKAGE_NAME}.extractor._ignore' in sys.modules,
'loaded module beginning with underscore')
self.assertNotIn('IgnorePluginIE', plugins_ie.keys())


@@ -59,7 +59,7 @@ def hook_two(self, filename):
def hook_three(self, filename):
self.files.append(filename)
- raise Exception('Test exception for \'%s\'' % filename)
+ raise Exception(f'Test exception for \'{filename}\'')
def tearDown(self):
for f in self.files:


@@ -9,7 +9,7 @@
from yt_dlp import YoutubeDL
- from yt_dlp.compat import compat_shlex_quote
+ from yt_dlp.utils import shell_quote
from yt_dlp.postprocessor import (
ExecPP,
FFmpegThumbnailsConvertorPP,
@@ -65,7 +65,7 @@ class TestExec(unittest.TestCase):
def test_parse_cmd(self):
pp = ExecPP(YoutubeDL(), '')
info = {'filepath': 'file name'}
- cmd = 'echo %s' % compat_shlex_quote(info['filepath'])
+ cmd = 'echo {}'.format(shell_quote(info['filepath']))
self.assertEqual(pp.parse_cmd('echo', info), cmd)
self.assertEqual(pp.parse_cmd('echo {}', info), cmd)
@@ -125,7 +125,8 @@ def test_remove_marked_arrange_sponsors_CanGetThroughUnaltered(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'preview'),
self._sponsor_chapter(50, 60, 'filler')]
@@ -136,7 +137,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_SponsorBlockChapters(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'chapter', title='sb c1'),
self._sponsor_chapter(15, 16, 'chapter', title='sb c2'),
self._sponsor_chapter(30, 40, 'preview'),
@@ -149,10 +151,14 @@ def test_remove_marked_arrange_sponsors_SponsorBlockChapters(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self):
- chapters = self._chapters([120], ['c']) + [
- self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(20, 40, 'selfpromo'),
- self._sponsor_chapter(50, 70, 'sponsor'), self._sponsor_chapter(60, 85, 'selfpromo'),
- self._sponsor_chapter(90, 120, 'selfpromo'), self._sponsor_chapter(100, 110, 'sponsor')]
+ chapters = [
+ *self._chapters([120], ['c']),
+ self._sponsor_chapter(10, 45, 'sponsor'),
+ self._sponsor_chapter(20, 40, 'selfpromo'),
+ self._sponsor_chapter(50, 70, 'sponsor'),
+ self._sponsor_chapter(60, 85, 'selfpromo'),
+ self._sponsor_chapter(90, 120, 'selfpromo'),
+ self._sponsor_chapter(100, 110, 'sponsor')]
expected = self._chapters(
[10, 20, 40, 45, 50, 60, 70, 85, 90, 100, 110, 120],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
@@ -172,7 +178,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithCuts(self):
chapters, self._chapters([40], ['c']), cuts)
def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'selfpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')]
@@ -185,24 +192,29 @@ def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self):
def test_remove_marked_arrange_sponsors_ChapterWithSponsorCutInTheMiddle(self):
cuts = [self._sponsor_chapter(20, 30, 'selfpromo', remove=True),
self._chapter(40, 50, remove=True)]
- chapters = self._chapters([70], ['c']) + [self._sponsor_chapter(10, 60, 'sponsor')] + cuts
+ chapters = [
+ *self._chapters([70], ['c']),
+ self._sponsor_chapter(10, 60, 'sponsor'),
+ *cuts]
expected = self._chapters(
[10, 40, 50], ['c', '[SponsorBlock]: Sponsor', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self):
cuts = [self._sponsor_chapter(20, 50, 'selfpromo', remove=True)]
- chapters = self._chapters([60], ['c']) + [
+ chapters = [
+ *self._chapters([60], ['c']),
self._sponsor_chapter(10, 20, 'intro'),
self._sponsor_chapter(30, 40, 'sponsor'),
self._sponsor_chapter(50, 60, 'outro'),
- ] + cuts
+ *cuts]
expected = self._chapters(
[10, 20, 30], ['c', '[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'selfpromo'),
self._sponsor_chapter(30, 40, 'interaction')]
@@ -213,7 +225,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'interaction', remove=True),
self._chapter(30, 40, remove=True),
@@ -226,7 +239,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self):
chapters, expected, [self._chapter(20, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'selfpromo'),
self._sponsor_chapter(40, 60, 'interaction')]
@@ -238,7 +252,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor', remove=True),
self._sponsor_chapter(20, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 60, 'interaction', remove=True)]
@@ -246,7 +261,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self):
chapters, self._chapters([20], ['c']), [self._chapter(10, 60, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(self):
- chapters = self._chapters([170], ['c']) + [
+ chapters = [
+ *self._chapters([170], ['c']),
self._sponsor_chapter(0, 30, 'intro'),
self._sponsor_chapter(20, 50, 'sponsor'),
self._sponsor_chapter(40, 60, 'selfpromo'),
@@ -267,7 +283,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(sel
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self):
- chapters = self._chapters([170], ['c']) + [
+ chapters = [
+ *self._chapters([170], ['c']),
self._chapter(0, 30, remove=True),
self._sponsor_chapter(20, 50, 'sponsor', remove=True),
self._chapter(40, 60, remove=True),
@@ -284,7 +301,8 @@ def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self):
chapters, self._chapters([20], ['c']), expected_cuts)
def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterCut(self):
- chapters = self._chapters([60], ['c']) + [
+ chapters = [
+ *self._chapters([60], ['c']),
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(10, 40, 'intro'),
self._sponsor_chapter(30, 50, 'interaction'),
@@ -297,7 +315,8 @@ def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterC
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
@@ -310,7 +329,8 @@ def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self):
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self):
- chapters = self._chapters([70], ['c']) + [
+ chapters = [
+ *self._chapters([70], ['c']),
self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(20, 60, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True)]
@@ -321,7 +341,8 @@ def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self):
chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndCuts(self):
- chapters = self._chapters([200], ['c']) + [
+ chapters = [
+ *self._chapters([200], ['c']),
self._sponsor_chapter(10, 40, 'sponsor'),
self._sponsor_chapter(10, 30, 'intro'),
self._chapter(20, 30, remove=True),
@@ -347,8 +368,9 @@ def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndC
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, expected_cuts)
def test_remove_marked_arrange_sponsors_SponsorOverlapsMultipleChapters(self):
- chapters = (self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5'])
- + [self._sponsor_chapter(10, 90, 'sponsor')])
+ chapters = [
+ *self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5']),
+ self._sponsor_chapter(10, 90, 'sponsor')]
expected = self._chapters([10, 90, 100], ['c1', '[SponsorBlock]: Sponsor', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -359,9 +381,10 @@ def test_remove_marked_arrange_sponsors_CutOverlapsMultipleChapters(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsWithinSomeChaptersAndOverlappingOthers(self):
- chapters = (self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4'])
- + [self._sponsor_chapter(20, 30, 'sponsor'),
- self._sponsor_chapter(50, 70, 'selfpromo')])
+ chapters = [
+ *self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4']),
+ self._sponsor_chapter(20, 30, 'sponsor'),
+ self._sponsor_chapter(50, 70, 'selfpromo')]
expected = self._chapters([10, 20, 30, 40, 50, 70, 80],
['c1', 'c2', '[SponsorBlock]: Sponsor', 'c2', 'c3',
'[SponsorBlock]: Unpaid/Self Promotion', 'c4'])
@@ -374,8 +397,9 @@ def test_remove_marked_arrange_sponsors_CutsWithinSomeChaptersAndOverlappingOthe
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChaptersAfterLastSponsor(self):
- chapters = (self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4'])
- + [self._sponsor_chapter(10, 30, 'music_offtopic')])
+ chapters = [
+ *self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4']),
+ self._sponsor_chapter(10, 30, 'music_offtopic')]
expected = self._chapters(
[10, 30, 40, 50, 60],
['c1', '[SponsorBlock]: Non-Music Section', 'c2', 'c3', 'c4'])
@@ -388,8 +412,9 @@ def test_remove_marked_arrange_sponsors_ChaptersAfterLastCut(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorStartsAtChapterStart(self):
- chapters = (self._chapters([10, 20, 40], ['c1', 'c2', 'c3'])
- + [self._sponsor_chapter(20, 30, 'sponsor')])
+ chapters = [
+ *self._chapters([10, 20, 40], ['c1', 'c2', 'c3']),
+ self._sponsor_chapter(20, 30, 'sponsor')]
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -400,8 +425,9 @@ def test_remove_marked_arrange_sponsors_CutStartsAtChapterStart(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorEndsAtChapterEnd(self):
- chapters = (self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
- + [self._sponsor_chapter(20, 30, 'sponsor')])
+ chapters = [
+ *self._chapters([10, 30, 40], ['c1', 'c2', 'c3']),
+ self._sponsor_chapter(20, 30, 'sponsor')]
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -412,8 +438,9 @@ def test_remove_marked_arrange_sponsors_CutEndsAtChapterEnd(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorCoincidesWithChapters(self):
- chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
- + [self._sponsor_chapter(10, 30, 'sponsor')])
+ chapters = [
+ *self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']),
+ self._sponsor_chapter(10, 30, 'sponsor')]
expected = self._chapters([10, 30, 40], ['c1', '[SponsorBlock]: Sponsor', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -424,8 +451,9 @@ def test_remove_marked_arrange_sponsors_CutCoincidesWithChapters(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsAtVideoBoundaries(self):
- chapters = (self._chapters([20, 40, 60], ['c1', 'c2', 'c3'])
- + [self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')])
+ chapters = [
+ *self._chapters([20, 40, 60], ['c1', 'c2', 'c3']),
+ self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')]
expected = self._chapters(
[10, 20, 40, 50, 60], ['[SponsorBlock]: Intermission/Intro Animation', 'c1', 'c2', 'c3', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -437,8 +465,10 @@ def test_remove_marked_arrange_sponsors_CutsAtVideoBoundaries(self):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsOverlapChaptersAtVideoBoundaries(self):
- chapters = (self._chapters([10, 40, 50], ['c1', 'c2', 'c3'])
- + [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(30, 50, 'outro')])
+ chapters = [
+ *self._chapters([10, 40, 50], ['c1', 'c2', 'c3']),
+ self._sponsor_chapter(0, 20, 'intro'),
+ self._sponsor_chapter(30, 50, 'outro')]
expected = self._chapters(
[20, 30, 50], ['[SponsorBlock]: Intermission/Intro Animation', 'c2', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -450,8 +480,10 @@ def test_remove_marked_arrange_sponsors_CutsOverlapChaptersAtVideoBoundaries(sel
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_EverythingSponsored(self):
- chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
- + [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(20, 40, 'outro')])
+ chapters = [
+ *self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']),
+ self._sponsor_chapter(0, 20, 'intro'),
+ self._sponsor_chapter(20, 40, 'outro')]
expected = self._chapters([20, 40], ['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@@ -491,38 +523,39 @@ def test_remove_marked_arrange_sponsors_TinyChapterAtTheStartPrependedToTheNext(
chapters, self._chapters([2.5], ['c2']), cuts)
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromSponsorOverlapAreIgnored(self):
- chapters = self._chapters([1, 3, 4], ['c1', 'c2', 'c3']) + [
+ chapters = [
+ *self._chapters([1, 3, 4], ['c1', 'c2', 'c3']),
self._sponsor_chapter(1.5, 2.5, 'sponsor')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 2.5, 4], ['c1', '[SponsorBlock]: Sponsor', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsOverlapsAreIgnored(self):
- chapters = self._chapters([2, 3, 5], ['c1', 'c2', 'c3']) + [
+ chapters = [
+ *self._chapters([2, 3, 5], ['c1', 'c2', 'c3']),
self._sponsor_chapter(1, 3, 'sponsor'),
- self._sponsor_chapter(2.5, 4, 'selfpromo')
- ]
+ self._sponsor_chapter(2.5, 4, 'selfpromo')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1, 3, 4, 5], [
'c1', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsPrependedToTheNextSponsor(self):
- chapters = self._chapters([4], ['c']) + [
+ chapters = [
+ *self._chapters([4], ['c']),
self._sponsor_chapter(1.5, 2, 'sponsor'),
- self._sponsor_chapter(2, 4, 'selfpromo')
- ]
+ self._sponsor_chapter(2, 4, 'selfpromo')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 4], ['c', '[SponsorBlock]: Unpaid/Self Promotion']), [])
def test_remove_marked_arrange_sponsors_SmallestSponsorInTheOverlapGetsNamed(self):
self._pp._sponsorblock_chapter_title = '[SponsorBlock]: %(name)s'
- chapters = self._chapters([10], ['c']) + [
+ chapters = [
+ *self._chapters([10], ['c']),
self._sponsor_chapter(2, 8, 'sponsor'),
- self._sponsor_chapter(4, 6, 'selfpromo')
- ]
+ self._sponsor_chapter(4, 6, 'selfpromo')]
self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 4, 6, 8, 10], [
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
- '[SponsorBlock]: Sponsor', 'c'
+ '[SponsorBlock]: Sponsor', 'c',
]), [])
def test_make_concat_opts_CommonCase(self):


@@ -1,113 +1,471 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import threading
import unittest
import pytest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import abc
import contextlib
import enum
import functools
import http.server
import json
import random
import subprocess
import urllib.request
import socket
import struct
import time
from socketserver import (
BaseRequestHandler,
StreamRequestHandler,
ThreadingTCPServer,
)
- from test.helper import FakeYDL, get_params, is_download_test
+ from test.helper import http_server_port, verify_address_availability
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import ProxyError, TransportError
from yt_dlp.socks import (
SOCKS4_REPLY_VERSION,
SOCKS4_VERSION,
SOCKS5_USER_AUTH_SUCCESS,
SOCKS5_USER_AUTH_VERSION,
SOCKS5_VERSION,
Socks5AddressType,
Socks5Auth,
)
SOCKS5_USER_AUTH_FAILURE = 0x1
- @is_download_test
- class TestMultipleSocks(unittest.TestCase):
- @staticmethod
- def _check_params(attrs):
- params = get_params()
- for attr in attrs:
- if attr not in params:
- print('Missing %s. Skipping.' % attr)
- return
- return params
- def test_proxy_http(self):
- params = self._check_params(['primary_proxy', 'primary_server_ip'])
- if params is None:
- return
- ydl = FakeYDL({
- 'proxy': params['primary_proxy']
- })
- self.assertEqual(
- ydl.urlopen('http://yt-dl.org/ip').read().decode(),
- params['primary_server_ip'])
- def test_proxy_https(self):
- params = self._check_params(['primary_proxy', 'primary_server_ip'])
- if params is None:
- return
- ydl = FakeYDL({
- 'proxy': params['primary_proxy']
- })
- self.assertEqual(
- ydl.urlopen('https://yt-dl.org/ip').read().decode(),
- params['primary_server_ip'])
- def test_secondary_proxy_http(self):
- params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
- if params is None:
- return
- ydl = FakeYDL()
- req = urllib.request.Request('http://yt-dl.org/ip')
- req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
- self.assertEqual(
- ydl.urlopen(req).read().decode(),
- params['secondary_server_ip'])
- def test_secondary_proxy_https(self):
- params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
- if params is None:
- return
- ydl = FakeYDL()
- req = urllib.request.Request('https://yt-dl.org/ip')
- req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
- self.assertEqual(
- ydl.urlopen(req).read().decode(),
- params['secondary_server_ip'])
class Socks4CD(enum.IntEnum):
REQUEST_GRANTED = 90
REQUEST_REJECTED_OR_FAILED = 91
REQUEST_REJECTED_CANNOT_CONNECT_TO_IDENTD = 92
REQUEST_REJECTED_DIFFERENT_USERID = 93
- @is_download_test
- class TestSocks(unittest.TestCase):
- _SKIP_SOCKS_TEST = True
class Socks5Reply(enum.IntEnum):
SUCCEEDED = 0x0
GENERAL_FAILURE = 0x1
CONNECTION_NOT_ALLOWED = 0x2
NETWORK_UNREACHABLE = 0x3
HOST_UNREACHABLE = 0x4
CONNECTION_REFUSED = 0x5
TTL_EXPIRED = 0x6
COMMAND_NOT_SUPPORTED = 0x7
ADDRESS_TYPE_NOT_SUPPORTED = 0x8
- def setUp(self):
- if self._SKIP_SOCKS_TEST:
class SocksTestRequestHandler(BaseRequestHandler):
def __init__(self, *args, socks_info=None, **kwargs):
self.socks_info = socks_info
super().__init__(*args, **kwargs)
class SocksProxyHandler(BaseRequestHandler):
def __init__(self, request_handler_class, socks_server_kwargs, *args, **kwargs):
self.socks_kwargs = socks_server_kwargs or {}
self.request_handler_class = request_handler_class
super().__init__(*args, **kwargs)
class Socks5ProxyHandler(StreamRequestHandler, SocksProxyHandler):
# SOCKS5 protocol https://tools.ietf.org/html/rfc1928
# SOCKS5 username/password authentication https://tools.ietf.org/html/rfc1929
def handle(self):
sleep = self.socks_kwargs.get('sleep')
if sleep:
time.sleep(sleep)
version, nmethods = self.connection.recv(2)
assert version == SOCKS5_VERSION
methods = list(self.connection.recv(nmethods))
auth = self.socks_kwargs.get('auth')
if auth is not None and Socks5Auth.AUTH_USER_PASS not in methods:
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NO_ACCEPTABLE))
self.server.close_request(self.request)
return
- self.port = random.randint(20000, 30000)
- self.server_process = subprocess.Popen([
- 'srelay', '-f', '-i', '127.0.0.1:%d' % self.port],
- stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
elif Socks5Auth.AUTH_USER_PASS in methods:
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_USER_PASS))
- def tearDown(self):
- if self._SKIP_SOCKS_TEST:
_, user_len = struct.unpack('!BB', self.connection.recv(2))
username = self.connection.recv(user_len).decode()
pass_len = ord(self.connection.recv(1))
password = self.connection.recv(pass_len).decode()
if username == auth[0] and password == auth[1]:
self.connection.sendall(struct.pack('!BB', SOCKS5_USER_AUTH_VERSION, SOCKS5_USER_AUTH_SUCCESS))
else:
self.connection.sendall(struct.pack('!BB', SOCKS5_USER_AUTH_VERSION, SOCKS5_USER_AUTH_FAILURE))
self.server.close_request(self.request)
return
- self.server_process.terminate()
- self.server_process.communicate()
elif Socks5Auth.AUTH_NONE in methods:
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NONE))
else:
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NO_ACCEPTABLE))
self.server.close_request(self.request)
return
- def _get_ip(self, protocol):
- if self._SKIP_SOCKS_TEST:
- return '127.0.0.1'
version, command, _, address_type = struct.unpack('!BBBB', self.connection.recv(4))
socks_info = {
'version': version,
'auth_methods': methods,
'command': command,
'client_address': self.client_address,
'ipv4_address': None,
'domain_address': None,
'ipv6_address': None,
}
if address_type == Socks5AddressType.ATYP_IPV4:
socks_info['ipv4_address'] = socket.inet_ntoa(self.connection.recv(4))
elif address_type == Socks5AddressType.ATYP_DOMAINNAME:
socks_info['domain_address'] = self.connection.recv(ord(self.connection.recv(1))).decode()
elif address_type == Socks5AddressType.ATYP_IPV6:
socks_info['ipv6_address'] = socket.inet_ntop(socket.AF_INET6, self.connection.recv(16))
else:
self.server.close_request(self.request)
- ydl = FakeYDL({
- 'proxy': '%s://127.0.0.1:%d' % (protocol, self.port),
- })
- return ydl.urlopen('http://yt-dl.org/ip').read().decode()
socks_info['port'] = struct.unpack('!H', self.connection.recv(2))[0]
- def test_socks4(self):
- self.assertTrue(isinstance(self._get_ip('socks4'), str))
# dummy response, the returned IP is just a placeholder
self.connection.sendall(struct.pack(
'!BBBBIH', SOCKS5_VERSION, self.socks_kwargs.get('reply', Socks5Reply.SUCCEEDED), 0x0, 0x1, 0x7f000001, 40000))
- def test_socks4a(self):
- self.assertTrue(isinstance(self._get_ip('socks4a'), str))
self.request_handler_class(self.request, self.client_address, self.server, socks_info=socks_info)
- def test_socks5(self):
- self.assertTrue(isinstance(self._get_ip('socks5'), str))
class Socks4ProxyHandler(StreamRequestHandler, SocksProxyHandler):
# SOCKS4 protocol http://www.openssh.com/txt/socks4.protocol
# SOCKS4A protocol http://www.openssh.com/txt/socks4a.protocol
def _read_until_null(self):
return b''.join(iter(functools.partial(self.connection.recv, 1), b'\x00'))
def handle(self):
sleep = self.socks_kwargs.get('sleep')
if sleep:
time.sleep(sleep)
socks_info = {
'version': SOCKS4_VERSION,
'command': None,
'client_address': self.client_address,
'ipv4_address': None,
'port': None,
'domain_address': None,
}
version, command, dest_port, dest_ip = struct.unpack('!BBHI', self.connection.recv(8))
socks_info['port'] = dest_port
socks_info['command'] = command
if version != SOCKS4_VERSION:
self.server.close_request(self.request)
return
use_remote_dns = False
if 0x0 < dest_ip <= 0xFF:
use_remote_dns = True
else:
socks_info['ipv4_address'] = socket.inet_ntoa(struct.pack('!I', dest_ip))
user_id = self._read_until_null().decode()
if user_id != (self.socks_kwargs.get('user_id') or ''):
self.connection.sendall(struct.pack(
'!BBHI', SOCKS4_REPLY_VERSION, Socks4CD.REQUEST_REJECTED_DIFFERENT_USERID, 0x00, 0x00000000))
self.server.close_request(self.request)
return
if use_remote_dns:
socks_info['domain_address'] = self._read_until_null().decode()
# dummy response, the returned IP is just a placeholder
self.connection.sendall(
struct.pack(
'!BBHI', SOCKS4_REPLY_VERSION,
self.socks_kwargs.get('cd_reply', Socks4CD.REQUEST_GRANTED), 40000, 0x7f000001))
self.request_handler_class(self.request, self.client_address, self.server, socks_info=socks_info)
class IPv6ThreadingTCPServer(ThreadingTCPServer):
address_family = socket.AF_INET6
class SocksHTTPTestRequestHandler(http.server.BaseHTTPRequestHandler, SocksTestRequestHandler):
def do_GET(self):
if self.path == '/socks_info':
payload = json.dumps(self.socks_info.copy())
self.send_response(200)
self.send_header('Content-Type', 'application/json; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload.encode())
class SocksWebSocketTestRequestHandler(SocksTestRequestHandler):
def handle(self):
import websockets.sync.server
protocol = websockets.ServerProtocol()
connection = websockets.sync.server.ServerConnection(socket=self.request, protocol=protocol, close_timeout=0)
connection.handshake()
connection.send(json.dumps(self.socks_info))
connection.close()
@contextlib.contextmanager
def socks_server(socks_server_class, request_handler, bind_ip=None, **socks_server_kwargs):
server = server_thread = None
try:
bind_address = bind_ip or '127.0.0.1'
server_type = ThreadingTCPServer if '.' in bind_address else IPv6ThreadingTCPServer
server = server_type(
(bind_address, 0), functools.partial(socks_server_class, request_handler, socks_server_kwargs))
server_port = http_server_port(server)
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
if '.' not in bind_address:
yield f'[{bind_address}]:{server_port}'
else:
yield f'{bind_address}:{server_port}'
finally:
server.shutdown()
server.server_close()
server_thread.join(2.0)
class SocksProxyTestContext(abc.ABC):
REQUEST_HANDLER_CLASS = None
def socks_server(self, server_class, *args, **kwargs):
return socks_server(server_class, self.REQUEST_HANDLER_CLASS, *args, **kwargs)
@abc.abstractmethod
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs) -> dict:
"""return a dict of socks_info"""
class HTTPSocksTestProxyContext(SocksProxyTestContext):
REQUEST_HANDLER_CLASS = SocksHTTPTestRequestHandler
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'http://{target_domain or "127.0.0.1"}:{target_port or "40000"}/socks_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
class WebSocketSocksTestProxyContext(SocksProxyTestContext):
REQUEST_HANDLER_CLASS = SocksWebSocketTestRequestHandler
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'ws://{target_domain or "127.0.0.1"}:{target_port or "40000"}', **req_kwargs)
handler.validate(request)
ws = handler.send(request)
ws.send('socks_info')
socks_info = ws.recv()
ws.close()
return json.loads(socks_info)
CTX_MAP = {
'http': HTTPSocksTestProxyContext,
'ws': WebSocketSocksTestProxyContext,
}
@pytest.fixture(scope='module')
def ctx(request):
return CTX_MAP[request.param]()
@pytest.mark.parametrize(
'handler,ctx', [
('Urllib', 'http'),
('Requests', 'http'),
('Websockets', 'ws'),
('CurlCFFI', 'http'),
], indirect=True)
class TestSocks4Proxy:
def test_socks4_no_auth(self, handler, ctx):
with handler() as rh:
with ctx.socks_server(Socks4ProxyHandler) as server_address:
response = ctx.socks_info_request(
rh, proxies={'all': f'socks4://{server_address}'})
assert response['version'] == 4
def test_socks4_auth(self, handler, ctx):
with handler() as rh:
with ctx.socks_server(Socks4ProxyHandler, user_id='user') as server_address:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh, proxies={'all': f'socks4://{server_address}'})
response = ctx.socks_info_request(
rh, proxies={'all': f'socks4://user:@{server_address}'})
assert response['version'] == 4
def test_socks4a_ipv4_target(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
with handler(proxies={'all': f'socks4a://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['version'] == 4
assert (response['ipv4_address'] == '127.0.0.1') != (response['domain_address'] == '127.0.0.1')
def test_socks4a_domain_target(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
with handler(proxies={'all': f'socks4a://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='localhost')
assert response['version'] == 4
assert response['ipv4_address'] is None
assert response['domain_address'] == 'localhost'
def test_ipv4_client_source_address(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={'all': f'socks4://{server_address}'},
source_address=source_address) as rh:
response = ctx.socks_info_request(rh)
assert response['client_address'][0] == source_address
assert response['version'] == 4
@pytest.mark.parametrize('reply_code', [
Socks4CD.REQUEST_REJECTED_OR_FAILED,
Socks4CD.REQUEST_REJECTED_CANNOT_CONNECT_TO_IDENTD,
Socks4CD.REQUEST_REJECTED_DIFFERENT_USERID,
])
def test_socks4_errors(self, handler, ctx, reply_code):
with ctx.socks_server(Socks4ProxyHandler, cd_reply=reply_code) as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh)
def test_ipv6_socks4_proxy(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler, bind_ip='::1') as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['client_address'][0] == '::1'
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 4
def test_timeout(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler, sleep=2) as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}, timeout=0.5) as rh:
with pytest.raises(TransportError):
ctx.socks_info_request(rh)
@pytest.mark.parametrize(
'handler,ctx', [
('Urllib', 'http'),
('Requests', 'http'),
('Websockets', 'ws'),
('CurlCFFI', 'http'),
], indirect=True)
class TestSocks5Proxy:
def test_socks5_no_auth(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh)
assert response['auth_methods'] == [0x0]
assert response['version'] == 5
def test_socks5_user_pass(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler, auth=('test', 'testpass')) as server_address:
with handler() as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh, proxies={'all': f'socks5://{server_address}'})
response = ctx.socks_info_request(
rh, proxies={'all': f'socks5://test:testpass@{server_address}'})
assert response['auth_methods'] == [Socks5Auth.AUTH_NONE, Socks5Auth.AUTH_USER_PASS]
assert response['version'] == 5
def test_socks5_ipv4_target(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 5
def test_socks5_domain_target(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='localhost')
assert (response['ipv4_address'] == '127.0.0.1') != (response['ipv6_address'] == '::1')
assert response['version'] == 5
def test_socks5h_domain_target(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5h://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='localhost')
assert response['ipv4_address'] is None
assert response['domain_address'] == 'localhost'
assert response['version'] == 5
def test_socks5h_ip_target(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5h://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['ipv4_address'] == '127.0.0.1'
assert response['domain_address'] is None
assert response['version'] == 5
def test_socks5_ipv6_destination(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='[::1]')
assert response['ipv6_address'] == '::1'
assert response['version'] == 5
def test_ipv6_socks5_proxy(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler, bind_ip='::1') as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['client_address'][0] == '::1'
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 5
# XXX: is there any feasible way of testing IPv6 source addresses?
# Same would go for non-proxy source_address test...
def test_ipv4_client_source_address(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={'all': f'socks5://{server_address}'}, source_address=source_address) as rh:
response = ctx.socks_info_request(rh)
assert response['client_address'][0] == source_address
assert response['version'] == 5
@pytest.mark.parametrize('reply_code', [
Socks5Reply.GENERAL_FAILURE,
Socks5Reply.CONNECTION_NOT_ALLOWED,
Socks5Reply.NETWORK_UNREACHABLE,
Socks5Reply.HOST_UNREACHABLE,
Socks5Reply.CONNECTION_REFUSED,
Socks5Reply.TTL_EXPIRED,
Socks5Reply.COMMAND_NOT_SUPPORTED,
Socks5Reply.ADDRESS_TYPE_NOT_SUPPORTED,
])
def test_socks5_errors(self, handler, ctx, reply_code):
with ctx.socks_server(Socks5ProxyHandler, reply=reply_code) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh)
def test_timeout(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler, sleep=2) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}, timeout=1) as rh:
with pytest.raises(TransportError):
ctx.socks_info_request(rh)
if __name__ == '__main__':
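The schemes exercised above map directly onto yt-dlp's `proxy` option; a usage sketch (the proxy address and URL are illustrative placeholders):

from yt_dlp import YoutubeDL

# 'socks5h' resolves hostnames on the proxy (remote DNS), as tested above;
# 'socks5', 'socks4' and 'socks4a' are accepted the same way
with YoutubeDL({'proxy': 'socks5h://127.0.0.1:1080'}) as ydl:
    ydl.download(['https://example.com/video'])  # placeholder URL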


@@ -40,12 +40,11 @@ def setUp(self):
self.ie = self.IE()
self.DL.add_info_extractor(self.ie)
if not self.IE.working():
- print('Skipping: %s marked as not _WORKING' % self.IE.ie_key())
+ print(f'Skipping: {self.IE.ie_key()} marked as not _WORKING')
self.skipTest('IE marked as not _WORKING')
def getInfoDict(self):
- info_dict = self.DL.extract_info(self.url, download=False)
- return info_dict
+ return self.DL.extract_info(self.url, download=False)
def getSubtitles(self):
info_dict = self.getInfoDict()
@@ -87,7 +86,7 @@ def test_youtube_allsubtitles(self):
self.assertEqual(md5(subtitles['en']), 'ae1bd34126571a77aabd4d276b28044d')
self.assertEqual(md5(subtitles['it']), '0e0b667ba68411d88fd1c5f4f4eab2f9')
for lang in ['fr', 'de']:
- self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
+ self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted')
def _test_subtitles_format(self, fmt, md5_hash, lang='en'):
self.DL.params['writesubtitles'] = True
@@ -157,7 +156,7 @@ def test_allsubtitles(self):
self.assertEqual(md5(subtitles['en']), '976553874490cba125086bbfea3ff76f')
self.assertEqual(md5(subtitles['fr']), '594564ec7d588942e384e920e5341792')
for lang in ['es', 'fr', 'de']:
- self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
+ self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted')
def test_nosubtitles(self):
self.DL.expect_warning('video doesn\'t have subtitles')
@@ -182,7 +181,7 @@ def test_allsubtitles(self):
self.assertEqual(md5(subtitles['en']), '4262c1665ff928a2dada178f62cb8d14')
self.assertEqual(md5(subtitles['fr']), '66a63f7f42c97a50f8c0e90bc7797bb5')
for lang in ['es', 'fr', 'de']:
- self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
+ self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted')
@is_download_test

test/test_traversal.py (new file, 444 lines)

@@ -0,0 +1,444 @@
import http.cookies
import re
import xml.etree.ElementTree
import pytest
from yt_dlp.utils import dict_get, int_or_none, str_or_none
from yt_dlp.utils.traversal import traverse_obj
_TEST_DATA = {
100: 100,
1.2: 1.2,
'str': 'str',
'None': None,
'...': ...,
'urls': [
{'index': 0, 'url': 'https://www.example.com/0'},
{'index': 1, 'url': 'https://www.example.com/1'},
],
'data': (
{'index': 2},
{'index': 3},
),
'dict': {},
}
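Every case below runs against this fixture; as a quick orientation, a branching query over it (a sketch using the imported `traverse_obj`):

# `...` branches over the list, so this collects every nested 'url'
assert traverse_obj(_TEST_DATA, ('urls', ..., 'url')) == [
    'https://www.example.com/0', 'https://www.example.com/1']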
class TestTraversal:
def test_traversal_base(self):
assert traverse_obj(_TEST_DATA, ('str',)) == 'str', \
'allow tuple path'
assert traverse_obj(_TEST_DATA, ['str']) == 'str', \
'allow list path'
assert traverse_obj(_TEST_DATA, (value for value in ('str',))) == 'str', \
'allow iterable path'
assert traverse_obj(_TEST_DATA, 'str') == 'str', \
'single items should be treated as a path'
assert traverse_obj(_TEST_DATA, 100) == 100, \
'allow int path'
assert traverse_obj(_TEST_DATA, 1.2) == 1.2, \
'allow float path'
assert traverse_obj(_TEST_DATA, None) == _TEST_DATA, \
'`None` should not perform any modification'
def test_traversal_ellipsis(self):
assert traverse_obj(_TEST_DATA, ...) == [x for x in _TEST_DATA.values() if x not in (None, {})], \
'`...` should give all non discarded values'
assert traverse_obj(_TEST_DATA, ('urls', 0, ...)) == list(_TEST_DATA['urls'][0].values()), \
'`...` selection for dicts should select all values'
assert traverse_obj(_TEST_DATA, (..., ..., 'url')) == ['https://www.example.com/0', 'https://www.example.com/1'], \
'nested `...` queries should work'
assert traverse_obj(_TEST_DATA, (..., ..., 'index')) == list(range(4)), \
'`...` query result should be flattened'
assert traverse_obj(iter(range(4)), ...) == list(range(4)), \
'`...` should accept iterables'
def test_traversal_function(self):
filter_func = lambda x, y: x == 'urls' and isinstance(y, list)
assert traverse_obj(_TEST_DATA, filter_func) == [_TEST_DATA['urls']], \
'function as query key should perform a filter based on (key, value)'
assert traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)) == ['str'], \
'exceptions in the query function should be caught'
assert traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0) == [0, 2], \
'function key should accept iterables'
# Wrong function signature should raise (debug mode)
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, lambda a: ...)
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, lambda a, b, c: ...)
def test_traversal_set(self):
# transformation/type, like `expected_type`
assert traverse_obj(_TEST_DATA, (..., {str.upper})) == ['STR'], \
'Function in set should be a transformation'
assert traverse_obj(_TEST_DATA, (..., {str})) == ['str'], \
'Type in set should be a type filter'
assert traverse_obj(_TEST_DATA, (..., {str, int})) == [100, 'str'], \
'Multiple types in set should be a type filter'
assert traverse_obj(_TEST_DATA, {dict}) == _TEST_DATA, \
'A single set should be wrapped into a path'
assert traverse_obj(_TEST_DATA, (..., {str.upper})) == ['STR'], \
'Transformation function should not raise'
expected = [x for x in map(str_or_none, _TEST_DATA.values()) if x is not None]
assert traverse_obj(_TEST_DATA, (..., {str_or_none})) == expected, \
'Function in set should be a transformation'
assert traverse_obj(_TEST_DATA, ('fail', {lambda _: 'const'})) == 'const', \
'Function in set should always be called'
# Sets with length < 1 or > 1 not including only types should raise
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, set())
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, {str.upper, str})
def test_traversal_slice(self):
_SLICE_DATA = [0, 1, 2, 3, 4]
assert traverse_obj(_TEST_DATA, ('dict', slice(1))) is None, \
'slice on a dictionary should not throw'
assert traverse_obj(_SLICE_DATA, slice(1)) == _SLICE_DATA[:1], \
'slice key should apply slice to sequence'
assert traverse_obj(_SLICE_DATA, slice(1, 2)) == _SLICE_DATA[1:2], \
'slice key should apply slice to sequence'
assert traverse_obj(_SLICE_DATA, slice(1, 4, 2)) == _SLICE_DATA[1:4:2], \
'slice key should apply slice to sequence'
def test_traversal_alternatives(self):
assert traverse_obj(_TEST_DATA, 'fail', 'str') == 'str', \
'multiple `paths` should be treated as alternative paths'
assert traverse_obj(_TEST_DATA, 'str', 100) == 'str', \
'alternatives should exit early'
assert traverse_obj(_TEST_DATA, 'fail', 'fail') is None, \
'alternatives should return `default` if exhausted'
assert traverse_obj(_TEST_DATA, (..., 'fail'), 100) == 100, \
'alternatives should track their own branching return'
assert traverse_obj(_TEST_DATA, ('dict', ...), ('data', ...)) == list(_TEST_DATA['data']), \
'alternatives on empty objects should search further'
def test_traversal_branching_nesting(self):
assert traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')) == ['https://www.example.com/0'], \
'tuple as key should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', [3, 0], 'url')) == ['https://www.example.com/0'], \
'list as key should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', ((1, 'fail'), (0, 'url')))) == ['https://www.example.com/0'], \
'double nesting in path should be treated as paths'
assert traverse_obj(['0', [1, 2]], [(0, 1), 0]) == [1], \
'do not fail early on branching'
expected = ['https://www.example.com/0', 'https://www.example.com/1']
assert traverse_obj(_TEST_DATA, ('urls', ((0, ('fail', 'url')), (1, 'url')))) == expected, \
'triple nesting in path should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', ('fail', (..., 'url')))) == expected, \
'ellipsis as branch path start gets flattened'
def test_traversal_dict(self):
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}) == {0: 100, 1: 1.2}, \
'dict key should result in a dict with the same keys'
expected = {0: 'https://www.example.com/0'}
assert traverse_obj(_TEST_DATA, {0: ('urls', 0, 'url')}) == expected, \
'dict key should allow paths'
expected = {0: ['https://www.example.com/0']}
assert traverse_obj(_TEST_DATA, {0: ('urls', (3, 0), 'url')}) == expected, \
'tuple in dict path should be treated as branches'
assert traverse_obj(_TEST_DATA, {0: ('urls', ((1, 'fail'), (0, 'url')))}) == expected, \
'double nesting in dict path should be treated as paths'
expected = {0: ['https://www.example.com/1', 'https://www.example.com/0']}
assert traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}) == expected, \
'triple nesting in dict path should be treated as branches'
assert traverse_obj(_TEST_DATA, {0: 'fail'}) == {}, \
'remove `None` values when top level dict key fails'
assert traverse_obj(_TEST_DATA, {0: 'fail'}, default=...) == {0: ...}, \
'use `default` if key fails and `default`'
assert traverse_obj(_TEST_DATA, {0: 'dict'}) == {}, \
'remove empty values when dict key'
assert traverse_obj(_TEST_DATA, {0: 'dict'}, default=...) == {0: ...}, \
'use `default` when dict key and `default`'
assert traverse_obj(_TEST_DATA, {0: {0: 'fail'}}) == {}, \
'remove empty values when nested dict key fails'
assert traverse_obj(None, {0: 'fail'}) == {}, \
'default to dict if pruned'
assert traverse_obj(None, {0: 'fail'}, default=...) == {0: ...}, \
'default to dict if pruned and default is given'
assert traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...) == {0: {0: ...}}, \
'use nested `default` when nested dict key fails and `default`'
assert traverse_obj(_TEST_DATA, {0: ('dict', ...)}) == {}, \
'remove key if branch in dict key not successful'
def test_traversal_default(self):
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
assert traverse_obj(_DEFAULT_DATA, 'fail') is None, \
'default value should be `None`'
assert traverse_obj(_DEFAULT_DATA, 'fail', 'fail', default=...) == ..., \
'chained fails should result in default'
assert traverse_obj(_DEFAULT_DATA, 'None', 'int') == 0, \
'should not short circuit on `None`'
assert traverse_obj(_DEFAULT_DATA, 'fail', default=1) == 1, \
'invalid dict key should result in `default`'
assert traverse_obj(_DEFAULT_DATA, 'None', default=1) == 1, \
'`None` is a deliberate sentinel and should become `default`'
assert traverse_obj(_DEFAULT_DATA, ('list', 10)) is None, \
'`IndexError` should result in `default`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=1) == 1, \
'if branched but not successful return `default` if defined, not `[]`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=None) is None, \
'if branched but not successful return `default` even if `default` is `None`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail')) == [], \
'if branched but not successful return `[]`, not `default`'
assert traverse_obj(_DEFAULT_DATA, ('list', ...)) == [], \
'if branched but object is empty return `[]`, not `default`'
assert traverse_obj(None, ...) == [], \
'if branched but object is `None` return `[]`, not `default`'
assert traverse_obj({0: None}, (0, ...)) == [], \
'if branched but state is `None` return `[]`, not `default`'
@pytest.mark.parametrize('path', [
('fail', ...),
(..., 'fail'),
100 * ('fail',) + (...,),
(...,) + 100 * ('fail',),
])
def test_traversal_branching(self, path):
assert traverse_obj({}, path) == [], \
'if branched but state is `None`, return `[]` (not `default`)'
assert traverse_obj({}, 'fail', path) == [], \
'if branching in last alternative and previous did not match, return `[]` (not `default`)'
assert traverse_obj({0: 'x'}, 0, path) == 'x', \
'if branching in last alternative and previous did match, return single value'
assert traverse_obj({0: 'x'}, path, 0) == 'x', \
'if branching in first alternative and non-branching path does match, return single value'
assert traverse_obj({}, path, 'fail') is None, \
'if branching in first alternative and non-branching path does not match, return `default`'
def test_traversal_expected_type(self):
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str) == 'str', \
'accept matching `expected_type` type'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int) is None, \
'reject non matching `expected_type` type'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)) == '0', \
'transform type using type function'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0) is None, \
'wrap expected_type function in try_call'
assert traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str) == ['str'], \
'eliminate items that expected_type fails on'
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int) == {0: 100}, \
'type as expected_type should filter dict values'
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none) == {0: '100', 1: '1.2'}, \
'function as expected_type should transform dict values'
assert traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int) == 1, \
'expected_type should not filter non final dict values'
assert traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int) == {0: {0: 100}}, \
'expected_type should transform deep dict values'
assert traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)) == [{0: ...}, {0: ...}], \
'expected_type should transform branched dict values'
assert traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int) == [4], \
'expected_type regression for type matching in tuple branching'
assert traverse_obj(_TEST_DATA, ['data', ...], expected_type=int) == [], \
'expected_type regression for type matching in dict result'
def test_traversal_get_all(self):
_GET_ALL_DATA = {'key': [0, 1, 2]}
assert traverse_obj(_GET_ALL_DATA, ('key', ...), get_all=False) == 0, \
'if not `get_all`, return only first matching value'
assert traverse_obj(_GET_ALL_DATA, ..., get_all=False) == [0, 1, 2], \
'do not overflatten if not `get_all`'
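# Illustrative sketch (editor addition, not part of this diff): `get_all=False`
# stops at the first match of a branching path instead of collecting a list.
from yt_dlp.utils import traverse_obj
data = {'key': [0, 1, 2]}
assert traverse_obj(data, ('key', ...)) == [0, 1, 2]         # branching collects all matches
assert traverse_obj(data, ('key', ...), get_all=False) == 0  # first match only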
def test_traversal_casesense(self):
_CASESENSE_DATA = {
'KeY': 'value0',
0: {
'KeY': 'value1',
0: {'KeY': 'value2'},
},
}
assert traverse_obj(_CASESENSE_DATA, 'key') is None, \
'dict keys should be case sensitive unless `casesense`'
assert traverse_obj(_CASESENSE_DATA, 'keY', casesense=False) == 'value0', \
'allow non matching key case if `casesense`'
assert traverse_obj(_CASESENSE_DATA, [0, ('keY',)], casesense=False) == ['value1'], \
'allow non matching key case in branch if `casesense`'
assert traverse_obj(_CASESENSE_DATA, [0, ([0, 'keY'],)], casesense=False) == ['value2'], \
'allow non matching key case in branch path if `casesense`'
def test_traversal_traverse_string(self):
_TRAVERSE_STRING_DATA = {'str': 'str', 1.2: 1.2}
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0)) is None, \
'do not traverse into string if not `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0), traverse_string=True) == 's', \
'traverse into string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, (1.2, 1), traverse_string=True) == '.', \
'traverse into converted data if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...), traverse_string=True) == 'str', \
'`...` should result in string (same value) if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)), traverse_string=True) == 'sr', \
'`slice` should result in string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == 's'), traverse_string=True) == 'str', \
'function should result in string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)), traverse_string=True) == ['s', 'r'], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, ...), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, lambda x, y: True), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, slice(1)), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
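# Illustrative sketch (editor addition, not part of this diff): strings are
# opaque unless `traverse_string` is passed, which enables indexing and slicing.
from yt_dlp.utils import traverse_obj
assert traverse_obj({'s': 'str'}, ('s', 0)) is None                        # opaque by default
assert traverse_obj({'s': 'str'}, ('s', 0), traverse_string=True) == 's'   # opt-in indexing
assert traverse_obj({'s': 'str'}, ('s', slice(0, None, 2)), traverse_string=True) == 'sr'  # opt-in slicing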
def test_traversal_re(self):
mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
assert traverse_obj(mobj, ...) == [x for x in mobj.groups() if x is not None], \
'`...` on a `re.Match` should give its `groups()`'
assert traverse_obj(mobj, lambda k, _: k in (0, 2)) == ['0123', '3'], \
'function on a `re.Match` should give groupno, value starting at 0'
assert traverse_obj(mobj, 'group') == '3', \
'str key on a `re.Match` should give group with that name'
assert traverse_obj(mobj, 2) == '3', \
'int key on a `re.Match` should give group with that number'
assert traverse_obj(mobj, 'gRoUp', casesense=False) == '3', \
'str key on a `re.Match` should respect casesense'
assert traverse_obj(mobj, 'fail') is None, \
'failing str key on a `re.Match` should return `default`'
assert traverse_obj(mobj, 'gRoUpS', casesense=False) is None, \
'failing str key on a `re.Match` should return `default`'
assert traverse_obj(mobj, 8) is None, \
'failing int key on a `re.Match` should return `default`'
assert traverse_obj(mobj, lambda k, _: k in (0, 'group')) == ['0123', '3'], \
'function on a `re.Match` should give group name as well'
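# Illustrative sketch (editor addition, not part of this diff): `re.Match`
# objects traverse by group name or number, with misses falling back to `default`.
import re
from yt_dlp.utils import traverse_obj
mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
assert traverse_obj(mobj, 'group') == '3'  # named group
assert traverse_obj(mobj, 2) == '3'        # group number
assert traverse_obj(mobj, 8) is None       # missing group -> default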
def test_traversal_xml_etree(self):
etree = xml.etree.ElementTree.fromstring('''<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>''')
assert traverse_obj(etree, '') == etree, \
'empty str key should return the element itself'
assert traverse_obj(etree, 'country') == list(etree), \
'str key should yield all children with that tag name'
assert traverse_obj(etree, ...) == list(etree), \
'`...` as key should return all children'
assert traverse_obj(etree, lambda _, x: x[0].text == '4') == [etree[1]], \
'function as key should get element as value'
assert traverse_obj(etree, lambda i, _: i == 1) == [etree[1]], \
'function as key should get index as key'
assert traverse_obj(etree, 0) == etree[0], \
'int key should return the nth child'
expected = ['Austria', 'Switzerland', 'Malaysia', 'Costa Rica', 'Colombia']
assert traverse_obj(etree, './/neighbor/@name') == expected, \
'`@<attribute>` at end of path should give that attribute'
assert traverse_obj(etree, '//neighbor/@fail') == [None, None, None, None, None], \
'`@<nonexistent>` at end of path should give `None`'
assert traverse_obj(etree, ('//neighbor/@', 2)) == {'name': 'Malaysia', 'direction': 'N'}, \
'`@` should give the full attribute dict'
assert traverse_obj(etree, '//year/text()') == ['2008', '2011', '2011'], \
'`text()` at end of path should give the inner text'
assert traverse_obj(etree, '//*[@direction]/@direction') == ['E', 'W', 'N', 'W', 'E'], \
'full Python xpath features should be supported'
assert traverse_obj(etree, (0, '@name')) == 'Liechtenstein', \
'special transformations should act on current element'
assert traverse_obj(etree, ('country', 0, ..., 'text()', {int_or_none})) == [1, 2008, 141100], \
'special transformations should act on current element'
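# Illustrative sketch (editor addition, not part of this diff): element
# traversal in miniature, using the `@<attribute>` and `text()` specials above.
import xml.etree.ElementTree
from yt_dlp.utils import traverse_obj
root = xml.etree.ElementTree.fromstring('<a><b name="x">1</b></a>')
assert traverse_obj(root, 'b') == [root[0]]           # tag name selects matching children
assert traverse_obj(root, ('b', 0, '@name')) == 'x'   # read an attribute
assert traverse_obj(root, ('b', 0, 'text()')) == '1'  # read the inner text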
def test_traversal_unbranching(self):
assert traverse_obj(_TEST_DATA, [(100, 1.2), all]) == [100, 1.2], \
'`all` should give all results as list'
assert traverse_obj(_TEST_DATA, [(100, 1.2), any]) == 100, \
'`any` should give the first result'
assert traverse_obj(_TEST_DATA, [100, all]) == [100], \
'`all` should give list if non branching'
assert traverse_obj(_TEST_DATA, [100, any]) == 100, \
'`any` should give single item if non branching'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), all]) == [100], \
'`all` should filter `None` and empty dict'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), any]) == 100, \
'`any` should filter `None` and empty dict'
assert traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}]) == {'all': [100, 1.2], 'any': 100}, \
'`all`/`any` should apply to each dict path separately'
assert traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}], get_all=False) == {'all': [100, 1.2], 'any': 100}, \
'`all`/`any` should apply to dict regardless of `get_all`'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, {float}]) is None, \
'`all` should reset branching status'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), any, {float}]) is None, \
'`any` should reset branching status'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, ..., {float}]) == [1.2], \
'`all` should allow further branching'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 'urls', 'data'), any, ..., 'index']) == [0, 1], \
'`any` should allow further branching'
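# Illustrative sketch (editor addition, not part of this diff): `all` collapses
# a branched result into one list, `any` into its first value, both dropping `None`.
from yt_dlp.utils import traverse_obj
data = {'a': None, 'b': 1, 'c': 2}
assert traverse_obj(data, [('a', 'b', 'c'), all]) == [1, 2]
assert traverse_obj(data, [('a', 'b', 'c'), any]) == 1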
def test_traversal_morsel(self):
values = {
'expires': 'a',
'path': 'b',
'comment': 'c',
'domain': 'd',
'max-age': 'e',
'secure': 'f',
'httponly': 'g',
'version': 'h',
'samesite': 'i',
}
morsel = http.cookies.Morsel()
morsel.set('item_key', 'item_value', 'coded_value')
morsel.update(values)
values['key'] = 'item_key'
values['value'] = 'item_value'
for key, value in values.items():
assert traverse_obj(morsel, key) == value, \
'Morsel should provide access to all values'
assert traverse_obj(morsel, ...) == list(values.values()), \
'`...` should yield all values'
assert traverse_obj(morsel, lambda k, v: True) == list(values.values()), \
'function key should yield all values'
assert traverse_obj(morsel, [(None,), any]) == morsel, \
'Morsel should not be implicitly changed to dict on usage'
class TestDictGet:
def test_dict_get(self):
FALSE_VALUES = {
'none': None,
'false': False,
'zero': 0,
'empty_string': '',
'empty_list': [],
}
d = {**FALSE_VALUES, 'a': 42}
assert dict_get(d, 'a') == 42
assert dict_get(d, 'b') is None
assert dict_get(d, 'b', 42) == 42
assert dict_get(d, ('a',)) == 42
assert dict_get(d, ('b', 'a')) == 42
assert dict_get(d, ('b', 'c', 'a', 'd')) == 42
assert dict_get(d, ('b', 'c')) is None
assert dict_get(d, ('b', 'c'), 42) == 42
for key, false_value in FALSE_VALUES.items():
assert dict_get(d, ('b', 'c', key)) is None
assert dict_get(d, ('b', 'c', key), skip_false_values=False) == false_value
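# Illustrative sketch (editor addition, not part of this diff): `dict_get`
# tries keys in order and by default skips falsy values, unlike a plain
# `dict.get` chain.
from yt_dlp.utils import dict_get
d = {'x': 0, 'y': 1}
assert dict_get(d, ('x', 'y')) == 1                           # 0 is skipped as falsy
assert dict_get(d, ('x', 'y'), skip_false_values=False) == 0  # unless skipping is disabled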

test/test_update.py Normal file

@@ -0,0 +1,228 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, report_warning
from yt_dlp.update import UpdateInfo, Updater
# XXX: Keep in sync with yt_dlp.update.UPDATE_SOURCES
TEST_UPDATE_SOURCES = {
'stable': 'yt-dlp/yt-dlp',
'nightly': 'yt-dlp/yt-dlp-nightly-builds',
'master': 'yt-dlp/yt-dlp-master-builds',
}
TEST_API_DATA = {
'yt-dlp/yt-dlp/latest': {
'tag_name': '2023.12.31',
'target_commitish': 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',
'name': 'yt-dlp 2023.12.31',
'body': 'BODY',
},
'yt-dlp/yt-dlp-nightly-builds/latest': {
'tag_name': '2023.12.31.123456',
'target_commitish': 'master',
'name': 'yt-dlp nightly 2023.12.31.123456',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/cccccccccccccccccccccccccccccccccccccccc',
},
'yt-dlp/yt-dlp-master-builds/latest': {
'tag_name': '2023.12.31.987654',
'target_commitish': 'master',
'name': 'yt-dlp master 2023.12.31.987654',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/dddddddddddddddddddddddddddddddddddddddd',
},
'yt-dlp/yt-dlp/tags/testing': {
'tag_name': 'testing',
'target_commitish': '9999999999999999999999999999999999999999',
'name': 'testing',
'body': 'BODY',
},
'fork/yt-dlp/latest': {
'tag_name': '2050.12.31',
'target_commitish': 'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
'name': '2050.12.31',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr0000': {
'tag_name': 'pr0000',
'target_commitish': 'ffffffffffffffffffffffffffffffffffffffff',
'name': 'pr1234 2023.11.11.000000',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr1234': {
'tag_name': 'pr1234',
'target_commitish': '0000000000000000000000000000000000000000',
'name': 'pr1234 2023.12.31.555555',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr9999': {
'tag_name': 'pr9999',
'target_commitish': '1111111111111111111111111111111111111111',
'name': 'pr9999',
'body': 'BODY',
},
'fork/yt-dlp-satellite/tags/pr987': {
'tag_name': 'pr987',
'target_commitish': 'master',
'name': 'pr987',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/2222222222222222222222222222222222222222',
},
}
TEST_LOCKFILE_COMMENT = '# This file is used for regulating self-update'
TEST_LOCKFILE_V1 = rf'''{TEST_LOCKFILE_COMMENT}
lock 2022.08.18.36 .+ Python 3\.6
lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
'''
TEST_LOCKFILE_V2_TMPL = r'''%s
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
'''
TEST_LOCKFILE_V2 = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_COMMENT
TEST_LOCKFILE_ACTUAL = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_V1.rstrip('\n')
TEST_LOCKFILE_FORK = rf'''{TEST_LOCKFILE_ACTUAL}# Test if a fork blocks updates to non-numeric tags
lockV2 fork/yt-dlp pr0000 .+ Python 3.6
lockV2 fork/yt-dlp pr1234 (?!win_x86_exe).+ Python 3\.7
lockV2 fork/yt-dlp pr1234 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 fork/yt-dlp pr9999 .+ Python 3.11
'''
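# Rough illustration (editor addition, not the real `_process_update_spec`):
# each lock line is `lock <version> <identifier regex>` (or
# `lockV2 <repo> <version> <identifier regex>`), and an identifier matching
# the regex may not update past that version, as the tests below exercise.
import re
line = r'lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6'
_, repo, version, pattern = line.split(' ', 3)
assert version == '2022.08.18.36'
assert re.match(pattern, 'zip Python 3.6.0')       # this build is locked to 2022.08.18.36
assert not re.match(pattern, 'zip Python 3.12.0')  # this one is unaffected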
class FakeUpdater(Updater):
current_version = '2022.01.01'
current_commit = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
_channel = 'stable'
_origin = 'yt-dlp/yt-dlp'
_update_sources = TEST_UPDATE_SOURCES
def _download_update_spec(self, *args, **kwargs):
return TEST_LOCKFILE_ACTUAL
def _call_api(self, tag):
tag = f'tags/{tag}' if tag != 'latest' else tag
return TEST_API_DATA[f'{self.requested_repo}/{tag}']
def _report_error(self, msg, *args, **kwargs):
report_warning(msg)
class TestUpdate(unittest.TestCase):
maxDiff = None
def test_update_spec(self):
ydl = FakeYDL()
updater = FakeUpdater(ydl, 'stable')
def test(lockfile, identifier, input_tag, expect_tag, exact=False, repo='yt-dlp/yt-dlp'):
updater._identifier = identifier
updater._exact = exact
updater.requested_repo = repo
result = updater._process_update_spec(lockfile, input_tag)
self.assertEqual(
result, expect_tag,
f'{identifier!r} requesting {repo}@{input_tag} (exact={exact}) '
f'returned {result!r} instead of {expect_tag!r}')
for lockfile in (TEST_LOCKFILE_V1, TEST_LOCKFILE_V2, TEST_LOCKFILE_ACTUAL, TEST_LOCKFILE_FORK):
# Normal operation
test(lockfile, 'zip Python 3.12.0', '2023.12.31', '2023.12.31')
test(lockfile, 'zip stable Python 3.12.0', '2023.12.31', '2023.12.31', exact=True)
# Python 3.6 --update should update only to its lock
test(lockfile, 'zip Python 3.6.0', '2023.11.16', '2022.08.18.36')
# --update-to an exact version later than the lock should return None
test(lockfile, 'zip stable Python 3.6.0', '2023.11.16', None, exact=True)
# Python 3.7 should be able to update to its lock
test(lockfile, 'zip Python 3.7.0', '2023.11.16', '2023.11.16')
test(lockfile, 'zip stable Python 3.7.1', '2023.11.16', '2023.11.16', exact=True)
# Non-win_x86_exe builds on py3.7 must be locked
test(lockfile, 'zip Python 3.7.1', '2023.12.31', '2023.11.16')
test(lockfile, 'zip stable Python 3.7.1', '2023.12.31', None, exact=True)
test( # Windows Vista w/ win_x86_exe must be locked
lockfile, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-Vista-6.0.6003-SP2',
'2023.12.31', '2023.11.16')
test( # Windows 2008Server w/ win_x86_exe must be locked
lockfile, 'win_x86_exe Python 3.7.9 (CPython x86 32bit) - Windows-2008Server',
'2023.12.31', None, exact=True)
test( # Windows 7 w/ win_x86_exe py3.7 build should be able to update beyond lock
lockfile, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-7-6.1.7601-SP1',
'2023.12.31', '2023.12.31')
test( # Windows 8.1 w/ '2008Server' in platform string should be able to update beyond lock
lockfile, 'win_x86_exe Python 3.7.9 (CPython x86 32bit) - Windows-post2008Server-6.2.9200',
'2023.12.31', '2023.12.31', exact=True)
# Forks can block updates to non-numeric tags rather than lock
test(TEST_LOCKFILE_FORK, 'zip Python 3.6.3', 'pr0000', None, repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.7.4', 'pr0000', 'pr0000', repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.7.4', 'pr1234', None, repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip Python 3.8.1', 'pr1234', 'pr1234', repo='fork/yt-dlp', exact=True)
test(
TEST_LOCKFILE_FORK, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-Vista-6.0.6003-SP2',
'pr1234', None, repo='fork/yt-dlp')
test(
TEST_LOCKFILE_FORK, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-7-6.1.7601-SP1',
'2023.12.31', '2023.12.31', repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip Python 3.11.2', 'pr9999', None, repo='fork/yt-dlp', exact=True)
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.12.0', 'pr9999', 'pr9999', repo='fork/yt-dlp')
def test_query_update(self):
ydl = FakeYDL()
def test(target, expected, current_version=None, current_commit=None, identifier=None):
updater = FakeUpdater(ydl, target)
if current_version:
updater.current_version = current_version
if current_commit:
updater.current_commit = current_commit
updater._identifier = identifier or 'zip'
update_info = updater.query_update(_output=True)
self.assertDictEqual(
update_info.__dict__ if update_info else {}, expected.__dict__ if expected else {})
test('yt-dlp/yt-dlp@latest', UpdateInfo(
'2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
test('yt-dlp/yt-dlp-nightly-builds@latest', UpdateInfo(
'2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
test('yt-dlp/yt-dlp-master-builds@latest', UpdateInfo(
'2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
test('fork/yt-dlp@latest', UpdateInfo(
'2050.12.31', version='2050.12.31', requested_version='2050.12.31', commit='e' * 40))
test('fork/yt-dlp@pr0000', UpdateInfo(
'pr0000', version='2023.11.11.000000', requested_version='2023.11.11.000000', commit='f' * 40))
test('fork/yt-dlp@pr1234', UpdateInfo(
'pr1234', version='2023.12.31.555555', requested_version='2023.12.31.555555', commit='0' * 40))
test('fork/yt-dlp@pr9999', UpdateInfo(
'pr9999', version=None, requested_version=None, commit='1' * 40))
test('fork/yt-dlp-satellite@pr987', UpdateInfo(
'pr987', version=None, requested_version=None, commit='2' * 40))
test('yt-dlp/yt-dlp', None, current_version='2024.01.01')
test('stable', UpdateInfo(
'2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
test('nightly', UpdateInfo(
'2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
test('master', UpdateInfo(
'2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
test('testing', None, current_commit='9' * 40)
test('testing', UpdateInfo('testing', commit='9' * 40))
if __name__ == '__main__':
unittest.main()


@@ -1,30 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
from yt_dlp.update import rsa_verify
class TestUpdate(unittest.TestCase):
def test_rsa_verify(self):
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'versions.json'), 'rb') as f:
versions_info = f.read().decode()
versions_info = json.loads(versions_info)
signature = versions_info['signature']
del versions_info['signature']
self.assertTrue(rsa_verify(
json.dumps(versions_info, sort_keys=True).encode(),
signature, UPDATES_RSA_KEY))
if __name__ == '__main__':
unittest.main()

test/test_utils.py

@@ -2,10 +2,10 @@
# Allow direct execution
import os
import re
import sys
import unittest
import warnings
import datetime as dt
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -14,6 +14,7 @@
import io
import itertools
import json
import subprocess
import xml.etree.ElementTree
from yt_dlp.compat import (
@@ -27,7 +28,9 @@
ExtractorError,
InAdvancePagedList,
LazyList,
NO_DEFAULT,
OnDemandPagedList,
Popen,
age_restricted,
args_to_str,
base_url,
@@ -43,14 +46,12 @@
determine_ext,
determine_file_encoding,
dfxp2srt,
dict_get,
encode_base_n,
encode_compat_str,
encodeFilename,
escape_rfc3986,
escape_url,
expand_path,
extract_attributes,
extract_basic_auth,
find_xpath_attr,
fix_xml_ampersands,
float_or_none,
@@ -103,16 +104,13 @@
sanitize_filename,
sanitize_path,
sanitize_url,
sanitized_Request,
shell_quote,
smuggle_url,
str_or_none,
str_to_int,
strip_jsonp,
strip_or_none,
subtitles_filename,
timeconvert,
traverse_obj,
try_call,
unescapeHTML,
unified_strdate,
@@ -132,6 +130,13 @@
xpath_text,
xpath_with_ns,
)
from yt_dlp.utils._utils import _UnsafeExtensionError
from yt_dlp.utils.networking import (
HTTPHeaderDict,
escape_rfc3986,
normalize_url,
remove_dot_segments,
)
class TestUtil(unittest.TestCase):
@@ -258,15 +263,6 @@ def test_sanitize_url(self):
self.assertEqual(sanitize_url('https://foo.bar'), 'https://foo.bar')
self.assertEqual(sanitize_url('foo bar'), 'foo bar')
def test_extract_basic_auth(self):
auth_header = lambda url: sanitized_Request(url).get_header('Authorization')
self.assertFalse(auth_header('http://foo.bar'))
self.assertFalse(auth_header('http://:foo.bar'))
self.assertEqual(auth_header('http://@foo.bar'), 'Basic Og==')
self.assertEqual(auth_header('http://:pass@foo.bar'), 'Basic OnBhc3M=')
self.assertEqual(auth_header('http://user:@foo.bar'), 'Basic dXNlcjo=')
self.assertEqual(auth_header('http://user:pass@foo.bar'), 'Basic dXNlcjpwYXNz')
def test_expand_path(self):
def env(var):
return f'%{var}%' if sys.platform == 'win32' else f'${var}'
@@ -281,11 +277,18 @@ def env(var):
self.assertEqual(expand_path(env('HOME')), os.getenv('HOME'))
self.assertEqual(expand_path('~'), os.getenv('HOME'))
self.assertEqual(
expand_path('~/%s' % env('yt_dlp_EXPATH_PATH')),
'%s/expanded' % os.getenv('HOME'))
expand_path('~/{}'.format(env('yt_dlp_EXPATH_PATH'))),
'{}/expanded'.format(os.getenv('HOME')))
finally:
os.environ['HOME'] = old_home or ''
_uncommon_extensions = [
('exe', 'abc.exe.ext'),
('de', 'abc.de.ext'),
('../.mp4', None),
('..\\.mp4', None),
]
def test_prepend_extension(self):
self.assertEqual(prepend_extension('abc.ext', 'temp'), 'abc.temp.ext')
self.assertEqual(prepend_extension('abc.ext', 'temp', 'ext'), 'abc.temp.ext')
@@ -294,6 +297,19 @@ def test_prepend_extension(self):
self.assertEqual(prepend_extension('.abc', 'temp'), '.abc.temp')
self.assertEqual(prepend_extension('.abc.ext', 'temp'), '.abc.temp.ext')
# Test uncommon extensions
self.assertEqual(prepend_extension('abc.ext', 'bin'), 'abc.bin.ext')
for ext, result in self._uncommon_extensions:
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc', ext)
if result:
self.assertEqual(prepend_extension('abc.ext', ext, 'ext'), result)
else:
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc.ext', ext, 'ext')
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc.unexpected_ext', ext, 'ext')
def test_replace_extension(self):
self.assertEqual(replace_extension('abc.ext', 'temp'), 'abc.temp')
self.assertEqual(replace_extension('abc.ext', 'temp', 'ext'), 'abc.temp')
@@ -302,6 +318,16 @@ def test_replace_extension(self):
self.assertEqual(replace_extension('.abc', 'temp'), '.abc.temp')
self.assertEqual(replace_extension('.abc.ext', 'temp'), '.abc.temp')
# Test uncommon extensions
self.assertEqual(replace_extension('abc.ext', 'bin'), 'abc.unknown_video')
for ext, _ in self._uncommon_extensions:
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc', ext)
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc.ext', ext, 'ext')
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc.unexpected_ext', ext, 'ext')
def test_subtitles_filename(self):
self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt'), 'abc.en.vtt')
self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt', 'ext'), 'abc.en.vtt')
@@ -361,12 +387,12 @@ def test_datetime_from_str(self):
self.assertEqual(datetime_from_str('now+23hours', precision='hour'), datetime_from_str('now+23hours', precision='auto'))
def test_daterange(self):
_20century = DateRange("19000101", "20000101")
self.assertFalse("17890714" in _20century)
_ac = DateRange("00010101")
self.assertTrue("19690721" in _ac)
_firstmilenium = DateRange(end="10000101")
self.assertTrue("07110427" in _firstmilenium)
_20century = DateRange('19000101', '20000101')
self.assertFalse('17890714' in _20century)
_ac = DateRange('00010101')
self.assertTrue('19690721' in _ac)
_firstmilenium = DateRange(end='10000101')
self.assertTrue('07110427' in _firstmilenium)
def test_unified_dates(self):
self.assertEqual(unified_strdate('December 21, 2010'), '20101221')
@@ -511,7 +537,7 @@ def test_xpath_attr(self):
self.assertRaises(ExtractorError, xpath_attr, doc, 'div/p', 'y', fatal=True)
def test_smuggle_url(self):
data = {"ö": "ö", "abc": [3]}
data = {'ö': 'ö', 'abc': [3]}
url = 'https://foo.bar/baz?x=y#a'
smug_url = smuggle_url(url, data)
unsmug_url, unsmug_data = unsmuggle_url(smug_url)
@@ -663,6 +689,8 @@ def test_parse_duration(self):
self.assertEqual(parse_duration('P0Y0M0DT0H4M20.880S'), 260.88)
self.assertEqual(parse_duration('01:02:03:050'), 3723.05)
self.assertEqual(parse_duration('103:050'), 103.05)
self.assertEqual(parse_duration('1HR 3MIN'), 3780)
self.assertEqual(parse_duration('2hrs 3mins'), 7380)
def test_fix_xml_ampersands(self):
self.assertEqual(
@@ -756,28 +784,6 @@ def test_multipart_encode(self):
self.assertRaises(
ValueError, multipart_encode, {b'field': b'value'}, boundary='value')
def test_dict_get(self):
FALSE_VALUES = {
'none': None,
'false': False,
'zero': 0,
'empty_string': '',
'empty_list': [],
}
d = FALSE_VALUES.copy()
d['a'] = 42
self.assertEqual(dict_get(d, 'a'), 42)
self.assertEqual(dict_get(d, 'b'), None)
self.assertEqual(dict_get(d, 'b', 42), 42)
self.assertEqual(dict_get(d, ('a', )), 42)
self.assertEqual(dict_get(d, ('b', 'a', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', 'a', 'd', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', )), None)
self.assertEqual(dict_get(d, ('b', 'c', ), 42), 42)
for key, false_value in FALSE_VALUES.items():
self.assertEqual(dict_get(d, ('b', 'c', key, )), None)
self.assertEqual(dict_get(d, ('b', 'c', key, ), skip_false_values=False), false_value)
def test_merge_dicts(self):
self.assertEqual(merge_dicts({'a': 1}, {'b': 2}), {'a': 1, 'b': 2})
self.assertEqual(merge_dicts({'a': 1}, {'a': 2}), {'a': 1})
@@ -795,6 +801,11 @@ def test_encode_compat_str(self):
def test_parse_iso8601(self):
self.assertEqual(parse_iso8601('2014-03-23T23:04:26+0100'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26-07:00'), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26', timezone=dt.timedelta(hours=-7)), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26', timezone=NO_DEFAULT), None)
# default does not override timezone in date_str
self.assertEqual(parse_iso8601('2014-03-23T23:04:26-07:00', timezone=dt.timedelta(hours=-10)), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26+0000'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26Z'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26.1234Z'), 1395612266)
@@ -804,7 +815,7 @@ def test_parse_iso8601(self):
def test_strip_jsonp(self):
stripped = strip_jsonp('cb ([ {"id":"532cb",\n\n\n"x":\n3}\n]\n);')
d = json.loads(stripped)
self.assertEqual(d, [{"id": "532cb", "x": 3}])
self.assertEqual(d, [{'id': '532cb', 'x': 3}])
stripped = strip_jsonp('parseMetadata({"STATUS":"OK"})\n\n\n//epc')
d = json.loads(stripped)
@@ -939,24 +950,45 @@ def test_escape_rfc3986(self):
self.assertEqual(escape_rfc3986('foo bar'), 'foo%20bar')
self.assertEqual(escape_rfc3986('foo%20bar'), 'foo%20bar')
def test_escape_url(self):
def test_normalize_url(self):
self.assertEqual(
escape_url('http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavré_FD.mp4'),
'http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavre%CC%81_FD.mp4'
normalize_url('http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavré_FD.mp4'),
'http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavre%CC%81_FD.mp4',
)
self.assertEqual(
escape_url('http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erklärt/Das-Erste/Video?documentId=22673108&bcastId=5290'),
'http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erkl%C3%A4rt/Das-Erste/Video?documentId=22673108&bcastId=5290'
normalize_url('http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erklärt/Das-Erste/Video?documentId=22673108&bcastId=5290'),
'http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erkl%C3%A4rt/Das-Erste/Video?documentId=22673108&bcastId=5290',
)
self.assertEqual(
escape_url('http://тест.рф/фрагмент'),
'http://xn--e1aybc.xn--p1ai/%D1%84%D1%80%D0%B0%D0%B3%D0%BC%D0%B5%D0%BD%D1%82'
normalize_url('http://тест.рф/фрагмент'),
'http://xn--e1aybc.xn--p1ai/%D1%84%D1%80%D0%B0%D0%B3%D0%BC%D0%B5%D0%BD%D1%82',
)
self.assertEqual(
escape_url('http://тест.рф/абв?абв=абв#абв'),
'http://xn--e1aybc.xn--p1ai/%D0%B0%D0%B1%D0%B2?%D0%B0%D0%B1%D0%B2=%D0%B0%D0%B1%D0%B2#%D0%B0%D0%B1%D0%B2'
normalize_url('http://тест.рф/абв?абв=абв#абв'),
'http://xn--e1aybc.xn--p1ai/%D0%B0%D0%B1%D0%B2?%D0%B0%D0%B1%D0%B2=%D0%B0%D0%B1%D0%B2#%D0%B0%D0%B1%D0%B2',
)
self.assertEqual(escape_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0')
self.assertEqual(normalize_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0')
self.assertEqual(normalize_url('http://www.example.com/../a/b/../c/./d.html'), 'http://www.example.com/a/c/d.html')
def test_remove_dot_segments(self):
self.assertEqual(remove_dot_segments('/a/b/c/./../../g'), '/a/g')
self.assertEqual(remove_dot_segments('mid/content=5/../6'), 'mid/6')
self.assertEqual(remove_dot_segments('/ad/../cd'), '/cd')
self.assertEqual(remove_dot_segments('/ad/../cd/'), '/cd/')
self.assertEqual(remove_dot_segments('/..'), '/')
self.assertEqual(remove_dot_segments('/./'), '/')
self.assertEqual(remove_dot_segments('/./a'), '/a')
self.assertEqual(remove_dot_segments('/abc/./.././d/././e/.././f/./../../ghi'), '/ghi')
self.assertEqual(remove_dot_segments('/'), '/')
self.assertEqual(remove_dot_segments('/t'), '/t')
self.assertEqual(remove_dot_segments('t'), 't')
self.assertEqual(remove_dot_segments(''), '')
self.assertEqual(remove_dot_segments('/../a/b/c'), '/a/b/c')
self.assertEqual(remove_dot_segments('../a'), 'a')
self.assertEqual(remove_dot_segments('./a'), 'a')
self.assertEqual(remove_dot_segments('.'), '')
self.assertEqual(remove_dot_segments('////'), '////')
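# Hedged sketch (editor addition, not asserted to be the yt_dlp implementation
# itself): a split-based rendering of RFC 3986 section 5.2.4 that satisfies
# all of the cases above.
def _remove_dot_segments_sketch(path):
    output = []
    segments = path.split('/')
    for segment in segments:
        if segment == '.':
            continue  # '.' segments are dropped outright
        elif segment == '..':
            if output:
                output.pop()  # '..' removes the previous segment
        else:
            output.append(segment)
    if not segments[0] and (not output or output[0]):
        output.insert(0, '')  # preserve the leading slash
    if segments[-1] in ('.', '..'):
        output.append('')  # preserve the trailing slash
    return '/'.join(output)

assert _remove_dot_segments_sketch('/a/b/c/./../../g') == '/a/g'
assert _remove_dot_segments_sketch('/..') == '/'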
def test_js_to_json_vars_strings(self):
self.assertDictEqual(
@@ -978,7 +1010,7 @@ def test_js_to_json_vars_strings(self):
'e': 'false',
'f': '"false"',
'g': 'var',
}
},
)),
{
'null': None,
@@ -987,8 +1019,8 @@ def test_js_to_json_vars_strings(self):
'trueStr': 'true',
'false': False,
'falseStr': 'false',
'unresolvedVar': 'var'
}
'unresolvedVar': 'var',
},
)
self.assertDictEqual(
@@ -1004,14 +1036,14 @@ def test_js_to_json_vars_strings(self):
'b': '"123"',
'c': '1.23',
'd': '"1.23"',
}
},
)),
{
'int': 123,
'intStr': '123',
'float': 1.23,
'floatStr': '1.23',
}
},
)
self.assertDictEqual(
@@ -1027,14 +1059,14 @@ def test_js_to_json_vars_strings(self):
'b': '"{}"',
'c': '[]',
'd': '"[]"',
}
},
)),
{
'object': {},
'objectStr': '{}',
'array': [],
'arrayStr': '[]',
}
},
)
def test_js_to_json_realworld(self):
@@ -1080,7 +1112,7 @@ def test_js_to_json_realworld(self):
def test_js_to_json_edgecases(self):
on = js_to_json("{abc_def:'1\\'\\\\2\\\\\\'3\"4'}")
self.assertEqual(json.loads(on), {"abc_def": "1'\\2\\'3\"4"})
self.assertEqual(json.loads(on), {'abc_def': "1'\\2\\'3\"4"})
on = js_to_json('{"abc": true}')
self.assertEqual(json.loads(on), {'abc': True})
@@ -1112,9 +1144,9 @@ def test_js_to_json_edgecases(self):
'c': 0,
'd': 42.42,
'e': [],
'f': "abc",
'g': "",
'42': 42
'f': 'abc',
'g': '',
'42': 42,
})
on = js_to_json('["abc", "def",]')
@@ -1189,6 +1221,9 @@ def test_js_to_json_edgecases(self):
on = js_to_json('\'"\\""\'')
self.assertEqual(json.loads(on), '"""', msg='Unnecessary quote escape should be escaped')
on = js_to_json('[new Date("spam"), \'("eggs")\']')
self.assertEqual(json.loads(on), ['spam', '("eggs")'], msg='Date regex should match a single string')
def test_js_to_json_malformed(self):
self.assertEqual(js_to_json('42a1'), '42"a1"')
self.assertEqual(js_to_json('42a-1'), '42"a"-1')
@@ -1200,6 +1235,14 @@ def test_js_to_json_template_literal(self):
self.assertEqual(js_to_json('`${name}"${name}"`', {'name': '5'}), '"5\\"5\\""')
self.assertEqual(js_to_json('`${name}`', {}), '"name"')
def test_js_to_json_common_constructors(self):
self.assertEqual(json.loads(js_to_json('new Map([["a", 5]])')), {'a': 5})
self.assertEqual(json.loads(js_to_json('Array(5, 10)')), [5, 10])
self.assertEqual(json.loads(js_to_json('new Array(15,5)')), [15, 5])
self.assertEqual(json.loads(js_to_json('new Map([Array(5, 10),new Array(15,5)])')), {'5': 10, '15': 5})
self.assertEqual(json.loads(js_to_json('new Date("123")')), '123')
self.assertEqual(json.loads(js_to_json('new Date(\'2023-10-19\')')), '2023-10-19')
def test_extract_attributes(self):
self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'})
self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'})
@@ -1253,7 +1296,7 @@ def test_intlist_to_bytes(self):
def test_args_to_str(self):
self.assertEqual(
args_to_str(['foo', 'ba/r', '-baz', '2 be', '']),
'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""'
'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""',
)
def test_parse_filesize(self):
@@ -1336,10 +1379,10 @@ def test_is_html(self):
self.assertTrue(is_html( # UTF-8 with BOM
b'\xef\xbb\xbf<!DOCTYPE foo>\xaaa'))
self.assertTrue(is_html( # UTF-16-LE
b'\xff\xfe<\x00h\x00t\x00m\x00l\x00>\x00\xe4\x00'
b'\xff\xfe<\x00h\x00t\x00m\x00l\x00>\x00\xe4\x00',
))
self.assertTrue(is_html( # UTF-16-BE
b'\xfe\xff\x00<\x00h\x00t\x00m\x00l\x00>\x00\xe4'
b'\xfe\xff\x00<\x00h\x00t\x00m\x00l\x00>\x00\xe4',
))
self.assertTrue(is_html( # UTF-32-BE
b'\x00\x00\xFE\xFF\x00\x00\x00<\x00\x00\x00h\x00\x00\x00t\x00\x00\x00m\x00\x00\x00l\x00\x00\x00>\x00\x00\x00\xe4'))
@@ -1835,6 +1878,8 @@ def test_iri_to_uri(self):
def test_clean_podcast_url(self):
self.assertEqual(clean_podcast_url('https://www.podtrac.com/pts/redirect.mp3/chtbl.com/track/5899E/traffic.megaphone.fm/HSW7835899191.mp3'), 'https://traffic.megaphone.fm/HSW7835899191.mp3')
self.assertEqual(clean_podcast_url('https://play.podtrac.com/npr-344098539/edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3'), 'https://edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3')
self.assertEqual(clean_podcast_url('https://pdst.fm/e/2.gum.fm/chtbl.com/track/chrt.fm/track/34D33/pscrb.fm/rss/p/traffic.megaphone.fm/ITLLC7765286967.mp3?updated=1687282661'), 'https://traffic.megaphone.fm/ITLLC7765286967.mp3?updated=1687282661')
self.assertEqual(clean_podcast_url('https://pdst.fm/e/https://mgln.ai/e/441/www.buzzsprout.com/1121972/13019085-ep-252-the-deep-life-stack.mp3'), 'https://www.buzzsprout.com/1121972/13019085-ep-252-the-deep-life-stack.mp3')
def test_LazyList(self):
it = list(range(10))
@@ -1921,7 +1966,7 @@ def test_locked_file(self):
with locked_file(FILE, test_mode, False):
pass
except (BlockingIOError, PermissionError):
if not testing_write: # FIXME
if not testing_write: # FIXME: blocked read access
print(f'Known issue: Exclusive lock ({lock_mode}) blocks read access ({test_mode})')
continue
self.assertTrue(testing_write, f'{test_mode} is blocked by {lock_mode}')
@@ -1989,7 +2034,7 @@ def total(*x, **kwargs):
msg='int fn with expected_type int should give int')
self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
msg='int fn with wrong expected_type should give None')
self.assertEqual(try_call(total, args=(0, 1, 0, ), expected_type=int), 1,
self.assertEqual(try_call(total, args=(0, 1, 0), expected_type=int), 1,
msg='fn should accept arglist')
self.assertEqual(try_call(total, kwargs={'a': 0, 'b': 1, 'c': 0}, expected_type=int), 1,
msg='fn should accept kwargs')
@@ -2006,321 +2051,84 @@ def test_variadic(self):
warnings.simplefilter('ignore')
self.assertEqual(variadic('spam', allowed_types=[dict]), 'spam')
def test_traverse_obj(self):
_TEST_DATA = {
100: 100,
1.2: 1.2,
'str': 'str',
'None': None,
'...': ...,
'urls': [
{'index': 0, 'url': 'https://www.example.com/0'},
{'index': 1, 'url': 'https://www.example.com/1'},
],
'data': (
{'index': 2},
{'index': 3},
),
'dict': {},
}
def test_http_header_dict(self):
headers = HTTPHeaderDict()
headers['ytdl-test'] = b'0'
self.assertEqual(list(headers.items()), [('Ytdl-Test', '0')])
headers['ytdl-test'] = 1
self.assertEqual(list(headers.items()), [('Ytdl-Test', '1')])
headers['Ytdl-test'] = '2'
self.assertEqual(list(headers.items()), [('Ytdl-Test', '2')])
self.assertTrue('ytDl-Test' in headers)
self.assertEqual(str(headers), str(dict(headers)))
self.assertEqual(repr(headers), str(dict(headers)))
# Test base functionality
self.assertEqual(traverse_obj(_TEST_DATA, ('str',)), 'str',
msg='allow tuple path')
self.assertEqual(traverse_obj(_TEST_DATA, ['str']), 'str',
msg='allow list path')
self.assertEqual(traverse_obj(_TEST_DATA, (value for value in ("str",))), 'str',
msg='allow iterable path')
self.assertEqual(traverse_obj(_TEST_DATA, 'str'), 'str',
msg='single items should be treated as a path')
self.assertEqual(traverse_obj(_TEST_DATA, None), _TEST_DATA)
self.assertEqual(traverse_obj(_TEST_DATA, 100), 100)
self.assertEqual(traverse_obj(_TEST_DATA, 1.2), 1.2)
headers.update({'X-dlp': 'data'})
self.assertEqual(set(headers.items()), {('Ytdl-Test', '2'), ('X-Dlp', 'data')})
self.assertEqual(dict(headers), {'Ytdl-Test': '2', 'X-Dlp': 'data'})
self.assertEqual(len(headers), 2)
self.assertEqual(headers.copy(), headers)
headers2 = HTTPHeaderDict({'X-dlp': 'data3'}, **headers, **{'X-dlp': 'data2'})
self.assertEqual(set(headers2.items()), {('Ytdl-Test', '2'), ('X-Dlp', 'data2')})
self.assertEqual(len(headers2), 2)
headers2.clear()
self.assertEqual(len(headers2), 0)
# Test Ellipsis behavior
self.assertCountEqual(traverse_obj(_TEST_DATA, ...),
(item for item in _TEST_DATA.values() if item not in (None, {})),
msg='`...` should give all non discarded values')
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, ...)), _TEST_DATA['urls'][0].values(),
msg='`...` selection for dicts should select all values')
self.assertEqual(traverse_obj(_TEST_DATA, (..., ..., 'url')),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='nested `...` queries should work')
self.assertCountEqual(traverse_obj(_TEST_DATA, (..., ..., 'index')), range(4),
msg='`...` query result should be flattened')
self.assertEqual(traverse_obj(iter(range(4)), ...), list(range(4)),
msg='`...` should accept iterables')
# ensure we prefer latter headers
headers3 = HTTPHeaderDict({'Ytdl-TeSt': 1}, {'Ytdl-test': 2})
self.assertEqual(set(headers3.items()), {('Ytdl-Test', '2')})
del headers3['ytdl-tesT']
self.assertEqual(dict(headers3), {})
# Test function as key
self.assertEqual(traverse_obj(_TEST_DATA, lambda x, y: x == 'urls' and isinstance(y, list)),
[_TEST_DATA['urls']],
msg='function as query key should perform a filter based on (key, value)')
self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), {'str'},
msg='exceptions in the query function should be caught')
self.assertEqual(traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0), [0, 2],
msg='function key should accept iterables')
if __debug__:
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
traverse_obj(_TEST_DATA, lambda a: ...)
with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
traverse_obj(_TEST_DATA, lambda a, b, c: ...)
headers4 = HTTPHeaderDict({'ytdl-test': 'data;'})
self.assertEqual(set(headers4.items()), {('Ytdl-Test', 'data;')})
# Test set as key (transformation/type, like `expected_type`)
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper}, )), ['STR'],
msg='Function in set should be a transformation')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str})), ['str'],
msg='Type in set should be a type filter')
self.assertEqual(traverse_obj(_TEST_DATA, {dict}), _TEST_DATA,
msg='A single set should be wrapped into a path')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper})), ['STR'],
msg='Transformation function should not raise')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str_or_none})),
[item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
msg='Function in set should be a transformation')
if __debug__:
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, set())
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, {str.upper, str})
# common mistake: strip whitespace from values
# https://github.com/yt-dlp/yt-dlp/issues/8729
headers5 = HTTPHeaderDict({'ytdl-test': ' data; '})
self.assertEqual(set(headers5.items()), {('Ytdl-Test', 'data;')})
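# Illustrative sketch (editor addition, not part of this diff): HTTPHeaderDict
# normalizes key case on storage and compares keys case-insensitively on lookup.
from yt_dlp.utils.networking import HTTPHeaderDict
headers = HTTPHeaderDict({'content-TYPE': 'text/html'})
assert headers['Content-Type'] == 'text/html'
assert list(headers.items()) == [('Content-Type', 'text/html')]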
# Test `slice` as a key
_SLICE_DATA = [0, 1, 2, 3, 4]
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', slice(1))), None,
msg='slice on a dictionary should not throw')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1)), _SLICE_DATA[:1],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 2)), _SLICE_DATA[1:2],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 4, 2)), _SLICE_DATA[1:4:2],
msg='slice key should apply slice to sequence')
def test_extract_basic_auth(self):
assert extract_basic_auth('http://:foo.bar') == ('http://:foo.bar', None)
assert extract_basic_auth('http://foo.bar') == ('http://foo.bar', None)
assert extract_basic_auth('http://@foo.bar') == ('http://foo.bar', 'Basic Og==')
assert extract_basic_auth('http://:pass@foo.bar') == ('http://foo.bar', 'Basic OnBhc3M=')
assert extract_basic_auth('http://user:@foo.bar') == ('http://foo.bar', 'Basic dXNlcjo=')
assert extract_basic_auth('http://user:pass@foo.bar') == ('http://foo.bar', 'Basic dXNlcjpwYXNz')
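# Side note (editor addition, not part of this diff): the expected values are
# plain HTTP Basic credentials, i.e. `'Basic ' + base64(user:pass)`.
import base64
assert base64.b64encode(b'user:pass').decode() == 'dXNlcjpwYXNz'
assert base64.b64encode(b':pass').decode() == 'OnBhc3M='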
# Test alternative paths
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
msg='multiple `paths` should be treated as alternative paths')
self.assertEqual(traverse_obj(_TEST_DATA, 'str', 100), 'str',
msg='alternatives should exit early')
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'fail'), None,
msg='alternatives should return `default` if exhausted')
self.assertEqual(traverse_obj(_TEST_DATA, (..., 'fail'), 100), 100,
msg='alternatives should track their own branching return')
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', ...), ('data', ...)), list(_TEST_DATA['data']),
msg='alternatives on empty objects should search further')
# Test branch and path nesting
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')), ['https://www.example.com/0'],
msg='tuple as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', [3, 0], 'url')), ['https://www.example.com/0'],
msg='list as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ((1, 'fail'), (0, 'url')))), ['https://www.example.com/0'],
msg='double nesting in path should be treated as paths')
self.assertEqual(traverse_obj(['0', [1, 2]], [(0, 1), 0]), [1],
msg='do not fail early on branching')
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', ((1, ('fail', 'url')), (0, 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='triple nesting in path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ('fail', (..., 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='ellipsis as branch path start gets flattened')
# Test dictionary as key
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}), {0: 100, 1: 1.2},
msg='dict key should result in a dict with the same keys')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', 0, 'url')}),
{0: 'https://www.example.com/0'},
msg='dict key should allow paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', (3, 0), 'url')}),
{0: ['https://www.example.com/0']},
msg='tuple in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, 'fail'), (0, 'url')))}),
{0: ['https://www.example.com/0']},
msg='double nesting in dict path should be treated as paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}),
{0: ['https://www.example.com/1', 'https://www.example.com/0']},
msg='triple nesting in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
msg='remove `None` values when top level dict key fails')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
msg='use `default` if key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {},
msg='remove empty values when dict key')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: ...},
msg='use `default` when dict key and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}), {},
msg='remove empty values when nested dict key fails')
self.assertEqual(traverse_obj(None, {0: 'fail'}), {},
msg='default to dict if pruned')
self.assertEqual(traverse_obj(None, {0: 'fail'}, default=...), {0: ...},
msg='default to dict if pruned and default is given')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...), {0: {0: ...}},
msg='use nested `default` when nested dict key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {},
msg='remove key if branch in dict key not successful')
# Testing default parameter behavior
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail'), None,
msg='default value should be `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', 'fail', default=...), ...,
msg='chained fails should result in default')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', 'int'), 0,
msg='should not short-circuit on `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', default=1), 1,
msg='invalid dict key should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', default=1), 1,
msg='`None` is a deliberate sentinel and should become `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', 10)), None,
msg='`IndexError` should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=1), 1,
msg='if branched but not successful return `default` if defined, not `[]`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=None), None,
msg='if branched but not successful return `default` even if `default` is `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail')), [],
msg='if branched but not successful return `[]`, not `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
msg='if branched but object is empty return `[]`, not `default`')
self.assertEqual(traverse_obj(None, ...), [],
msg='if branched but object is `None` return `[]`, not `default`')
self.assertEqual(traverse_obj({0: None}, (0, ...)), [],
msg='if branched but state is `None` return `[]`, not `default`')
branching_paths = [
('fail', ...),
(..., 'fail'),
100 * ('fail',) + (...,),
(...,) + 100 * ('fail',),
@unittest.skipUnless(compat_os_name == 'nt', 'Only relevant on Windows')
def test_windows_escaping(self):
tests = [
'test"&',
'%CMDCMDLINE:~-1%&',
'a\nb',
'"',
'\\',
'!',
'^!',
'a \\ b',
'a \\" b',
'a \\ b\\',
# We replace \r with \n
('a\r\ra', 'a\n\na'),
]
for branching_path in branching_paths:
self.assertEqual(traverse_obj({}, branching_path), [],
msg='if branched but state is `None`, return `[]` (not `default`)')
self.assertEqual(traverse_obj({}, 'fail', branching_path), [],
msg='if branching in last alternative and previous did not match, return `[]` (not `default`)')
self.assertEqual(traverse_obj({0: 'x'}, 0, branching_path), 'x',
msg='if branching in last alternative and previous did match, return single value')
self.assertEqual(traverse_obj({0: 'x'}, branching_path, 0), 'x',
msg='if branching in first alternative and non-branching path does match, return single value')
self.assertEqual(traverse_obj({}, branching_path, 'fail'), None,
msg='if branching in first alternative and non-branching path does not match, return `default`')
# Testing expected_type behavior
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str),
'str', msg='accept matching `expected_type` type')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int),
None, msg='reject non matching `expected_type` type')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)),
'0', msg='transform type using type function')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0),
None, msg='wrap expected_type function in try_call')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str),
['str'], msg='eliminate items that expected_type fails on')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int),
{0: 100}, msg='type as expected_type should filter dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none),
{0: '100', 1: '1.2'}, msg='function as expected_type should transform dict values')
self.assertEqual(traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int),
1, msg='expected_type should not filter non final dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int),
{0: {0: 100}}, msg='expected_type should transform deep dict values')
self.assertEqual(traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)),
[{0: ...}, {0: ...}], msg='expected_type should transform branched dict values')
self.assertEqual(traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int),
[4], msg='expected_type regression for type matching in tuple branching')
self.assertEqual(traverse_obj(_TEST_DATA, ['data', ...], expected_type=int),
[], msg='expected_type regression for type matching in dict result')
def run_shell(args):
stdout, stderr, error = Popen.run(
args, text=True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert not stderr
assert not error
return stdout
# Test get_all behavior
_GET_ALL_DATA = {'key': [0, 1, 2]}
self.assertEqual(traverse_obj(_GET_ALL_DATA, ('key', ...), get_all=False), 0,
msg='if not `get_all`, return only first matching value')
self.assertEqual(traverse_obj(_GET_ALL_DATA, ..., get_all=False), [0, 1, 2],
msg='do not overflatten if not `get_all`')
for argument in tests:
if isinstance(argument, str):
expected = argument
else:
argument, expected = argument
# Test casesense behavior
_CASESENSE_DATA = {
'KeY': 'value0',
0: {
'KeY': 'value1',
0: {'KeY': 'value2'},
},
}
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'key'), None,
msg='dict keys should be case sensitive unless `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'keY',
casesense=False), 'value0',
msg='allow non matching key case if `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ('keY',)),
casesense=False), ['value1'],
msg='allow non matching key case in branch if `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ((0, 'keY'),)),
casesense=False), ['value2'],
msg='allow non matching key case in branch path if `casesense`')
# Test traverse_string behavior
_TRAVERSE_STRING_DATA = {'str': 'str', 1.2: 1.2}
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0)), None,
msg='do not traverse into string if not `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0),
traverse_string=True), 's',
msg='traverse into string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, (1.2, 1),
traverse_string=True), '.',
msg='traverse into converted data if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...),
traverse_string=True), 'str',
msg='`...` should result in string (same value) if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)),
traverse_string=True), 'sr',
msg='`slice` should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == "s"),
traverse_string=True), 'str',
msg='function should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)),
traverse_string=True), ['s', 'r'],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, ...), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, lambda x, y: True), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, slice(1)), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
# Test is_user_input behavior
_IS_USER_INPUT_DATA = {'range8': list(range(8))}
self.assertEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', '3'),
is_user_input=True), 3,
msg='allow for string indexing if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', '3:'),
is_user_input=True), tuple(range(8))[3:],
msg='allow for string slice if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':4:2'),
is_user_input=True), tuple(range(8))[:4:2],
msg='allow step in string slice if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':'),
is_user_input=True), range(8),
msg='`:` should be treated as `...` if `is_user_input`')
with self.assertRaises(TypeError, msg='too many params should result in error'):
traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':::'), is_user_input=True)
# Test re.Match as input obj
mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
self.assertEqual(traverse_obj(mobj, ...), [x for x in mobj.groups() if x is not None],
msg='`...` on a `re.Match` should give its `groups()`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 2)), ['0123', '3'],
msg='function on a `re.Match` should give groupno, value starting at 0')
self.assertEqual(traverse_obj(mobj, 'group'), '3',
msg='str key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 2), '3',
msg='int key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 'gRoUp', casesense=False), '3',
msg='str key on a `re.Match` should respect casesense')
self.assertEqual(traverse_obj(mobj, 'fail'), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 'gRoUpS', casesense=False), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 8), None,
msg='failing int key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 'group')), ['0123', '3'],
msg='function on a `re.Match` should give group name as well')
args = [sys.executable, '-c', 'import sys; print(end=sys.argv[1])', argument, 'end']
assert run_shell(args) == expected
assert run_shell(shell_quote(args, shell=True)) == expected
if __name__ == '__main__':

test/test_websockets.py Normal file

@@ -0,0 +1,439 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import time
import pytest
from test.helper import verify_address_availability
from yt_dlp.networking.common import Features, DEFAULT_TIMEOUT
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import http.client
import http.cookiejar
import http.server
import json
import random
import ssl
import threading
from yt_dlp import socks, traverse_obj
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.dependencies import websockets
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import (
CertificateVerifyError,
HTTPError,
ProxyError,
RequestError,
SSLError,
TransportError,
)
from yt_dlp.utils.networking import HTTPHeaderDict
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
def websocket_handler(websocket):
for message in websocket:
if isinstance(message, bytes):
if message == b'bytes':
return websocket.send('2')
elif isinstance(message, str):
if message == 'headers':
return websocket.send(json.dumps(dict(websocket.request.headers)))
elif message == 'path':
return websocket.send(websocket.request.path)
elif message == 'source_address':
return websocket.send(websocket.remote_address[0])
elif message == 'str':
return websocket.send('1')
return websocket.send(message)
def process_request(self, request):
if request.path.startswith('/gen_'):
status = http.HTTPStatus(int(request.path[5:]))
if 300 <= status.value <= 300:
return websockets.http11.Response(
status.value, status.phrase, websockets.datastructures.Headers([('Location', '/')]), b'')
return self.protocol.reject(status.value, status.phrase)
return self.protocol.accept(request)
def create_websocket_server(**ws_kwargs):
import websockets.sync.server
wsd = websockets.sync.server.serve(
websocket_handler, '127.0.0.1', 0,
process_request=process_request, open_timeout=2, **ws_kwargs)
ws_port = wsd.socket.getsockname()[1]
ws_server_thread = threading.Thread(target=wsd.serve_forever)
ws_server_thread.daemon = True
ws_server_thread.start()
return ws_server_thread, ws_port
def create_ws_websocket_server():
return create_websocket_server()
def create_wss_websocket_server():
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
return create_websocket_server(ssl_context=sslctx)
MTLS_CERT_DIR = os.path.join(TEST_DIR, 'testdata', 'certificate')
def create_mtls_wss_websocket_server():
certfn = os.path.join(TEST_DIR, 'testcert.pem')
cacertfn = os.path.join(MTLS_CERT_DIR, 'ca.crt')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.verify_mode = ssl.CERT_REQUIRED
sslctx.load_verify_locations(cafile=cacertfn)
sslctx.load_cert_chain(certfn, None)
return create_websocket_server(ssl_context=sslctx)
def ws_validate_and_send(rh, req):
rh.validate(req)
max_tries = 3
for i in range(max_tries):
try:
return rh.send(req)
except TransportError as e:
if i < (max_tries - 1) and 'connection closed during handshake' in str(e):
# websockets server sometimes hangs on new connections
continue
raise
@pytest.mark.skipif(not websockets, reason='websockets must be installed to test websocket request handlers')
@pytest.mark.parametrize('handler', ['Websockets'], indirect=True)
class TestWebsocketsRequestHandlerConformance:
@classmethod
def setup_class(cls):
cls.ws_thread, cls.ws_port = create_ws_websocket_server()
cls.ws_base_url = f'ws://127.0.0.1:{cls.ws_port}'
cls.wss_thread, cls.wss_port = create_wss_websocket_server()
cls.wss_base_url = f'wss://127.0.0.1:{cls.wss_port}'
cls.bad_wss_thread, cls.bad_wss_port = create_websocket_server(ssl_context=ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
cls.bad_wss_host = f'wss://127.0.0.1:{cls.bad_wss_port}'
cls.mtls_wss_thread, cls.mtls_wss_port = create_mtls_wss_websocket_server()
cls.mtls_wss_base_url = f'wss://127.0.0.1:{cls.mtls_wss_port}'
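# Four servers are spun up once per class: plain ws, TLS (wss) with a
# self-signed cert, a deliberately broken TLS endpoint (no cert loaded),
# and an mTLS endpoint that demands a client certificate.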
def test_basic_websockets(self, handler):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
assert 'upgrade' in ws.headers
assert ws.status == 101
ws.send('foo')
assert ws.recv() == 'foo'
ws.close()
# https://www.rfc-editor.org/rfc/rfc6455.html#section-5.6
@pytest.mark.parametrize('msg,opcode', [('str', 1), (b'bytes', 2)])
def test_send_types(self, handler, msg, opcode):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send(msg)
assert int(ws.recv()) == opcode
ws.close()
def test_verify_cert(self, handler):
with handler() as rh:
with pytest.raises(CertificateVerifyError):
ws_validate_and_send(rh, Request(self.wss_base_url))
with handler(verify=False) as rh:
ws = ws_validate_and_send(rh, Request(self.wss_base_url))
assert ws.status == 101
ws.close()
def test_ssl_error(self, handler):
with handler(verify=False) as rh:
with pytest.raises(SSLError, match=r'ssl(?:v3|/tls) alert handshake failure') as exc_info:
ws_validate_and_send(rh, Request(self.bad_wss_host))
assert not issubclass(exc_info.type, CertificateVerifyError)
@pytest.mark.parametrize('path,expected', [
# Unicode characters should be encoded with uppercase percent-encoding
('/中文', '/%E4%B8%AD%E6%96%87'),
# don't normalize existing percent encodings
('/%c7%9f', '/%c7%9f'),
])
def test_percent_encode(self, handler, path, expected):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(f'{self.ws_base_url}{path}'))
ws.send('path')
assert ws.recv() == expected
assert ws.status == 101
ws.close()
def test_remove_dot_segments(self, handler):
with handler() as rh:
# This isn't a comprehensive test,
# but it should be enough to check whether the handler is removing dot segments
ws = ws_validate_and_send(rh, Request(f'{self.ws_base_url}/a/b/./../../test'))
assert ws.status == 101
ws.send('path')
assert ws.recv() == '/test'
ws.close()
# We are restricted to known HTTP status codes in http.HTTPStatus
# Redirects are not supported for websockets
@pytest.mark.parametrize('status', (200, 204, 301, 302, 303, 400, 500, 511))
def test_raise_http_error(self, handler, status):
with handler() as rh:
with pytest.raises(HTTPError) as exc_info:
ws_validate_and_send(rh, Request(f'{self.ws_base_url}/gen_{status}'))
assert exc_info.value.status == status
@pytest.mark.parametrize('params,extensions', [
({'timeout': sys.float_info.min}, {}),
({}, {'timeout': sys.float_info.min}),
])
def test_read_timeout(self, handler, params, extensions):
with handler(**params) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(rh, Request(self.ws_base_url, extensions=extensions))
def test_connect_timeout(self, handler):
# nothing should be listening on this port
connect_timeout_url = 'ws://10.255.255.255'
with handler(timeout=0.01) as rh, pytest.raises(TransportError):
now = time.time()
ws_validate_and_send(rh, Request(connect_timeout_url))
assert time.time() - now < DEFAULT_TIMEOUT
# Per request timeout, should override handler timeout
request = Request(connect_timeout_url, extensions={'timeout': 0.01})
with handler() as rh, pytest.raises(TransportError):
now = time.time()
ws_validate_and_send(rh, request)
assert time.time() - now < DEFAULT_TIMEOUT
def test_cookies(self, handler):
cookiejar = YoutubeDLCookieJar()
cookiejar.set_cookie(http.cookiejar.Cookie(
version=0, name='test', value='ytdlp', port=None, port_specified=False,
domain='127.0.0.1', domain_specified=True, domain_initial_dot=False, path='/',
path_specified=True, secure=False, expires=None, discard=False, comment=None,
comment_url=None, rest={}))
with handler(cookiejar=cookiejar) as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
assert json.loads(ws.recv())['cookie'] == 'test=ytdlp'
ws.close()
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
assert 'cookie' not in json.loads(ws.recv())
ws.close()
ws = ws_validate_and_send(rh, Request(self.ws_base_url, extensions={'cookiejar': cookiejar}))
ws.send('headers')
assert json.loads(ws.recv())['cookie'] == 'test=ytdlp'
ws.close()
def test_source_address(self, handler):
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(source_address=source_address) as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('source_address')
assert source_address == ws.recv()
ws.close()
def test_response_url(self, handler):
with handler() as rh:
url = f'{self.ws_base_url}/something'
ws = ws_validate_and_send(rh, Request(url))
assert ws.url == url
ws.close()
def test_request_headers(self, handler):
with handler(headers=HTTPHeaderDict({'test1': 'test', 'test2': 'test2'})) as rh:
# Global Headers
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
headers = HTTPHeaderDict(json.loads(ws.recv()))
assert headers['test1'] == 'test'
ws.close()
# Per request headers, merged with global
ws = ws_validate_and_send(rh, Request(
self.ws_base_url, headers={'test2': 'changed', 'test3': 'test3'}))
ws.send('headers')
headers = HTTPHeaderDict(json.loads(ws.recv()))
assert headers['test1'] == 'test'
assert headers['test2'] == 'changed'
assert headers['test3'] == 'test3'
ws.close()
@pytest.mark.parametrize('client_cert', (
{'client_certificate': os.path.join(MTLS_CERT_DIR, 'clientwithkey.crt')},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'client.crt'),
'client_certificate_key': os.path.join(MTLS_CERT_DIR, 'client.key'),
},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'clientwithencryptedkey.crt'),
'client_certificate_password': 'foobar',
},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'client.crt'),
'client_certificate_key': os.path.join(MTLS_CERT_DIR, 'clientencrypted.key'),
'client_certificate_password': 'foobar',
},
))
def test_mtls(self, handler, client_cert):
with handler(
# Disable client-side validation of unacceptable self-signed testcert.pem
# The test is of a check on the server side, so unaffected
verify=False,
client_cert=client_cert,
) as rh:
ws_validate_and_send(rh, Request(self.mtls_wss_base_url)).close()
def test_request_disable_proxy(self, handler):
for proxy_proto in handler._SUPPORTED_PROXY_SCHEMES or ['ws']:
# Given handler is configured with a proxy
with handler(proxies={'ws': f'{proxy_proto}://10.255.255.255'}, timeout=5) as rh:
# When a proxy is explicitly set to None for the request
ws = ws_validate_and_send(rh, Request(self.ws_base_url, proxies={'ws': None}))
# Then no proxy should be used
assert ws.status == 101
ws.close()
@pytest.mark.skip_handlers_if(
lambda _, handler: Features.NO_PROXY not in handler._SUPPORTED_FEATURES, 'handler does not support NO_PROXY')
def test_noproxy(self, handler):
for proxy_proto in handler._SUPPORTED_PROXY_SCHEMES or ['ws']:
# Given the handler is configured with a proxy
with handler(proxies={'ws': f'{proxy_proto}://10.255.255.255'}, timeout=5) as rh:
for no_proxy in (f'127.0.0.1:{self.ws_port}', '127.0.0.1', 'localhost'):
# When request no proxy includes the request url host
ws = ws_validate_and_send(rh, Request(self.ws_base_url, proxies={'no': no_proxy}))
# Then the proxy should not be used
assert ws.status == 101
ws.close()
@pytest.mark.skip_handlers_if(
lambda _, handler: Features.ALL_PROXY not in handler._SUPPORTED_FEATURES, 'handler does not support ALL_PROXY')
def test_allproxy(self, handler):
supported_proto = traverse_obj(handler._SUPPORTED_PROXY_SCHEMES, 0, default='ws')
# This is a bit of a hacky test, but it should be enough to check whether the handler is using the proxy.
# 0.1s might not be enough of a timeout if proxy is not used in all cases, but should still get failures.
with handler(proxies={'all': f'{supported_proto}://10.255.255.255'}, timeout=0.1) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(rh, Request(self.ws_base_url)).close()
with handler(timeout=0.1) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(
rh, Request(self.ws_base_url, proxies={'all': f'{supported_proto}://10.255.255.255'})).close()
def create_fake_ws_connection(raised):
import websockets.sync.client
class FakeWsConnection(websockets.sync.client.ClientConnection):
def __init__(self, *args, **kwargs):
class FakeResponse:
body = b''
headers = {}
status_code = 101
reason_phrase = 'test'
self.response = FakeResponse()
def send(self, *args, **kwargs):
raise raised()
def recv(self, *args, **kwargs):
raise raised()
def close(self, *args, **kwargs):
return
return FakeWsConnection()
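# FakeWsConnection skips the real handshake entirely; its only job is to
# raise the injected exception from send()/recv() so the mapping tests in
# TestWebsocketsRequestHandler can assert on the translated yt-dlp errors.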
@pytest.mark.parametrize('handler', ['Websockets'], indirect=True)
class TestWebsocketsRequestHandler:
@pytest.mark.parametrize('raised,expected', [
# https://websockets.readthedocs.io/en/stable/reference/exceptions.html
(lambda: websockets.exceptions.InvalidURI(msg='test', uri='test://'), RequestError),
# Requires a response object. Should be covered by HTTP error tests.
# (lambda: websockets.exceptions.InvalidStatus(), TransportError),
(lambda: websockets.exceptions.InvalidHandshake(), TransportError),
# These are subclasses of InvalidHandshake
(lambda: websockets.exceptions.InvalidHeader(name='test'), TransportError),
(lambda: websockets.exceptions.NegotiationError(), TransportError),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError),
(lambda: TimeoutError(), TransportError),
# These may be raised by our create_connection implementation, which should also be caught
(lambda: OSError(), TransportError),
(lambda: ssl.SSLError(), SSLError),
(lambda: ssl.SSLCertVerificationError(), CertificateVerifyError),
(lambda: socks.ProxyError(), ProxyError),
])
def test_request_error_mapping(self, handler, monkeypatch, raised, expected):
import websockets.sync.client
import yt_dlp.networking._websockets
with handler() as rh:
def fake_connect(*args, **kwargs):
raise raised()
monkeypatch.setattr(yt_dlp.networking._websockets, 'create_connection', lambda *args, **kwargs: None)
monkeypatch.setattr(websockets.sync.client, 'connect', fake_connect)
with pytest.raises(expected) as exc_info:
rh.send(Request('ws://fake-url'))
assert exc_info.type is expected
@pytest.mark.parametrize('raised,expected,match', [
# https://websockets.readthedocs.io/en/stable/reference/sync/client.html#websockets.sync.client.ClientConnection.send
(lambda: websockets.exceptions.ConnectionClosed(None, None), TransportError, None),
(lambda: RuntimeError(), TransportError, None),
(lambda: TimeoutError(), TransportError, None),
(lambda: TypeError(), RequestError, None),
(lambda: socks.ProxyError(), ProxyError, None),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError, None),
])
def test_ws_send_error_mapping(self, handler, monkeypatch, raised, expected, match):
from yt_dlp.networking._websockets import WebsocketsResponseAdapter
ws = WebsocketsResponseAdapter(create_fake_ws_connection(raised), url='ws://fake-url')
with pytest.raises(expected, match=match) as exc_info:
ws.send('test')
assert exc_info.type is expected
@pytest.mark.parametrize('raised,expected,match', [
# https://websockets.readthedocs.io/en/stable/reference/sync/client.html#websockets.sync.client.ClientConnection.recv
(lambda: websockets.exceptions.ConnectionClosed(None, None), TransportError, None),
(lambda: RuntimeError(), TransportError, None),
(lambda: TimeoutError(), TransportError, None),
(lambda: socks.ProxyError(), ProxyError, None),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError, None),
])
def test_ws_recv_error_mapping(self, handler, monkeypatch, raised, expected, match):
from yt_dlp.networking._websockets import WebsocketsResponseAdapter
ws = WebsocketsResponseAdapter(create_fake_ws_connection(raised), url='ws://fake-url')
with pytest.raises(expected, match=match) as exc_info:
ws.recv()
assert exc_info.type is expected
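Taken together, the conformance tests above reduce to one request flow: validate, send (handshake), exchange messages, close. Below is a minimal sketch of that flow (illustrative only, not part of the diff), assuming `websockets` is installed; the port is made up, and the 'str'/'1' exchange mirrors the echo handler defined earlier:

# Illustrative sketch: drive the Websockets request handler directly,
# mirroring what ws_validate_and_send() does in the tests above.
from yt_dlp.networking import Request
from yt_dlp.networking._websockets import WebsocketsRH

with WebsocketsRH() as rh:
    req = Request('ws://127.0.0.1:8765', extensions={'timeout': 2})  # hypothetical port
    rh.validate(req)
    ws = rh.send(req)   # performs the handshake; raises HTTPError/TransportError on failure
    ws.send('str')      # the echo server above would answer '1'
    assert ws.recv() == '1'
    ws.close()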

test/test_youtube_misc.py

@@ -13,7 +13,7 @@
class TestYoutubeMisc(unittest.TestCase):
def test_youtube_extract(self):
-assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
+assertExtractId = lambda url, video_id: self.assertEqual(YoutubeIE.extract_id(url), video_id)
assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')

test/test_youtube_signature.py

@@ -46,17 +46,17 @@
(
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflBb0OQx.js',
84,
-'123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQ0STUVWXYZ!"#$%&\'()*+,@./:;<=>'
+'123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQ0STUVWXYZ!"#$%&\'()*+,@./:;<=>',
),
(
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vfl9FYC6l.js',
83,
-'123456789abcdefghijklmnopqr0tuvwxyzABCDETGHIJKLMNOPQRS>UVWXYZ!"#$%&\'()*+,-./:;<=F'
+'123456789abcdefghijklmnopqr0tuvwxyzABCDETGHIJKLMNOPQRS>UVWXYZ!"#$%&\'()*+,-./:;<=F',
),
(
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflCGk6yw/html5player.js',
'4646B5181C6C3020DF1D9C7FCFEA.AD80ABF70C39BD369CCCAE780AFBB98FA6B6CB42766249D9488C288',
-'82C8849D94266724DC6B6AF89BBFA087EACCD963.B93C07FBA084ACAEFCF7C9D1FD0203C6C1815B6B'
+'82C8849D94266724DC6B6AF89BBFA087EACCD963.B93C07FBA084ACAEFCF7C9D1FD0203C6C1815B6B',
),
(
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js',
@@ -163,6 +163,14 @@
'https://www.youtube.com/s/player/b7910ca8/player_ias.vflset/en_US/base.js',
'_hXMCwMt9qE310D', 'LoZMgkkofRMCZQ',
),
(
'https://www.youtube.com/s/player/590f65a6/player_ias.vflset/en_US/base.js',
'1tm7-g_A9zsI8_Lay_', 'xI4Vem4Put_rOg',
),
(
'https://www.youtube.com/s/player/b22ef6e7/player_ias.vflset/en_US/base.js',
'b6HcntHGkvBLk_FRf', 'kNPW6A7FyP2l8A',
),
]
@@ -207,7 +215,7 @@ def tearDown(self):
def t_factory(name, sig_func, url_pattern):
def make_tfunc(url, sig_input, expected_sig):
m = url_pattern.match(url)
-assert m, '%r should follow URL format' % url
+assert m, f'{url!r} should follow URL format'
test_id = m.group('id')
def test_func(self):
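For context, `t_factory` generates one test per tuple above: it downloads the player JS at the given URL, extracts the signature or n-sig function, and executes it with yt-dlp's JS interpreter before comparing against the expected output. A rough sketch of that last step, using a stand-in function body since real ones come from `base.js`:

# Illustrative only: evaluate a (fake) n-sig style transform with JSInterpreter.
from yt_dlp.jsinterp import JSInterpreter

jsi = JSInterpreter('function decrypt(n){return n.split("").reverse().join("");}')
assert jsi.call_function('decrypt', 'abc') == 'cba'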

test/versions.json

@@ -1,34 +0,0 @@
{
"latest": "2013.01.06",
"signature": "72158cdba391628569ffdbea259afbcf279bbe3d8aeb7492690735dc1cfa6afa754f55c61196f3871d429599ab22f2667f1fec98865527b32632e7f4b3675a7ef0f0fbe084d359256ae4bba68f0d33854e531a70754712f244be71d4b92e664302aa99653ee4df19800d955b6c4149cd2b3f24288d6e4b40b16126e01f4c8ce6",
"versions": {
"2013.01.02": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl",
"f5b502f8aaa77675c4884938b1e4871ebca2611813a0c0e74f60c0fbd6dcca6b"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl.exe",
"75fa89d2ce297d102ff27675aa9d92545bbc91013f52ec52868c069f4f9f0422"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.02/youtube-dl-2013.01.02.tar.gz",
"6a66d022ac8e1c13da284036288a133ec8dba003b7bd3a5179d0c0daca8c8196"
]
},
"2013.01.06": {
"bin": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl",
"64b6ed8865735c6302e836d4d832577321b4519aa02640dc508580c1ee824049"
],
"exe": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl.exe",
"58609baf91e4389d36e3ba586e21dab882daaaee537e4448b1265392ae86ff84"
],
"tar": [
"http://youtube-dl.org/downloads/2013.01.06/youtube-dl-2013.01.06.tar.gz",
"fe77ab20a95d980ed17a659aa67e371fdd4d656d19c4c7950e7b720b0c2f1a86"
]
}
}
}

View File

@@ -1 +1 @@
-@py -bb -Werror -Xdev "%~dp0yt_dlp\__main__.py" %*
+@py -Werror -Xdev "%~dp0yt_dlp\__main__.py" %*
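For context: `-bb` makes CPython raise errors on implicit `bytes`/`str` comparisons; dropping it keeps the `-Werror -Xdev` strictness while no longer flagging such comparisons.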

Some files were not shown because too many files have changed in this diff.