mirror of https://github.com/yt-dlp/yt-dlp synced 2025-12-16 14:15:41 +07:00

Compare commits


281 Commits

Author SHA1 Message Date
pukkandan
91f071af60 Release 2021.12.01 2021-12-01 05:46:15 +05:30
pukkandan
2aa5e2cc01 Ensure same config file is not loaded multiple times 2021-12-01 03:35:31 +05:30
j54vc1bk
1bad50eced [CableAV] Add extractor (#1842)
Authored by: j54vc1bk
2021-12-01 00:49:47 +05:30
u-spec-png
ac0efabf12 [Bilibili] Fix title extraction (#1716)
Closes #1714
Authored by: u-spec-png
2021-11-30 21:48:46 +05:30
Ashish Gupta
73f035e1fe [Cleanup] Remove some unnecessary groups in regexes (#1738)
Authored by: Ashish0804
2021-11-30 21:44:47 +05:30
nyuszika7h
0cbed930c8 [trovo] Fix extractor (#1818)
Closes #1782

Authored by: nyuszika7h
2021-11-30 21:41:07 +05:30
Ashish Gupta
5118d2ec58 [DiscoveryPlus] Rewrite extractors (see desc) (#1766)
* Add `DiscoveryPlusItalyShowIE`
* Use `uuid.uuid4().hex` for device id so no cookies are required
* Fix dash formats not being downloaded
* Extract subtitles from manifests
* Move all extractors to one file and restructure inheritances

Authored by: Ashish0804, pukkandan
2021-11-30 21:39:15 +05:30
pukkandan
717216b093 Validate --get-bypass-country
Closes #1834
2021-11-30 01:02:33 +05:30
pukkandan
5c22c63da3 Fix --trim-filename when filename has .
Closes #1837
2021-11-30 00:14:18 +05:30
pukkandan
ee8dd27a73 [cleanup] Add deprecation warnings 2021-11-29 23:34:33 +05:30
pukkandan
f304da8a29 [cleanup] Misc cleanup
Closes #1805, closes #1800
2021-11-29 23:34:33 +05:30
pukkandan
06dfe0a0a2 [cleanup] Refactor JSInterpreter._seperate 2021-11-29 22:56:35 +05:30
pukkandan
75b725a7cc [build] Use workflow_dispatch for release 2021-11-29 22:52:01 +05:30
pukkandan
13ab5fa586 [build] Fix MacOS Build
Closes #1624
2021-11-29 22:52:01 +05:30
pukkandan
36eaf3039a [build] Save Git HEAD at release alongside version info 2021-11-29 22:52:01 +05:30
pukkandan
f2ebc5c7be Option --wait-for-video to wait for scheduled streams 2021-11-29 22:52:01 +05:30
pukkandan
b222c27145 Option --break-per-input to apply --break-on... to each input URL 2021-11-29 22:52:01 +05:30
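A minimal sketch of the two new options above (the interval and URL are placeholder values; the CLI is invoked via subprocess purely for illustration):

```
# Sketch: wait for a scheduled stream to go live, and apply --break-on-* conditions
# per input URL rather than across the whole command. Values are hypothetical.
import subprocess

subprocess.run([
    'yt-dlp',
    '--wait-for-video', '60',      # minimum seconds to wait between availability checks
    '--break-per-input',           # make --break-on-existing/--break-on-reject act per URL
    'https://www.youtube.com/watch?v=BaW_jenozKc',
], check=False)
```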
pukkandan
5e5be0c0b2 Fix --break-on-archive when pre-checking 2021-11-29 22:52:01 +05:30
pukkandan
7578d77d8c [downloader] Add colors to download progress 2021-11-29 22:51:18 +05:30
pukkandan
b29165267f [youtube] Decrypt n-sig for URLs with ratebypass
Closes #1796
2021-11-29 22:51:18 +05:30
pukkandan
bc104778d6 [vimeo] Sort http formats higher
Closes #1821
2021-11-29 22:51:18 +05:30
MinePlayersPE
d298d33fe6 [Instagram] Display more login errors (#1822)
Authored by: MinePlayersPE
2021-11-28 17:59:55 +05:30
Deer-Spangle
bf57cfa8b7 [RedGifs] Add Search and User extractors (#1808)
Authored by: Deer-Spangle
2021-11-28 10:34:06 +05:30
std-move
3c2208f82d [NovaEmbed] Fix extractor (#1814)
Authored by: std-move
2021-11-28 00:59:06 +05:30
shirt
93e597ba28 Fix logic error in report_unplayable_conflict 2021-11-27 12:13:08 -05:00
pukkandan
b28cdcc0e4 [tiktok:user] Set webpage_url correctly
Closes #1802
2021-11-27 19:25:28 +05:30
DEvmIb
a33c0d9c5d [twitch:vod] Extract live status (#1722)
Authored by: DEvmIb
2021-11-27 19:25:24 +05:30
pukkandan
75689fe59b Ensure directory exists when checking formats 2021-11-27 19:21:48 +05:30
pukkandan
5ce1d13eba [EmbedSubtitles] Slightly relax duration check
and related cleanup
Closes #1385
2021-11-27 19:21:47 +05:30
pukkandan
e04b003e64 [FixupM3u8] Fixup MPEG-TS in MP4 container
Closes #1701, https://github.com/ytdl-org/youtube-dl/issues/26410
2021-11-27 19:21:47 +05:30
Grabien
909b0d66f4 [Senate.gov] Add SenateGovIE and fix SenateISVPIE (#1435)
Authored by: Grabien, pukkandan
2021-11-27 16:07:45 +05:30
u-spec-png
dfd78699f5 [Aljazeera] Fix extractor (#1577)
Closes #1518
Authored by: u-spec-png
2021-11-27 13:42:56 +05:30
mpeter50
639f80c1f9 [Twitch:vod] Add chapters (#1515)
Authored by: mpeter50
2021-11-27 13:30:58 +05:30
gustaf
896a88c5c6 [Tvplayhome] Fix extractor (#1357)
Authored by: pukkandan, 18928172992817182 (gustaf)
2021-11-27 12:54:48 +05:30
chio0hai
4e4ba1d75f [redgifs] Add extractor (#1631)
Closes #1504
Authored by: chio0hai
2021-11-27 12:40:29 +05:30
Yakabuff
2abf081554 [xvideos] Fix extractor (#1799)
Closes #1788 
Authored by: Yakabuff
2021-11-27 12:34:51 +05:30
Henrik Heimbuerger
359df0fc42 [nebula] Add NebulaCollectionIE and rewrite extractor (#1694)
Closes #1690
Authored by: hheimbuerger
2021-11-27 12:21:32 +05:30
Ashish Gupta
3938a9212c [CPTwentyFour] Add extractor (#1769)
Closes #1768
Authored by: Ashish0804
2021-11-27 12:01:42 +05:30
shirt
cf1f13b817 [generic] Support mpd manifests without extension (#1806)
Authored by: shirt-dev
2021-11-27 10:45:59 +05:30
Grabien
18d6dd4e01 [extractor/breitbart] Breitbart.com website support (#1434)
Authored by: Grabien
2021-11-27 00:30:04 +05:30
cntrl-s
883ecd5494 Streamff extractor (#1736)
Closes #1359 
Authored by: cntrl-s
2021-11-27 00:05:39 +05:30
pukkandan
eb56d132d2 [cleanup,instagram] Refactor extractors
Closes #1561
2021-11-24 18:24:01 +05:30
Aurora
17b4540662 [radiozet] Add extractor (#1593)
Authored by: 0xA7404A (Aurora)
2021-11-24 16:17:53 +05:30
Tim
da27aeea5c [ITV] Fix extractor (#1776)
Closes #1775
Authored by: staubichsauger
2021-11-24 15:38:58 +05:30
Sipherdrakon
fec41d17a5 [MTV] Improve mgid extraction (#1713)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/30149
Fixes: #713, #1580, https://github.com/ytdl-org/youtube-dl/issues/30139

Authored by: Sipherdrakon, kikuyan
2021-11-24 13:31:49 +05:30
pukkandan
a61fd4cf6f [youtube:search_url] Add playlist/channel support
Closes #1213, #1214
2021-11-24 09:33:15 +05:30
pukkandan
a6213a4925 [cleanup,youtube] Reorganize Tab and Search extractor inheritances 2021-11-24 09:28:59 +05:30
pukkandan
9941a1e127 [PatreonUser] Do not capture RSS URLs
Closes #1777
2021-11-24 08:28:36 +05:30
pukkandan
ff51ed588f Clarify video/audio-only formats in -F
Related: #1759
2021-11-23 20:42:20 +05:30
pukkandan
57dbe8077f [jsinterp] Fix splice to handle float
Needed for new youtube js player f1ca6900
Closes #1767
2021-11-23 20:34:34 +05:30
pukkandan
e5d731f35d [tv2] Expand valid URL
Closes #1764
2021-11-23 20:34:25 +05:30
pukkandan
d52cd2f5cd [sbs] Fix for movies and livestreams
Closes #1640
2021-11-23 13:30:40 +05:30
pukkandan
bc8ab44ea0 [itv] Fix for Python 3.6/3.7
Closes #1758
2021-11-23 13:30:40 +05:30
pukkandan
8f122fa070 [extractor] Extract average_rating from JSON-LD
Eg: Crunchyroll
2021-11-23 13:14:06 +05:30
pukkandan
14a086058a [ARDBetaMediathek] Handle new URLs
Adapted from 8562218350
Closes #1601
2021-11-23 02:33:41 +05:30
Zirro
0e6b018a10 Ensure path for link files exists (#1755)
Authored by: Zirro
2021-11-23 01:41:49 +05:30
pukkandan
f7b558df4d [mediaklikk] Expand valid URL
Partial fix for #1409
2021-11-23 01:29:11 +05:30
pukkandan
1ee34c76bb [vimeo] Add fallback for config URL
Closes #1662
2021-11-23 01:29:11 +05:30
pukkandan
234416e4bf [downloader/ffmpeg] Fix for direct videos inside mpd manifests
Closes #1751
2021-11-23 01:29:10 +05:30
pukkandan
c98d4df23b [WDR] Expand valid URL
Closes #1749
2021-11-23 01:29:08 +05:30
4a1e2y5
849d699a8b [xvideos] Detect embed URLs (#1729)
Authored by: 4a1e2y5
2021-11-21 04:54:05 +05:30
Ashish Gupta
77fcc65158 [CozyTV] Add extractor (#1727)
Authored by: Ashish0804
2021-11-20 14:55:14 +05:30
aarubui
545ad64988 [willow] Add extractor (#1723)
Authored by: aarubui
2021-11-20 09:33:43 +05:30
pukkandan
d76991ab07 Fix --check-formats for mhtml
Closes #1709
2021-11-20 08:33:55 +05:30
pukkandan
282f570918 [utils] Fix error when copying LazyList 2021-11-20 08:33:55 +05:30
pukkandan
c07a39ae8e [utils] Fix PagedList
Bug in d8cf8d97a8
2021-11-20 08:33:53 +05:30
pukkandan
c5e3f84972 [utils] Allow alignment in render_table
and add tests
2021-11-20 08:33:51 +05:30
nyuszika7h
c45b87419f [bbc] Get all available formats (#1717)
Authored by: nyuszika7h
2021-11-19 20:27:01 +05:30
Paper
7333296ff5 [VidLii] Add 720p support (#1681)
Authored by: mrpapersonic
2021-11-19 11:41:36 +05:30
The Hatsune Daishi
a04e005521 [AES] Add ECB mode (#1686)
Needed for #1688
Authored by: nao20010128nao
2021-11-19 07:24:10 +05:30
nyuszika7h
6b993ca765 [hls] Better FairPlay DRM detection (#1661)
Authored by: nyuszika7h
2021-11-19 07:19:51 +05:30
pukkandan
dd2a987d3f [tests] Fix tests 2021-11-19 06:30:25 +05:30
pukkandan
9222c38182 [cleanup] Minor cleanup
Closes #1696, Closes #1673
2021-11-19 05:36:28 +05:30
pukkandan
467b6b8387 [ExtractAudio] Support alac
Closes #1707
2021-11-19 05:20:13 +05:30
pukkandan
8863c8f09e [soundcloud:search] Fix pagination 2021-11-19 04:23:13 +05:30
Joshua Lochner
e16fefd869 [Reddit] Add support for 1080p videos (#1682)
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29565

Authored by: xenova
2021-11-19 04:18:48 +05:30
zulaport
c6118ca2cc [Stripchat] Add extractor (#1668)
Authored by: zulaport
2021-11-19 04:15:13 +05:30
Paul Wise
764f5de2f4 [blogger] Add extractor (#1629)
Authored by: pabs3
2021-11-19 03:45:41 +05:30
Paul Wise
cfcaf64a4b [rtrfm] Add extractor (#1628)
Authored by: pabs3
2021-11-19 03:44:38 +05:30
u-spec-png
402cd603a4 [LinkedIn] Add extractor (#1597)
Closes #1206 
Authored by: u-spec-png
2021-11-19 03:27:40 +05:30
The Hatsune Daishi
22a510ff44 [mixch] add support for mixch.tv (#1586)
Authored by: nao20010128nao
2021-11-19 03:13:22 +05:30
u-spec-png
61be785a67 [peer.tv] Add extractor (#1499)
Closes #1388 
Authored by: u-spec-png
2021-11-19 02:50:45 +05:30
Ashish Gupta
11852843e7 [AmazonStoreIE] Fix regex to not match vdp urls (#1699)
Closes: #1698 
Authored by: Ashish0804
2021-11-18 21:43:39 +05:30
Ashish Gupta
525d9e0c7d [HotStar] Set language field from tags (#1700)
Authored by: Ashish0804
2021-11-18 21:30:48 +05:30
Ashish Gupta
9d63137eac [CanalAlpha] Add extractor (#1655)
Closes: #1528 
Authored by: Ashish0804
2021-11-18 21:29:53 +05:30
Ashish Gupta
266a1b5d52 [ESPNCricInfo] Add extractor (#1652)
Closes: #1635 
Authored by: Ashish0804
2021-11-18 21:28:51 +05:30
Ashish Gupta
450bdf69bc [OneFootball] Add extractor (#1613)
Closes: #1598 
Authored by: Ashish0804
2021-11-18 21:27:50 +05:30
pukkandan
720c309932 [youtube] Add storyboard formats
Closes: #1553, https://github.com/ytdl-org/youtube-dl/issues/9868
Related: https://github.com/ytdl-org/youtube-dl/pull/14951
2021-11-17 01:29:34 +05:30
pukkandan
d8cf8d97a8 [utils] Fix PagedList 2021-11-16 21:16:05 +05:30
coletdjnz
d0d012d4e7 [youtube] Add default player client (#1685)
Authored-by: coletdjnz
2021-11-16 01:22:01 +00:00
pukkandan
013b50b794 Fix `postprocessor_hooks`
Closes #1650
2021-11-15 04:51:11 +05:30
pukkandan
dac5df5a98 Add option --embed-info-json to embed info-json in mkv
Closes #1644
2021-11-15 04:51:11 +05:30
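A hedged usage sketch of the new flag (URL is a placeholder; the remux to mkv is added only because the commit notes the info-json is embedded in an mkv container):

```
# Sketch: embed the info-json into the output file using the new option.
import subprocess

subprocess.run([
    'yt-dlp',
    '--embed-info-json',
    '--remux-video', 'mkv',   # ensure an mkv container, which can carry the JSON attachment
    'https://www.youtube.com/watch?v=BaW_jenozKc',
], check=False)
```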
pukkandan
f279aaee8e Add compat-option embed-metadata 2021-11-15 01:25:47 +05:30
pukkandan
d0e6121adf [curiositystream] Fix login
Bug from 92775d8a40
2021-11-13 23:56:56 +05:30
pukkandan
9ac24e235e [curiositystream] Add more metadata
Closes #1568
2021-11-13 23:49:14 +05:30
pukkandan
7c7f7161fc Fix --load-info-json of playlists with failed entries 2021-11-13 17:32:07 +05:30
pukkandan
e339d25a0d [youtube] Minor improvement to format sorting 2021-11-13 15:15:23 +05:30
pukkandan
39c04074e7 [ExtractAudio] Fix conversion to wav
Closes #1645
2021-11-13 15:15:23 +05:30
pukkandan
92775d8a40 [CuriosityStream] Fix series
Bug introduced in ed807c1837
2021-11-13 15:14:22 +05:30
MinePlayersPE
df03de2c02 [RoosterTeethSeries] Fix for multiple pages (#1642)
Authored by: MinePlayersPE
2021-11-12 19:16:19 +05:30
pukkandan
48e9310660 [nexx] Better error message for unsupported format
Related: #1637
2021-11-12 03:59:32 +05:30
pukkandan
c1dc0ee56e [NovaEmbed] Fix extractor
Closes #1570
2021-11-12 03:50:10 +05:30
pukkandan
bf5f605e76 bugfix for e08a85d865 2021-11-11 08:44:54 +05:30
pukkandan
e08a85d865 Fix writing playlist infojson with --no-clean-infojson 2021-11-11 08:18:35 +05:30
pukkandan
093a17107e Allow using a custom format selector through API
Closes #1619, #1464
2021-11-11 08:18:34 +05:30
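A sketch of what this API change enables, assuming a callable can be passed as the `format` option and receives a context dict whose `formats` key lists the available formats (the selector logic below is hypothetical):

```
# Sketch: pass a callable instead of a format-selector string; the callable
# yields the chosen format dict(s).
import yt_dlp

def smallest_mp4(ctx):
    # pick the smallest mp4 that reports a filesize (illustrative policy only)
    candidates = [f for f in ctx['formats'] if f.get('ext') == 'mp4' and f.get('filesize')]
    if candidates:
        yield min(candidates, key=lambda f: f['filesize'])

with yt_dlp.YoutubeDL({'format': smallest_mp4}) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```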
pukkandan
44bcb8d122 Fix bug in parsing --add-header
Closes #1614
2021-11-11 08:18:34 +05:30
makeworld
013ae2e503 [CBC Gem] Fix for shows that don't have all seasons (#1621)
Closes #1594
Authored by: makeworld-the-better-one
2021-11-11 01:07:05 +05:30
u-spec-png
b47d236d72 [Tokentube] Fix description (#1578)
Authored by: u-spec-png
2021-11-10 20:58:38 +05:30
pukkandan
9ebf3c6ab9 [version] update
:ci skip all
2021-11-10 01:47:10 +00:00
pukkandan
7144b697fc Release 2021.11.10.1
:ci skip all
2021-11-10 07:15:11 +05:30
pukkandan
2e9a445bc3 [version] update
:ci skip all
2021-11-10 01:14:33 +00:00
pukkandan
86c1a8aae4 Release 2021.11.10 2021-11-10 06:41:44 +05:30
Lauren Liberda
ebfab36fca [tvp] Add TVPStreamIE (#1401)
Authored by: selfisekai
2021-11-10 06:16:51 +05:30
Lauren Liberda
c15de6ffe6 [tvp] Fix extractor (#1401)
Authored by: selfisekai
2021-11-10 06:16:40 +05:30
Lauren Liberda
56bb56f3cf [tvp] Fix embeds (#1401)
Authored by: selfisekai
2021-11-10 06:16:30 +05:30
Lauren Liberda
c0599d4fe4 [wppilot] Add extractors (#1401)
Authored by: selfisekai
2021-11-10 06:16:18 +05:30
Lauren Liberda
3f771f75d7 [radiokapital] Add extractors (#1401)
Authored by: selfisekai
2021-11-10 06:15:46 +05:30
Lauren Liberda
ed76230b3f [polsatgo] Add extractor (#1386)
Authored by: selfisekai, sdomi

Co-authored-by: Dominika Liberda <ja@sdomi.pl>
2021-11-10 06:13:53 +05:30
Lauren Liberda
89fcdff5d8 [polskieradio] Add extractors (#1386)
Authored by: selfisekai
2021-11-10 06:11:24 +05:30
Lauren Liberda
f98709af31 [extractor] Add _search_nextjs_data (#1386)
Authored by: selfisekai
2021-11-10 06:11:05 +05:30
pukkandan
c586f9e8de [cleanup] minor fixes 2021-11-10 04:19:54 +05:30
pukkandan
59a7a13ef9 [docs] Minor documentation improvements
Closes #1583, #1599
2021-11-10 04:19:52 +05:30
pukkandan
4476d2c764 [outtmpl] Add alternate forms for q and j 2021-11-10 04:19:42 +05:30
pukkandan
aa9369a2d8 [cleanup] Minor improvements to error and debug messages 2021-11-10 04:19:33 +05:30
stanoarn
d54c6003ab fix for e1b7c54d78
Authored by: stanoarn
2021-11-10 03:44:17 +05:30
u-spec-png
1ee316a34a [Gab] Add extractor (#1505)
Closes #1462 
Authored by: u-spec-png
2021-11-10 03:41:51 +05:30
ozburo
358247ed2a [imdb] Fix thumbnail (#1581)
Authored by: ozburo
2021-11-10 02:56:57 +05:30
nixxo
9b12e9a573 [la7] Fix extractor (#1575)
Closes #1065 
Authored by: nixxo
2021-11-10 02:37:52 +05:30
u-spec-png
a109acbf82 [ZenYandex] Fix extractor (#1558)
Closes #1545
Authored by: u-spec-png
2021-11-09 00:06:01 +05:30
pukkandan
a49891c761 Fix bug in --load-infojson of playlists
Fixes: https://github.com/yt-dlp/yt-dlp/issues/1514#issuecomment-962659529
2021-11-08 00:26:08 +05:30
pukkandan
582fad70f5 [outtmpl] Do not traverse None
Closes #1585
2021-11-08 00:26:08 +05:30
pgaig
aeec0e44e2 [VRT] Fix login (#1566)
Closes #1557 
Authored by: pgaig
2021-11-06 22:57:40 +05:30
Ryan Hendrickson
d9190e4467 [youtube] Add Invidious list for playlists/channels (#1567)
Authored by: rhendric
2021-11-06 08:37:34 +05:30
stanoarn
e1b7c54d78 [iPrima] Fix extractor (#1541)
Authored by: stanoarn
2021-11-06 07:55:18 +05:30
pukkandan
244644c02c [roosterteeth] Add series extractor 2021-11-06 07:53:58 +05:30
pukkandan
34921b4345 [utils] Add join_nonempty 2021-11-06 07:53:55 +05:30
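A sketch of the new helper, assuming it joins the truthy arguments with a delimiter (default assumed to be `-`) and skips None/empty values:

```
# Sketch of join_nonempty usage; exact signature assumed.
from yt_dlp.utils import join_nonempty

print(join_nonempty('22', None, 'dash', ''))        # -> '22-dash'
print(join_nonempty('1080p', 'HDR', delim=' '))     # -> '1080p HDR'
```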
pukkandan
a331949df3 [test/download] Fallback test to bv 2021-11-06 07:53:53 +05:30
u-spec-png
2c5e8a961e [Newgrounds] Fix description (#1562)
Authored by: u-spec-png
2021-11-06 03:42:16 +05:30
u-spec-png
b515b37cc4 [Vupload] Fix extractor (#1549)
Authored by: u-spec-png
2021-11-06 03:35:13 +05:30
pukkandan
3c4eebf772 [AmazonStore] Add extractor (#1512)
Closes #1509

Authored by: Ashish0804
2021-11-06 03:13:50 +05:30
u-spec-png
fb2d1ee6cc [Instagram] Add IOS URL support (#1560)
Authored by: u-spec-png
2021-11-06 03:01:34 +05:30
pukkandan
9cb070f9c0 [vimeo] Detect source extension
and misc cleanup

Cherry-picked from #1477
Closes #1402

Authored by: flashdagger
2021-11-06 02:33:06 +05:30
pukkandan
2a6f8475ac [vimeo] Fix ondemand videos and direct URLs with hash
Closes #1353, #1471
2021-11-06 02:33:05 +05:30
Francesco Frassinelli
73673ccff3 [RaiplayRadio] Add extractors (#780)
Original PR: https://github.com/ytdl-org/youtube-dl/pull/21837
Authored by: frafra
2021-11-05 22:24:56 +05:30
pukkandan
aeb2a9ad27 [FormatSort] eac3 is better than ac3 2021-11-05 20:40:45 +05:30
pukkandan
df6c409d1f [piksel] Fix sorting 2021-11-05 20:39:16 +05:30
pukkandan
a9d4da606d [crunchyroll] Add extractor-args language and hardsub
Closes #1516
2021-11-05 00:12:12 +05:30
pukkandan
c18d4482b1 [youtube] Fix sorting for some videos 2021-11-05 00:12:11 +05:30
u-spec-png
0f6518938d [N1] Add support for nova.rs (#1537)
Authored by: u-spec-png
2021-11-04 20:59:59 +05:30
u-spec-png
22cd06c452 [Instagram] Improve thumbnail extraction (#1496)
Authored by: u-spec-png
2021-11-04 08:52:10 +05:30
pukkandan
a4211baff5 [cleanup] Minor cleanup 2021-11-04 03:53:15 +05:30
pukkandan
8913ef74d7 [ffmpeg] Detect libavformat version for aac_adtstoasc
and print available features in verbose head
Based on https://github.com/ytdl-org/youtube-dl/pull/29581
2021-11-04 03:13:37 +05:30
pukkandan
832e9000c7 [ffmpeg] Accurately detect presence of setts
Closes #1237
2021-11-04 02:24:12 +05:30
CrypticSignal
673c0057e8 [ExtractAudio] Use libfdk_aac if available
Closes #1502
Authored by: CrypticSignal
2021-11-04 02:23:45 +05:30
pukkandan
9af98e17bd [ffmpeg] Framework for feature detection
Related: #1502, #1237, https://github.com/ytdl-org/youtube-dl/pull/29581
2021-11-04 02:16:39 +05:30
pukkandan
31c49255bf [ExtractAudio] Rescale --audio-quality correctly
Authored by: CrypticSignal, pukkandan
2021-11-04 00:05:53 +05:30
pukkandan
bd93fd5d45 [fragment] Fix progress display in fragmented downloads
Closes #1517
2021-11-03 16:45:58 +05:30
pukkandan
d89257f398 [youtube] Remove unnecessary no-playlist warning 2021-11-03 16:35:09 +05:30
pukkandan
9bd979ca40 [utils] Parse vp09 as vp9 2021-11-03 16:35:08 +05:30
pukkandan
a1fc7ca074 [jsinterp] Handle default in switch better 2021-11-03 16:35:08 +05:30
u-spec-png
c588b602d3 [Instagram] Fix incorrect resolution (#1494)
Authored by: u-spec-png
2021-10-31 19:50:09 +05:30
kaz-us
f0ffaa1621 [vk] Fix login (#1495)
Closes #1459
Authored by: kaz-us
2021-10-31 19:46:12 +05:30
pukkandan
0930b11fda [docs,cleanup] Improve docs and minor cleanup
Closes #1387, #1404, #1408, #1485, #1415, #1450, #1492
2021-10-31 14:47:33 +05:30
pukkandan
a0bb6ce58d [youtube] refactor itag processing 2021-10-31 13:26:44 +05:30
pukkandan
da48320075 [linkedin] Don't login multiple times 2021-10-31 13:08:03 +05:30
kaz-us
5b6cb56207 [vk] Add subtitles (#1480)
Authored by: kaz-us
2021-10-31 10:43:49 +05:30
u-spec-png
b2f25dc242 [Olympics] Fix extractor (#1483)
Authored by: u-spec-png
2021-10-31 10:40:42 +05:30
Ashish Gupta
2f9e021299 [PlanetMarathi] Add extractor (#1484)
Authored by: Ashish0804
2021-10-31 10:39:26 +05:30
u-spec-png
8dcf65c92e [Instagram] Add login to playlist (#1488)
Authored by: u-spec-png
2021-10-31 10:38:04 +05:30
Marcel
92592bd305 [ceskatelevize] Fix extractor (#1489)
Authored by: flashdagger
2021-10-31 10:19:03 +05:30
pukkandan
404f611f1c [youtube] Fix throttling by decrypting n-sig (#1437) 2021-10-31 09:53:58 +05:30
u-spec-png
cd9ea4104b [instagram] Add more formats when logged in (#1487)
Authored by: u-spec-png
2021-10-31 08:24:39 +05:30
Ashish Gupta
652fb0d446 [VLive] Add upload_date and thumbnail (#1486)
Closes #1472
Authored by: Ashish0804
2021-10-30 23:26:00 +05:30
Sipherdrakon
6b301aaa34 [mtv] Fix some videos (#1453)
Partial fix for #713
Authored by: Sipherdrakon
2021-10-30 06:48:59 +05:30
pukkandan
fa0b816e37 [generic] Detect more json_ld
Closes #1475
2021-10-30 02:03:53 +05:30
pukkandan
5e7bbac305 [generic] parse jwplayer with only the json URL
Closes #1476
2021-10-30 01:54:50 +05:30
pukkandan
10beccc980 [FormatSort] Fix some fields' defaults
Closes #1479
2021-10-30 01:14:14 +05:30
nixxo
e6ff66efc0 [mediaset] Add playlist support (#1463)
Closes #1372
Authored by: nixxo
2021-10-30 01:09:55 +05:30
Luc Ritchie
aeaf3b2b92 [Coub] Fix media format identification (#1469)
Authored by: wlritchi
2021-10-29 23:47:10 +05:30
Ashish Gupta
7b5f3f7c3d [MLSSoccer] Add extractor (#1452)
Authored by: Ashish0804
Closes #1451
2021-10-28 23:48:09 +05:30
ajj8
3783b5f1d1 [itv] Add support for ITV News (#1456)
Authored by: ajj8
2021-10-28 16:27:09 +05:30
pukkandan
ab630a57b9 [viewlift] Fix typo in 5be76d1ab7 2021-10-28 02:14:33 +05:30
pukkandan
16b0d7e621 [utils] Add jwt_decode_hs256
Code from #1340
Authored by: Ashish0804
2021-10-28 02:07:41 +05:30
pukkandan
5be76d1ab7 [viewlift] Add cookie-based login and series support
Closes #1340, #1316
Authored by: Ashish0804, pukkandan
2021-10-28 02:07:40 +05:30
ajj8
b7b186e7de [sky] Add SkyNewsStoryIE (#1443)
Authored by: ajj8
2021-10-27 21:38:48 +05:30
nyuszika7h
bd1c792327 [wakanim] Detect geo-restriction (#1429)
Authored by: nyuszika7h
2021-10-26 22:05:20 +05:30
nyuszika7h
dc88e9be03 [wakanim] Add support for MPD manifests (#1428)
Closes #1426
Authored by: nyuszika7h
2021-10-26 22:03:43 +05:30
pukkandan
673944b001 [compat] Don't create console in windows_enable_vt_mode
Closes #1420
2021-10-26 21:59:08 +05:30
Ashish Gupta
0c873df3a8 [3speak] Add extractors (#1430)
Closes #1421
Authored by: Ashish0804
2021-10-26 21:17:39 +05:30
pukkandan
c35ada3360 [twitter] Do not sort by codec
Closes #1431
2021-10-26 21:15:38 +05:30
pukkandan
0db3bae879 [extractor] Fix some errors being converted to ExtractorError 2021-10-26 20:27:09 +05:30
pukkandan
48f796874d [utils] Create DownloadCancelled exception
as super-class of ExistingVideoReached, RejectedVideoReached, MaxDownloadsReached

Third parties can also sub-class this to cancel the download queue from a hook
2021-10-26 20:27:09 +05:30
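A sketch of the third-party use case described above: cancelling the download queue from a progress hook by raising a subclass of the new DownloadCancelled exception (the size threshold is hypothetical):

```
# Sketch: stop the whole queue once a hook decides enough has been downloaded.
import yt_dlp
from yt_dlp.utils import DownloadCancelled

class BudgetExceeded(DownloadCancelled):
    pass

def hook(d):
    if d.get('status') == 'downloading' and d.get('downloaded_bytes', 0) > 50_000_000:
        raise BudgetExceeded('stopping: size budget exceeded')

with yt_dlp.YoutubeDL({'progress_hooks': [hook]}) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```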
pukkandan
abad800058 [downloader/ffmpeg] Fix vtt download with ffmpeg 2021-10-26 20:27:09 +05:30
pukkandan
08438d2ca5 [outtmpl] Add type link for internet shortcut files
and refactor related code
Closes #1405
2021-10-26 20:27:09 +05:30
pukkandan
7de837a5e3 [utils] Sanitize URL when determining protocol
Closes #1406
2021-10-26 20:26:08 +05:30
pukkandan
7e59ca440a [DiscoveryPlus] Allow language codes in URL
Closes #1425
2021-10-26 20:26:08 +05:30
u-spec-png
8e7ab2cf08 [Bilibili:comments] Fix infinite loop (#1423)
Closes #1412
Authored by: u-spec-png
2021-10-26 01:03:01 +05:30
u-spec-png
ad64a2323f [instagram] Fix bug in ab2ffab22d (#1403)
Authored by: u-spec-png
2021-10-24 22:01:33 +05:30
pukkandan
f2fe69c7b0 Approximate filesize from bitrate
Closes #1400
2021-10-24 18:02:00 +05:30
pukkandan
fccf502118 [youtube] Populate thumbnail with the best "known" thumbnail
Closes #402, Related: https://github.com/yt-dlp/yt-dlp/issues/340#issuecomment-950290624
2021-10-24 15:00:18 +05:30
pukkandan
9f1a1c36e6 Separate --check-all-formats from --check-formats
Previously, `--check-formats` tested only the selected video formats, but ALL thumbnails
2021-10-24 15:00:17 +05:30
pukkandan
96565c7e55 [cleanup] Add keyword automatically to SearchIE descriptions
and some minor cleanup of docs
2021-10-23 21:20:19 +05:30
pukkandan
ec11a9f4a2 [minicurses] Add more colors 2021-10-23 05:23:38 +05:30
Alf Marius
93c7f3398d [Nrk] See desc (#1382)
* Endpoint has changed. Currently the old one redirects to the new one, but this may change
* Descriptions use \r instead of \n. So translate it

Authored by: fractalf
2021-10-23 04:22:01 +05:30
pukkandan
1117579b94 [version] update
:ci skip all
2021-10-22 20:47:18 +00:00
pukkandan
0676afb126 Release 2021.10.22 2021-10-23 02:09:15 +05:30
pukkandan
49a57e70a9 [cleanup] misc 2021-10-23 02:09:10 +05:30
pukkandan
457f6d6866 [vlive:channel] Fix extraction
Based on https://github.com/ytdl-org/youtube-dl/pull/29866
Closes #749, #927, https://github.com/ytdl-org/youtube-dl/issues/29837
Authored by kikuyan, pukkandan
2021-10-22 23:19:38 +05:30
pukkandan
ad0090d0d2 [cookies] Local State should be opened as utf-8
Closes #1276
2021-10-22 23:19:37 +05:30
makeworld
d183af3cc1 [CBC] Support CBC Gem member content (#1294)
Authored by: makeworld-the-better-one
2021-10-22 06:28:32 +05:30
makeworld
3c239332b0 [CBC] Fix Gem livestream (#1289)
Authored by: makeworld-the-better-one
2021-10-22 06:26:29 +05:30
u-spec-png
ab2ffab22d [Instagram] Add login (#1288)
Authored by: u-spec-png
2021-10-22 06:23:45 +05:30
zenerdi0de
f656a23cb1 [patreon] Fix vimeo player regex (#1332)
Closes #1323
Authored by: zenerdi0de
2021-10-22 06:20:49 +05:30
pukkandan
58ab5cbc58 [vimeo] Fix embedded player.vimeo URL
Closes #1138, partially fixes #1323
Cherry-picked from upstream commit 3ae9c0f410b1d4f63e8bada67dd62a8d2852be32
2021-10-22 06:15:51 +05:30
Damiano Amatruda
17ec8bcfa9 [microsoftstream] Add extractor (#1201)
Based on: https://github.com/ytdl-org/youtube-dl/pull/24649
Fixes: https://github.com/ytdl-org/youtube-dl/issues/24440
Authored by: damianoamatruda, nixklai
2021-10-22 05:34:00 +05:30
u-spec-png
0f6e60bb57 [tagesschau] Fix extractor (#1227)
Closes #1124
Authored by: u-spec-png
2021-10-22 05:09:50 +05:30
pukkandan
ef58c47637 [SponsorBlock] Obey extractor-retries and sleep-requests 2021-10-22 04:42:44 +05:30
pukkandan
19b824f693 Re-implement deprecated option --id
Despite `--title`, `--literal` etc being deprecated,
`--id` is still documented in youtube-dl and so should be kept
2021-10-22 04:42:24 +05:30
jfogelman
f0ded3dad3 [AdobePass] Fix RCN MSO (#1349)
Authored by: jfogelman
2021-10-22 01:06:03 +05:30
pukkandan
733d8e8f99 [build] Refactor pyinst.py and misc cleanup
Closes #1361
2021-10-21 20:11:05 +05:30
pukkandan
386cdfdb5b [build] Release windows exe built with py2exe
Closes: #855
Related: #661, #705, #890, #1024, #1160
2021-10-21 20:11:05 +05:30
pukkandan
6e21fdd279 [build] Enable lazy-extractors in releases
Set the environment variable `YTDLP_NO_LAZY_EXTRACTORS`
to forcefully disable lazy extractor loading
2021-10-21 19:41:33 +05:30
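A sketch of the override described above (any non-empty value is assumed to disable lazy extractor loading):

```
# Sketch: run with lazy extractor loading forcefully disabled.
import os
import subprocess

env = dict(os.environ, YTDLP_NO_LAZY_EXTRACTORS='1')
subprocess.run(['yt-dlp', '--version'], env=env, check=False)
```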
Ricardo
0e5927eebf [build] Build standalone MacOS packages (#1221)
Closes #1075 
Authored by: smplayer-dev
2021-10-21 16:18:46 +05:30
Ashish Gupta
27f817a84b [docs] Migrate issues to use forms (#1302)
Authored by: Ashish0804
2021-10-21 15:26:36 +05:30
pukkandan
d3c93ec2b7 Don't create console for subprocesses on Windows (#1261)
Closes #1251
2021-10-20 21:49:40 +05:30
pukkandan
b4b855ebc7 [fragment] Print error message when skipping fragment 2021-10-19 22:58:26 +05:30
pukkandan
2cda6b401d Revert "[fragments] Pad fragments before decrypting (#1298)"
This reverts commit 373475f035.
2021-10-19 22:58:25 +05:30
pukkandan
aa7785f860 [utils] Standardize timestamp formatting code
Closes #1285
2021-10-19 22:58:25 +05:30
pukkandan
9fab498fbf [http] Retry on socket timeout
Closes #1222
2021-10-19 22:58:24 +05:30
Nil Admirari
e619d8a752 [ModifyChapters] Do not mutate original chapters (#1322)
Closes #1295 
Authored by: nihil-admirari
2021-10-19 14:21:05 +05:30
Zirro
1e520b5535 Add option --no-batch-file (#1335)
Authored by: Zirro
2021-10-19 00:41:07 +05:30
pukkandan
176f1866cb Add HDR information to formats 2021-10-18 18:35:02 +05:30
pukkandan
17bddf3e95 Reduce default --socket-timeout 2021-10-18 16:40:12 +05:30
pukkandan
2d9ec70423 [ModifyChapters] Allow removing sections by timestamp
Eg: --remove-chapters "*10:15-15:00".
The `*` prefix is used so as to avoid any conflicts with other valid regex
2021-10-18 16:06:51 +05:30
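The commit's own example, wrapped in a runnable sketch (URL is a placeholder):

```
# Sketch: the leading '*' marks a time range (10:15 to 15:00 here) rather than
# a chapter-title regex.
import subprocess

subprocess.run([
    'yt-dlp',
    '--remove-chapters', '*10:15-15:00',
    'https://www.youtube.com/watch?v=BaW_jenozKc',
], check=False)
```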
pukkandan
e820fbaa6f Do not verify thumbnail URLs by default
Partially reverts cca80fe611 and 0ba692acc8

Unless `--check-formats` is specified, this causes yt-dlp to return incorrect thumbnail urls.
See https://github.com/yt-dlp/yt-dlp/issues/340#issuecomment-877909966, #402

But the overhead in general use is not worth it

Closes #694, #725
2021-10-18 15:44:47 +05:30
pukkandan
b11d210156 [EmbedMetadata] Allow overwriting all default metadata
with `meta_default` key
2021-10-18 10:31:56 +05:30
pukkandan
24b0a72b30 [cleanup] Remove broken youtube login code 2021-10-18 09:25:51 +05:30
coletdjnz
aae16f6ed9 [youtube:comments] Fix comment section not being extracted in new layouts (#1324)
Co-authored-by: coletdjnz, pukkandan
2021-10-18 02:58:42 +00:00
shirt
373475f035 [fragments] Pad fragments before decrypting (#1298)
Closes #197, #1297, #1007
Authored by: shirt-dev
2021-10-18 08:14:20 +05:30
Ashish Gupta
920134b2e5 [Gronkh] Add extractor (#1299)
Closes #1293
Authored by: Ashish0804
2021-10-18 08:11:31 +05:30
Ashish Gupta
72ab768719 [SkyNewsAU] Add extractor (#1308)
Closes #1287
Authored by: Ashish0804
2021-10-18 08:09:50 +05:30
LE
01b052b2b1 [tbs] Add tbs live streams (#1326)
Authored by: llacb47
2021-10-18 07:58:20 +05:30
Ákos Sülyi
019a94f7d6 [utils] Use importlib to load plugins (#1277)
Authored by: sulyi
2021-10-18 07:16:49 +05:30
nyuszika7h
e69585f8c6 [7plus] Add cookie based authentication (#1202)
Closes #1103
Authored by: nyuszika7h
2021-10-18 07:04:56 +05:30
Damiano Amatruda
693ec74401 [on24] Add extractor (#1200)
Authored by: damianoamatruda
2021-10-18 07:02:46 +05:30
pukkandan
239df02103 Make duration_string and resolution available in --match-filter
Related: #1309
2021-10-17 17:39:33 +05:30
pukkandan
18f96d129b [utils] Allow duration strings in filter
Closes #1309
2021-10-17 17:39:33 +05:30
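A hedged sketch combining the two commits above; treat the exact filter text as an assumption about what the filter parser now accepts:

```
# Sketch: use a duration string in --match-filter instead of a number of seconds.
import subprocess

subprocess.run([
    'yt-dlp',
    '--match-filter', 'duration > 10:00',
    '--simulate',
    'https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc',
], check=False)
```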
pukkandan
ec3f6640c1 [crunchyroll] Add season to flat-playlist
Closes #1319
2021-10-17 17:39:23 +05:30
pukkandan
dd078970ba [crunchyroll] Add support for beta.crunchyroll URLs
and fix series URLs with language code
2021-10-17 17:38:57 +05:30
pukkandan
71ce444a3f Fix --restrict-filename when used with default template 2021-10-17 01:03:04 +05:30
pukkandan
580d3274e5 [youtube] Expose different formats with same itag 2021-10-16 20:28:17 +05:30
pukkandan
03b4de722a [downloader] Fix slow progress hooks
Closes #1301
2021-10-16 20:02:40 +05:30
pukkandan
48ee10ee8a Fix conflict b/w id and ext in format selection
Closes #1282
2021-10-16 20:02:30 +05:30
Ashish Gupta
6ff34542d2 [Hotstar] Raise appropriate error for DRM 2021-10-16 14:08:52 +05:30
gustaf
e3950399e4 [Viafree] add support for Finland (#1253)
Authored by: 18928172992817182 (gustaf)
2021-10-14 17:34:40 +05:30
Ashish Gupta
974208e151 [trovo] Support channel clips and VODs (#1246)
Closes #229
Authored by: Ashish0804
2021-10-14 17:32:48 +05:30
pukkandan
883d4b1eec [YoutubeDL] Write verbose header to logger 2021-10-14 14:44:30 +05:30
pukkandan
a0c716bb61 [instagram] Show appropriate error when login is needed
Closes #1264
2021-10-14 14:44:29 +05:30
pukkandan
d5a39f0bad [http] Show the last encountered error
Closes #1262
2021-10-14 14:44:28 +05:30
Ashish Gupta
a64907d0ac [Hotstar] Mention Dynamic Range in format id (#1265)
Authored by: Ashish0804
2021-10-14 14:44:14 +05:30
pukkandan
6993f78d1b [extractor,utils] Detect more codecs/mimetypes
Fixes: https://github.com/ytdl-org/youtube-dl/issues/29943
2021-10-13 05:05:29 +05:30
pukkandan
993191c0d5 Fix bug in c111cefa5d 2021-10-13 04:43:26 +05:30
pukkandan
fc5c8b6492 [aria2c] Fix --skip-unavailable-fragment 2021-10-13 04:14:12 +05:30
pukkandan
b836dc94f2 [outtmpl] Fix bug in expanding environment variables 2021-10-13 04:14:11 +05:30
pukkandan
c111cefa5d [downloader/ffmpeg] Improve simultaneous download and merge 2021-10-13 04:14:11 +05:30
pukkandan
975a0d0df9 Calculate more fields for merged formats
Closes #947
2021-10-13 04:14:11 +05:30
Ákos Sülyi
a387b69a7c [devscripts/run_tests] Use markers to filter tests (#1258)
`-k` filters using a substring match on test name.
`-m` checks markers for an exact match.
Authored by: sulyi
2021-10-13 00:24:27 +05:30
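A short illustration of the distinction noted above (the `download` marker name is assumed from yt-dlp's test configuration; subprocess is used purely for illustration):

```
# Sketch: -k filters by a substring of the test name, -m selects by marker.
import subprocess

# every test whose name contains "youtube"
subprocess.run(['python', '-m', 'pytest', '-k', 'youtube'], check=False)

# only tests marked with @pytest.mark.download
subprocess.run(['python', '-m', 'pytest', '-m', 'download'], check=False)
```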
pukkandan
ecdc9049c0 [YouTube] Add auto-translated subtitles
Closes #1245
2021-10-12 15:21:32 +05:30
pukkandan
7b38649845 Fix verbose head not showing custom configs 2021-10-12 15:21:31 +05:30
pukkandan
e88d44c6ee [cleanup] Cleanup bilibili code
Closes #1169
Authored by pukkandan, u-spec-png
2021-10-12 15:21:31 +05:30
pukkandan
a2160aa45f [extractor] Generalize getcomments implementation 2021-10-12 15:21:30 +05:30
pukkandan
cc16383ff3 [extractor] Simplify search extractors 2021-10-12 15:21:30 +05:30
pukkandan
a903d8285c Fix bug in storyboards
Caused by 9359f3d4f0
2021-10-11 17:27:39 +05:30
pukkandan
9dda99f2fc [Merger] Do not add aac_adtstoasc to non-hls audio 2021-10-11 17:09:28 +05:30
pukkandan
ba10757412 [extractor] Detect EXT-X-KEY Apple FairPlay 2021-10-11 17:09:21 +05:30
pukkandan
e6faf2be36 [update] Clean up error reporting
Closes #1224
2021-10-11 09:58:24 +05:30
pukkandan
ed39cac53d Load archive only after printing verbose head
If there is some issue in loading archive, the verbose head should be visible in the logs
2021-10-11 09:49:52 +05:30
pukkandan
a169858f24 Fix check_formats output being written to stdout when -qv
Closes #1229
2021-10-11 09:49:52 +05:30
pukkandan
0481e266f5 [tiktok] Fix typo in 943d5ab133
and update tests
Closes #1226
2021-10-11 09:49:51 +05:30
Ashish Gupta
2c4bba96ac [EUScreen] Add Extractor (#1219)
Closes #1207
Authored by: Ashish0804
2021-10-11 03:36:27 +05:30
pukkandan
e8f726a57f [hidive] Fix typo in b5ae35ee6d 2021-10-10 11:44:44 +05:30
239 changed files with 11533 additions and 5873 deletions


@@ -1,73 +0,0 @@
---
name: Broken site support
about: Report broken or misfunctioning site
title: "[Broken] Website Name: A short description of the issue"
labels: ['triage', 'extractor-bug']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.10.10. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running yt-dlp version **2021.10.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.10.10
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,63 @@
name: Broken site support
description: Report broken or misfunctioning site
labels: [triage, extractor-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10.1 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10.1)
<more lines>
render: shell
validations:
required: true


@@ -1,60 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request] Website Name"
labels: ['triage', 'site-request']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.10.10. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement. yt-dlp does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **2021.10.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've searched the bugtracker for similar site support requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
value: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10.1 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10.1)
<more lines>
render: shell
validations:
required: true


@@ -1,43 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Feature] Website Name: A short description of the feature"
labels: ['triage', 'site-enhancement']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.10.10. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **2021.10.10**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,49 @@
name: Site feature request
description: Request a new functionality for a site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true


@@ -1,74 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: '[Bug] A short description of the issue'
labels: ['triage', 'bug']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.10.10. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a bug unrelated to a specific site
- [ ] I've verified that I'm running yt-dlp version **2021.10.10**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.10.10
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE

.github/ISSUE_TEMPLATE/4_bug_report.yml

@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage,bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to **your** command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version 2021.11.10.1 (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (2021.11.10.1)
<more lines>
render: shell
validations:
required: true


@@ -1,43 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request] A short description of your feature"
labels: ['triage', 'enhancement']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.10.10. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **2021.10.10**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **2021.11.10.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true


@@ -1,43 +0,0 @@
---
name: Ask question
about: Ask yt-dlp related question
title: "[Question] A short description of your question"
labels: question
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- Look through the README (https://github.com/yt-dlp/yt-dlp)
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Search the bugtracker for similar questions: https://github.com/yt-dlp/yt-dlp/issues
- Finally, put x into all relevant boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm asking a question
- [ ] I've looked through the README
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I've searched the bugtracker for similar questions including closed ones
- [ ] I have given an appropriate title to the issue
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/yt-dlp/yt-dlp.
-->
WRITE QUESTION HERE

.github/ISSUE_TEMPLATE/6_question.yml

@@ -0,0 +1,30 @@
name: Ask question
description: Ask yt-dlp related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and not reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible
placeholder: WRITE QUESTION HERE
validations:
required: true

.github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Get help from the community on Discord
url: https://discord.gg/H5MNcFW63r
about: Join the yt-dlp Discord for community-powered support!


@@ -1,73 +0,0 @@
---
name: Broken site support
about: Report broken or misfunctioning site
title: "[Broken] Website Name: A short description of the issue"
labels: ['triage', 'extractor-bug']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a broken site support
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar issues including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,63 @@
name: Broken site support
description: Report broken or misfunctioning site
labels: [triage, extractor-bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a broken site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true


@@ -1,60 +0,0 @@
---
name: Site support request
about: Request support for a new site
title: "[Site Request] Website Name"
labels: ['triage', 'site-request']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that the site you are requesting is not dedicated to copyright infringement. yt-dlp does not support such sites. In order for a site support request to be accepted, all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a new site support request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've searched the bugtracker for similar site support requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,74 @@
name: Site support request
description: Request support for a new site
labels: [triage, site-request]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a new site support request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Provide all kinds of example URLs for which support should be added
value: |
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide any additional information
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output **using one of the example URLs provided above**.
Add the `-Uv` flag to your command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true


@@ -1,43 +0,0 @@
---
name: Site feature request
about: Request a new functionality for a site
title: "[Site Feature] Website Name: A short description of the feature"
labels: ['triage', 'site-enhancement']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar site feature requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a site feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar site feature requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Description
<!--
Provide an explanation of your site feature request in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,49 @@
name: Site feature request
description: Request a new functionality for a site
labels: [triage, site-enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a site feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
- type: input
id: region
attributes:
label: Region
description: "Enter the region the site is accessible from"
placeholder: "India"
- type: textarea
id: example-urls
attributes:
label: Example URLs
description: |
Example URLs that can be used to demonstrate the requested feature
value: |
https://www.youtube.com/watch?v=BaW_jenozKc
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your site feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true


@@ -1,74 +0,0 @@
---
name: Bug report
about: Report a bug unrelated to any particular site or extractor
title: '[Bug] A short description of the issue'
labels: ['triage', 'bug']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, confirm all RELEVANT tasks from the following by putting x into all the boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a bug unrelated to a specific site
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] The provided URLs do not contain any DRM to the best of my knowledge
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version %(version)s
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,57 @@
name: Bug report
description: Report a bug unrelated to any particular site or extractor
labels: [triage, bug]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've checked that all provided URLs are alive and playable in a browser
required: true
- label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your issue in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true
- type: textarea
id: log
attributes:
label: Verbose log
description: |
Provide the complete verbose output of yt-dlp **that clearly demonstrates the problem**.
Add the `-Uv` flag to **your** command line you run yt-dlp with (`yt-dlp -Uv <your command line>`), copy the WHOLE output and insert it below.
It should look similar to this:
placeholder: |
[debug] Command-line config: ['-Uv', 'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Portable config file: yt-dlp.conf
[debug] Portable config: ['-i']
[debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
[debug] yt-dlp version %(version)s (exe)
[debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
[debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
yt-dlp is up to date (%(version)s)
<more lines>
render: shell
validations:
required: true


@@ -1,43 +0,0 @@
---
name: Feature request
about: Request a new functionality unrelated to any particular site or extractor
title: "[Feature Request] A short description of your feature"
labels: ['triage', 'enhancement']
assignees: ''
---
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is %(version)s. If it's not, see https://github.com/yt-dlp/yt-dlp#update on how to update. Issues with an outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: https://github.com/yt-dlp/yt-dlp/issues. DO NOT post duplicates.
- Read "opening an issue" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue
- Finally, put x into all relevant boxes like this [x] (Don't forget to delete the empty space)
-->
- [ ] I'm reporting a feature request
- [ ] I've verified that I'm running yt-dlp version **%(version)s**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
- [ ] I've read the opening an issue section in CONTRIBUTING.md
- [ ] I have given an appropriate title to the issue
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
WRITE DESCRIPTION HERE


@@ -0,0 +1,30 @@
name: Feature request
description: Request a new functionality unrelated to any particular site or extractor
labels: [triage, enhancement]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm reporting a feature request
required: true
- label: I've verified that I'm running yt-dlp version **%(version)s**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
Provide an explanation of your feature request in an arbitrary form.
Please make sure the description is worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information, any suggested solutions, and as much context and examples as possible
placeholder: WRITE DESCRIPTION HERE
validations:
required: true


@@ -0,0 +1,30 @@
name: Ask question
description: Ask yt-dlp related question
labels: [question]
body:
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: |
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
options:
- label: I'm asking a question and not reporting a bug/feature request
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
required: true
- label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
required: true
- type: textarea
id: question
attributes:
label: Question
description: |
Ask your question in an arbitrary form.
Please make sure it's worded well enough to be understood, see [is-the-description-of-the-issue-itself-sufficient](https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient).
Provide any additional information and as much context and examples as possible
placeholder: WRITE QUESTION HERE
validations:
required: true
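The `%(version)s` placeholders appearing in the template copies above are filled in when a release is cut; the build workflow below runs `python devscripts/update-version.py` followed by `make issuetemplates` for exactly that. A rough local sketch, assuming it is run from a checkout of the repository root:

```shell
# Bump the version string, then regenerate the issue forms with it filled in
python devscripts/update-version.py
make issuetemplates
```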


@@ -1,15 +1,11 @@
name: Build
on:
push:
branches:
- release
on: workflow_dispatch
jobs:
build_unix:
runs-on: ubuntu-latest
outputs:
version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
upload_url: ${{ steps.create_release.outputs.upload_url }}
sha256_bin: ${{ steps.sha256_bin.outputs.sha256_bin }}
@@ -27,23 +23,32 @@ jobs:
python-version: '3.8'
- name: Install packages
run: sudo apt-get -y install zip pandoc man
- name: Set version suffix
id: version_suffix
env:
PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
if: "env.PUSH_VERSION_COMMIT == ''"
run: echo ::set-output name=version_suffix::$(date -u +"%H%M%S")
- name: Bump version
id: bump_version
run: |
python devscripts/update-version.py
python devscripts/update-version.py ${{ steps.version_suffix.outputs.version_suffix }}
make issuetemplates
- name: Print version
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
- name: Update master
id: push_update
- name: Push to release
id: push_release
run: |
git config --global user.email "${{ github.event.pusher.email }}"
git config --global user.name "${{ github.event.pusher.name }}"
git config --global user.name github-actions
git config --global user.email github-actions@example.com
git add -u
git commit -m "[version] update" -m ":ci skip all"
git pull --rebase origin ${{ github.event.repository.master_branch }}
git push origin ${{ github.event.ref }}:${{ github.event.repository.master_branch }}
git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all"
git push origin --force ${{ github.event.ref }}:release
echo ::set-output name=head_sha::$(git rev-parse HEAD)
- name: Update master
id: push_master
env:
PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
if: "env.PUSH_VERSION_COMMIT != ''"
run: git push origin ${{ github.event.ref }}
- name: Get Changelog
id: get_changelog
run: |
@@ -51,6 +56,10 @@ jobs:
echo "changelog<<EOF" >> $GITHUB_ENV
echo "$changelog" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Build lazy extractors
id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run Make
run: make all tar
- name: Get SHA2-256SUMS for yt-dlp
@@ -65,6 +74,7 @@ jobs:
- name: Get SHA2-512SUMS for yt-dlp.tar.gz
id: sha512_tar
run: echo "::set-output name=sha512_tar::$(sha512sum yt-dlp.tar.gz | awk '{print $1}')"
- name: Install dependencies for pypi
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
@@ -81,6 +91,7 @@ jobs:
rm -rf dist/*
python setup.py sdist bdist_wheel
twine upload dist/*
- name: Install SSH private key
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
@@ -99,6 +110,7 @@ jobs:
git -C taps/ config user.email github-actions@example.com
git -C taps/ commit -am 'yt-dlp: ${{ steps.bump_version.outputs.ytdlp_version }}'
git -C taps/ push
- name: Create Release
id: create_release
uses: actions/create-release@v1
@@ -109,7 +121,11 @@ jobs:
release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlp_version }}
commitish: ${{ steps.push_update.outputs.head_sha }}
body: |
Changelog:
#### [A description of the various files](https://github.com/yt-dlp/yt-dlp#release-files) is in the README
---
### Changelog:
${{ env.changelog }}
draft: false
prerelease: false
@@ -133,13 +149,79 @@ jobs:
asset_name: yt-dlp.tar.gz
asset_content_type: application/gzip
build_macos:
runs-on: macos-11
needs: build_unix
outputs:
sha256_macos: ${{ steps.sha256_macos.outputs.sha256_macos }}
sha512_macos: ${{ steps.sha512_macos.outputs.sha512_macos }}
sha256_macos_zip: ${{ steps.sha256_macos_zip.outputs.sha256_macos_zip }}
sha512_macos_zip: ${{ steps.sha512_macos_zip.outputs.sha512_macos_zip }}
steps:
- uses: actions/checkout@v2
# In order to create a universal2 application, the version of python3 in /usr/bin has to be used
# Pyinstaller is pinned to 4.5.1 because the builds are failing in 4.6, 4.7
- name: Install Requirements
run: |
brew install coreutils
/usr/bin/python3 -m pip install -U --user pip Pyinstaller==4.5.1 mutagen pycryptodomex websockets
- name: Bump version
id: bump_version
run: /usr/bin/python3 devscripts/update-version.py
- name: Build lazy extractors
id: lazy_extractors
run: /usr/bin/python3 devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script
run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onefile
- name: Upload yt-dlp MacOS binary
id: upload-release-macos
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_macos
asset_name: yt-dlp_macos
asset_content_type: application/octet-stream
- name: Get SHA2-256SUMS for yt-dlp_macos
id: sha256_macos
run: echo "::set-output name=sha256_macos::$(sha256sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos
run: echo "::set-output name=sha512_macos::$(sha512sum dist/yt-dlp_macos | awk '{print $1}')"
- name: Run PyInstaller Script with --onedir
run: /usr/bin/python3 pyinst.py --target-architecture universal2 --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp_macos
dest: ./dist/yt-dlp_macos.zip
- name: Upload yt-dlp MacOS onedir
id: upload-release-macos-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp_macos.zip
asset_name: yt-dlp_macos.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp_macos.zip
id: sha256_macos_zip
run: echo "::set-output name=sha256_macos_zip::$(sha256sum dist/yt-dlp_macos.zip | awk '{print $1}')"
- name: Get SHA2-512SUMS for yt-dlp_macos
id: sha512_macos_zip
run: echo "::set-output name=sha512_macos_zip::$(sha512sum dist/yt-dlp_macos.zip | awk '{print $1}')"
build_windows:
runs-on: windows-latest
needs: build_unix
outputs:
sha256_win: ${{ steps.sha256_win.outputs.sha256_win }}
sha512_win: ${{ steps.sha512_win.outputs.sha512_win }}
sha256_py2exe: ${{ steps.sha256_py2exe.outputs.sha256_py2exe }}
sha512_py2exe: ${{ steps.sha512_py2exe.outputs.sha512_py2exe }}
sha256_win_zip: ${{ steps.sha256_win_zip.outputs.sha256_win_zip }}
sha512_win_zip: ${{ steps.sha512_win_zip.outputs.sha512_win_zip }}
@@ -150,16 +232,19 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Upgrade pip and enable wheel support
run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements
# Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
run: pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
run: |
python -m pip install --upgrade pip setuptools wheel py2exe
pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version
id: bump_version
run: python devscripts/update-version.py
- name: Print version
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
env:
version_suffix: ${{ needs.build_unix.outputs.version_suffix }}
run: python devscripts/update-version.py ${{ env.version_suffix }}
- name: Build lazy extractors
id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script
run: python pyinst.py
- name: Upload yt-dlp.exe Windows binary
@@ -178,32 +263,52 @@ jobs:
- name: Get SHA2-512SUMS for yt-dlp.exe
id: sha512_win
run: echo "::set-output name=sha512_win::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
- name: Run PyInstaller Script with --onedir
run: python pyinst.py --onedir
- uses: papeloto/action-zip@v1
with:
files: ./dist/yt-dlp
dest: ./dist/yt-dlp.zip
- name: Upload yt-dlp.zip Windows onedir
dest: ./dist/yt-dlp_win.zip
- name: Upload yt-dlp Windows onedir
id: upload-release-windows-zip
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp.zip
asset_name: yt-dlp.zip
asset_path: ./dist/yt-dlp_win.zip
asset_name: yt-dlp_win.zip
asset_content_type: application/zip
- name: Get SHA2-256SUMS for yt-dlp.zip
- name: Get SHA2-256SUMS for yt-dlp_win.zip
id: sha256_win_zip
run: echo "::set-output name=sha256_win_zip::$((Get-FileHash dist\yt-dlp.zip -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp.zip
run: echo "::set-output name=sha256_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_win.zip
id: sha512_win_zip
run: echo "::set-output name=sha512_win_zip::$((Get-FileHash dist\yt-dlp.zip -Algorithm SHA512).Hash.ToLower())"
run: echo "::set-output name=sha512_win_zip::$((Get-FileHash dist\yt-dlp_win.zip -Algorithm SHA512).Hash.ToLower())"
- name: Run py2exe Script
run: python setup.py py2exe
- name: Upload yt-dlp_min.exe Windows binary
id: upload-release-windows-py2exe
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.build_unix.outputs.upload_url }}
asset_path: ./dist/yt-dlp.exe
asset_name: yt-dlp_min.exe
asset_content_type: application/vnd.microsoft.portable-executable
- name: Get SHA2-256SUMS for yt-dlp_min.exe
id: sha256_py2exe
run: echo "::set-output name=sha256_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA256).Hash.ToLower())"
- name: Get SHA2-512SUMS for yt-dlp_min.exe
id: sha512_py2exe
run: echo "::set-output name=sha512_py2exe::$((Get-FileHash dist\yt-dlp.exe -Algorithm SHA512).Hash.ToLower())"
build_windows32:
runs-on: windows-latest
needs: [build_unix, build_windows]
needs: build_unix
outputs:
sha256_win32: ${{ steps.sha256_win32.outputs.sha256_win32 }}
@@ -217,15 +322,18 @@ jobs:
with:
python-version: '3.7'
architecture: 'x86'
- name: Upgrade pip and enable wheel support
run: python -m pip install --upgrade pip setuptools wheel
- name: Install Requirements
run: pip install "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
run: |
python -m pip install --upgrade pip setuptools wheel
pip install "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-4.5.1-py3-none-any.whl" mutagen pycryptodomex websockets
- name: Bump version
id: bump_version
run: python devscripts/update-version.py
- name: Print version
run: echo "${{ steps.bump_version.outputs.ytdlp_version }}"
env:
version_suffix: ${{ needs.build_unix.outputs.version_suffix }}
run: python devscripts/update-version.py ${{ env.version_suffix }}
- name: Build lazy extractors
id: lazy_extractors
run: python devscripts/make_lazy_extractors.py
- name: Run PyInstaller Script for 32 Bit
run: python pyinst.py
- name: Upload Executable yt-dlp_x86.exe
@@ -247,22 +355,28 @@ jobs:
finish:
runs-on: ubuntu-latest
needs: [build_unix, build_windows, build_windows32]
needs: [build_unix, build_windows, build_windows32, build_macos]
steps:
- name: Make SHA2-256SUMS file
env:
SHA256_WIN: ${{ needs.build_windows.outputs.sha256_win }}
SHA256_WIN_ZIP: ${{ needs.build_windows.outputs.sha256_win_zip }}
SHA256_WIN32: ${{ needs.build_windows32.outputs.sha256_win32 }}
SHA256_BIN: ${{ needs.build_unix.outputs.sha256_bin }}
SHA256_TAR: ${{ needs.build_unix.outputs.sha256_tar }}
SHA256_WIN: ${{ needs.build_windows.outputs.sha256_win }}
SHA256_PY2EXE: ${{ needs.build_windows.outputs.sha256_py2exe }}
SHA256_WIN_ZIP: ${{ needs.build_windows.outputs.sha256_win_zip }}
SHA256_WIN32: ${{ needs.build_windows32.outputs.sha256_win32 }}
SHA256_MACOS: ${{ needs.build_macos.outputs.sha256_macos }}
SHA256_MACOS_ZIP: ${{ needs.build_macos.outputs.sha256_macos_zip }}
run: |
echo "${{ env.SHA256_WIN }} yt-dlp.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN32 }} yt-dlp_x86.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_BIN }} yt-dlp" >> SHA2-256SUMS
echo "${{ env.SHA256_TAR }} yt-dlp.tar.gz" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN_ZIP }} yt-dlp.zip" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN }} yt-dlp.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_PY2EXE }} yt-dlp_min.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN32 }} yt-dlp_x86.exe" >> SHA2-256SUMS
echo "${{ env.SHA256_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-256SUMS
echo "${{ env.SHA256_MACOS }} yt-dlp_macos" >> SHA2-256SUMS
echo "${{ env.SHA256_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-256SUMS
- name: Upload 256SUMS file
id: upload-sums
uses: actions/upload-release-asset@v1
@@ -275,17 +389,23 @@ jobs:
asset_content_type: text/plain
- name: Make SHA2-512SUMS file
env:
SHA512_WIN: ${{ needs.build_windows.outputs.sha512_win }}
SHA512_WIN_ZIP: ${{ needs.build_windows.outputs.sha512_win_zip }}
SHA512_WIN32: ${{ needs.build_windows32.outputs.sha512_win32 }}
SHA512_BIN: ${{ needs.build_unix.outputs.sha512_bin }}
SHA512_TAR: ${{ needs.build_unix.outputs.sha512_tar }}
SHA512_WIN: ${{ needs.build_windows.outputs.sha512_win }}
SHA512_PY2EXE: ${{ needs.build_windows.outputs.sha512_py2exe }}
SHA512_WIN_ZIP: ${{ needs.build_windows.outputs.sha512_win_zip }}
SHA512_WIN32: ${{ needs.build_windows32.outputs.sha512_win32 }}
SHA512_MACOS: ${{ needs.build_macos.outputs.sha512_macos }}
SHA512_MACOS_ZIP: ${{ needs.build_macos.outputs.sha512_macos_zip }}
run: |
echo "${{ env.SHA512_WIN }} yt-dlp.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN32 }} yt-dlp_x86.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_BIN }} yt-dlp" >> SHA2-512SUMS
echo "${{ env.SHA512_TAR }} yt-dlp.tar.gz" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN_ZIP }} yt-dlp.zip" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN }} yt-dlp.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN_ZIP }} yt-dlp_win.zip" >> SHA2-512SUMS
echo "${{ env.SHA512_PY2EXE }} yt-dlp_min.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_WIN32 }} yt-dlp_x86.exe" >> SHA2-512SUMS
echo "${{ env.SHA512_MACOS }} yt-dlp_macos" >> SHA2-512SUMS
echo "${{ env.SHA512_MACOS_ZIP }} yt-dlp_macos.zip" >> SHA2-512SUMS
- name: Upload 512SUMS file
id: upload-512sums
uses: actions/upload-release-asset@v1
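For contributors who want to sanity-check a release build locally before dispatching this workflow, the per-platform jobs above reduce to roughly the sequence below. This is only a sketch: the pinned PyInstaller wheel URL and the package list are copied from the x86_64 Windows job, and other platforms use different wheels and flags (the macOS job, for instance, must use `/usr/bin/python3` to produce a universal2 binary):

```shell
# Build dependencies (the workflow installs a custom-built PyInstaller wheel)
python -m pip install --upgrade pip setuptools wheel
pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-4.5.1-py3-none-any.whl" \
    mutagen pycryptodomex websockets

# Preparation steps the workflow runs before packaging
python devscripts/update-version.py
python devscripts/make_lazy_extractors.py

# Build the one-file binary (add --onedir for the zipped directory variant)
python pyinst.py
```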


@@ -28,6 +28,6 @@ jobs:
- name: Install flake8
run: pip install flake8
- name: Make lazy extractors
run: python devscripts/make_lazy_extractors.py yt_dlp/extractor/lazy_extractors.py
run: python devscripts/make_lazy_extractors.py
- name: Run flake8
run: flake8 .
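To reproduce this check locally before pushing, something like the following should work (a sketch mirroring the job above, assuming it is run from the repository root):

```shell
pip install flake8
# Generate the lazy-extractor module, then lint the whole tree
python devscripts/make_lazy_extractors.py
flake8 .
```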

.gitignore

@@ -6,41 +6,48 @@ cookies
.netrc
# Downloaded
*.srt
*.ttml
*.sbv
*.vtt
*.flv
*.mp4
*.m4a
*.m4v
*.mp3
*.3gp
*.webm
*.wav
*.annotations.xml
*.ape
*.mkv
*.flac
*.aria2
*.avi
*.swf
*.part
*.part-*
*.ytdl
*.description
*.desktop
*.dump
*.flac
*.flv
*.frag
*.frag.urls
*.aria2
*.swp
*.info.json
*.jpeg
*.jpg
*.live_chat.json
*.m4a
*.m4v
*.mhtml
*.mkv
*.mov
*.mp3
*.mp4
*.ogg
*.opus
*.info.json
*.live_chat.json
*.jpg
*.jpeg
*.part
*.part-*
*.png
*.sbv
*.srt
*.swf
*.swp
*.ttml
*.unknown_video
*.url
*.vtt
*.wav
*.webloc
*.webm
*.webp
*.annotations.xml
*.description
*.ytdl
.cache/
# Allow config/media files in testdata
!test/**


@@ -105,10 +105,22 @@ ### Is anyone going to need the feature?
### Is your question about yt-dlp?
Some bug reports are completely unrelated to yt-dlp and relate to a different, or even the reporter's own, application. Please make sure that you are actually using yt-dlp. If you are using a UI for yt-dlp, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for yt-dlp fails in some way you believe is related to yt-dlp, by all means, go ahead and report the bug.
Some bug reports are completely unrelated to yt-dlp and relate to a different, or even the reporter's own, application. Please make sure that you are actually using yt-dlp. If you are using a UI for yt-dlp, report the bug to the maintainer of the actual application providing the UI. In general, if you are unable to provide the verbose log, you should not be opening the issue here.
If the issue is with `youtube-dl` (the upstream fork of yt-dlp) and not with yt-dlp, the issue should be raised in the youtube-dl project.
### Are you willing to share account details if needed?
The maintainers and potential contributors of the project often do not have an account for the website you are asking support for. So any developer interested in solving your issue may ask you for account details. It is your personal discretion whether you are willing to share the account in order for the developer to try and solve your issue. However, if you are unwilling or unable to provide details, they obviously cannot work on the issue and it cannot be solved unless some developer who both has an account and is willing/able to contribute decides to solve it.
By sharing an account with anyone, you agree to bear all risks associated with it. The maintainers and yt-dlp can't be held responsible for any misuse of the credentials.
While these steps won't necessarily ensure that no misuse of the account takes place, these are still some good practices to follow.
- Look for people with `Member` (maintainers of the project) or `Contributor` (people who have previously contributed code) tag on their messages.
- Change the password before sharing the account to something random (use [this](https://passwordsgenerator.net/) if you don't have a random password generator).
- Change the password after receiving the account back.
@@ -136,7 +148,7 @@ ## Adding new feature or making overarching changes
Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.
The same applies for overarching changes to the architecture, documentation or code style
The same applies for changes to the documentation, code style, or overarching changes to the architecture
## Adding support for a new site
@@ -197,7 +209,7 @@ ## Adding support for a new site
```
1. Add an import in [`yt_dlp/extractor/extractors.py`](yt_dlp/extractor/extractors.py).
1. Run `python test/test_download.py TestDownload.test_YourExtractor`. This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the purticular test is disabled from running.
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L91-L426). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
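A minimal sketch of those last two steps for a single new extractor; the module name `yourextractor` and the test name are placeholders, not real files in the repository:

```shell
# Run the download test(s) for the new extractor...
python test/test_download.py TestDownload.test_YourExtractor
# ...and lint only the new file
flake8 yt_dlp/extractor/yourextractor.py
```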


@@ -125,3 +125,33 @@ jfogelman
timethrow
sarnoud
Bojidarist
18928172992817182/gustaf
nixklai
smplayer-dev
Zirro
CrypticSignal
flashdagger
fractalf
frafra
kaz-us
ozburo
rhendric
sdomi
selfisekai
stanoarn
0xA7404A/Aurora
4a1e2y5
aarubui
chio0hai
cntrl-s
Deer-Spangle
DEvmIb
Grabien
j54vc1bk
mpeter50
mrpapersonic
pabs3
staubichsauger
xenova
Yakabuff
zulaport


@@ -5,14 +5,283 @@ # Instructions for creating a release
* Run `make doc`
* Update Changelog.md and CONTRIBUTORS
* Change "Merged with ytdl" version in Readme.md if needed
* Add new/fixed extractors in "new features" section of Readme.md
* Commit as `Release <version>`
* Push to origin/release using `git push origin master:release`
build task will now run
* Change "Based on ytdl" version in Readme.md if needed
* Commit as `Release <version>` and push to master
* Dispatch the workflow https://github.com/yt-dlp/yt-dlp/actions/workflows/build.yml on master
-->
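A condensed sketch of the release steps listed above as shell commands. The version string is only an example, the manual editing steps are shown as comments, and dispatching the workflow is normally done from the GitHub Actions web UI (the `gh` CLI invocation is just one possible alternative):

```shell
make doc
# ...update Changelog.md, CONTRIBUTORS and the "Based on ytdl" version in README.md by hand...
git commit -am "Release 2021.12.01"
git push origin master
# Dispatch the build workflow on master, e.g. via the GitHub CLI:
gh workflow run build.yml --ref master
```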
### 2021.12.01
* **Add option `--wait-for-video` to wait for scheduled streams**
* Add option `--break-per-input` to apply --break-on... to each input URL
* Add option `--embed-info-json` to embed info.json in mkv
* Add compat-option `embed-metadata`
* Allow using a custom format selector through API
* [AES] Add ECB mode by [nao20010128nao](https://github.com/nao20010128nao)
* [build] Fix MacOS Build
* [build] Save Git HEAD at release alongside version info
* [build] Use `workflow_dispatch` for release
* [downloader/ffmpeg] Fix for direct videos inside mpd manifests
* [downloader] Add colors to download progress
* [EmbedSubtitles] Slightly relax duration check and related cleanup
* [ExtractAudio] Fix conversion to `wav` and `vorbis`
* [ExtractAudio] Support `alac`
* [extractor] Extract `average_rating` from JSON-LD
* [FixupM3u8] Fixup MPEG-TS in MP4 container
* [generic] Support mpd manifests without extension by [shirt](https://github.com/shirt-dev)
* [hls] Better FairPlay DRM detection by [nyuszika7h](https://github.com/nyuszika7h)
* [jsinterp] Fix splice to handle float (for youtube js player f1ca6900)
* [utils] Allow alignment in `render_table` and add tests
* [utils] Fix `PagedList`
* [utils] Fix error when copying `LazyList`
* Clarify video/audio-only formats in -F
* Ensure directory exists when checking formats
* Ensure path for link files exists by [Zirro](https://github.com/Zirro)
* Ensure same config file is not loaded multiple times
* Fix `postprocessor_hooks`
* Fix `--break-on-archive` when pre-checking
* Fix `--check-formats` for `mhtml`
* Fix `--load-info-json` of playlists with failed entries
* Fix `--trim-filename` when filename has `.`
* Fix bug in parsing `--add-header`
* Fix error in `report_unplayable_conflict` by [shirt](https://github.com/shirt-dev)
* Fix writing playlist infojson with `--no-clean-infojson`
* Validate --get-bypass-country
* [blogger] Add extractor by [pabs3](https://github.com/pabs3)
* [breitbart] Add extractor by [Grabien](https://github.com/Grabien)
* [CableAV] Add extractor by [j54vc1bk](https://github.com/j54vc1bk)
* [CanalAlpha] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [CozyTV] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [CPTwentyFour] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [DiscoveryPlus] Add `DiscoveryPlusItalyShowIE` by [Ashish0804](https://github.com/Ashish0804)
* [ESPNCricInfo] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [LinkedIn] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [mixch] Add extractor by [nao20010128nao](https://github.com/nao20010128nao)
* [nebula] Add `NebulaCollectionIE` and rewrite extractor by [hheimbuerger](https://github.com/hheimbuerger)
* [OneFootball] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [peer.tv] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [radiozet] Add extractor by [0xA7404A](https://github.com/0xA7404A) (Aurora)
* [redgifs] Add extractor by [chio0hai](https://github.com/chio0hai)
* [RedGifs] Add Search and User extractors by [Deer-Spangle](https://github.com/Deer-Spangle)
* [rtrfm] Add extractor by [pabs3](https://github.com/pabs3)
* [Streamff] Add extractor by [cntrl-s](https://github.com/cntrl-s)
* [Stripchat] Add extractor by [zulaport](https://github.com/zulaport)
* [Aljazeera] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [AmazonStoreIE] Fix regex to not match vdp urls by [Ashish0804](https://github.com/Ashish0804)
* [ARDBetaMediathek] Handle new URLs
* [bbc] Get all available formats by [nyuszika7h](https://github.com/nyuszika7h)
* [Bilibili] Fix title extraction by [u-spec-png](https://github.com/u-spec-png)
* [CBC Gem] Fix for shows that don't have all seasons by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [curiositystream] Add more metadata
* [CuriosityStream] Fix series
* [DiscoveryPlus] Rewrite extractors by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [HotStar] Set language field from tags by [Ashish0804](https://github.com/Ashish0804)
* [instagram, cleanup] Refactor extractors
* [Instagram] Display more login errors by [MinePlayersPE](https://github.com/MinePlayersPE)
* [itv] Fix extractor by [staubichsauger](https://github.com/staubichsauger), [pukkandan](https://github.com/pukkandan)
* [mediaklikk] Expand valid URL
* [MTV] Improve mgid extraction by [Sipherdrakon](https://github.com/Sipherdrakon), [kikuyan](https://github.com/kikuyan)
* [nexx] Better error message for unsupported format
* [NovaEmbed] Fix extractor by [pukkandan](https://github.com/pukkandan), [std-move](https://github.com/std-move)
* [PatreonUser] Do not capture RSS URLs
* [Reddit] Add support for 1080p videos by [xenova](https://github.com/xenova)
* [RoosterTeethSeries] Fix for multiple pages by [MinePlayersPE](https://github.com/MinePlayersPE)
* [sbs] Fix for movies and livestreams
* [Senate.gov] Add SenateGovIE and fix SenateISVPIE by [Grabien](https://github.com/Grabien), [pukkandan](https://github.com/pukkandan)
* [soundcloud:search] Fix pagination
* [tiktok:user] Set `webpage_url` correctly
* [Tokentube] Fix description by [u-spec-png](https://github.com/u-spec-png)
* [trovo] Fix extractor by [nyuszika7h](https://github.com/nyuszika7h)
* [tv2] Expand valid URL
* [Tvplayhome] Fix extractor by [pukkandan](https://github.com/pukkandan), [18928172992817182](https://github.com/18928172992817182)
* [Twitch:vod] Add chapters by [mpeter50](https://github.com/mpeter50)
* [twitch:vod] Extract live status by [DEvmIb](https://github.com/DEvmIb)
* [VidLii] Add 720p support by [mrpapersonic](https://github.com/mrpapersonic)
* [vimeo] Add fallback for config URL
* [vimeo] Sort http formats higher
* [WDR] Expand valid URL
* [willow] Add extractor by [aarubui](https://github.com/aarubui)
* [xvideos] Detect embed URLs by [4a1e2y5](https://github.com/4a1e2y5)
* [xvideos] Fix extractor by [Yakabuff](https://github.com/Yakabuff)
* [youtube, cleanup] Reorganize Tab and Search extractor inheritances
* [youtube:search_url] Add playlist/channel support
* [youtube] Add `default` player client by [coletdjnz](https://github.com/coletdjnz)
* [youtube] Add storyboard formats
* [youtube] Decrypt n-sig for URLs with `ratebypass`
* [youtube] Minor improvement to format sorting
* [cleanup] Add deprecation warnings
* [cleanup] Minor cleanup
* [cleanup] Misc cleanup
* [cleanup] Refactor `JSInterpreter._seperate`
* [Cleanup] Remove some unnecessary groups in regexes by [Ashish0804](https://github.com/Ashish0804)
### 2021.11.10.1
* Temporarily disable MacOS Build
### 2021.11.10
* [youtube] **Fix throttling by decrypting n-sig**
* Merging extractors from [haruhi-dl](https://git.sakamoto.pl/laudom/haruhi-dl) by [selfisekai](https://github.com/selfisekai)
* [extractor] Add `_search_nextjs_data`
* [tvp] Fix extractors
* [tvp] Add TVPStreamIE
* [wppilot] Add extractors
* [polskieradio] Add extractors
* [radiokapital] Add extractors
* [polsatgo] Add extractor by [selfisekai](https://github.com/selfisekai), [sdomi](https://github.com/sdomi)
* Separate `--check-all-formats` from `--check-formats`
* Approximate filesize from bitrate
* Don't create console in `windows_enable_vt_mode`
* Fix bug in `--load-infojson` of playlists
* [minicurses] Add colors to `-F` and standardize color-printing code
* [outtmpl] Add type `link` for internet shortcut files
* [outtmpl] Add alternate forms for `q` and `j`
* [outtmpl] Do not traverse `None`
* [fragment] Fix progress display in fragmented downloads
* [downloader/ffmpeg] Fix vtt download with ffmpeg
* [ffmpeg] Detect presence of setts and libavformat version
* [ExtractAudio] Rescale `--audio-quality` correctly by [CrypticSignal](https://github.com/CrypticSignal), [pukkandan](https://github.com/pukkandan)
* [ExtractAudio] Use `libfdk_aac` if available by [CrypticSignal](https://github.com/CrypticSignal)
* [FormatSort] `eac3` is better than `ac3`
* [FormatSort] Fix some fields' defaults
* [generic] Detect more json_ld
* [generic] parse jwplayer with only the json URL
* [extractor] Add keyword automatically to SearchIE descriptions
* [extractor] Fix some errors being converted to `ExtractorError`
* [utils] Add `join_nonempty`
* [utils] Add `jwt_decode_hs256` by [Ashish0804](https://github.com/Ashish0804)
* [utils] Create `DownloadCancelled` exception
* [utils] Parse `vp09` as vp9
* [utils] Sanitize URL when determining protocol
* [test/download] Fallback test to `bv`
* [docs] Minor documentation improvements
* [cleanup] Improvements to error and debug messages
* [cleanup] Minor fixes and cleanup
* [3speak] Add extractors by [Ashish0804](https://github.com/Ashish0804)
* [AmazonStore] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gab] Add extractor by [u-spec-png](https://github.com/u-spec-png)
* [mediaset] Add playlist support by [nixxo](https://github.com/nixxo)
* [MLSSoccer] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [N1] Add support for nova.rs by [u-spec-png](https://github.com/u-spec-png)
* [PlanetMarathi] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [RaiplayRadio] Add extractors by [frafra](https://github.com/frafra)
* [roosterteeth] Add series extractor
* [sky] Add `SkyNewsStoryIE` by [ajj8](https://github.com/ajj8)
* [youtube] Fix sorting for some videos
* [youtube] Populate `thumbnail` with the best "known" thumbnail
* [youtube] Refactor itag processing
* [youtube] Remove unnecessary no-playlist warning
* [youtube:tab] Add Invidious list for playlists/channels by [rhendric](https://github.com/rhendric)
* [Bilibili:comments] Fix infinite loop by [u-spec-png](https://github.com/u-spec-png)
* [ceskatelevize] Fix extractor by [flashdagger](https://github.com/flashdagger)
* [Coub] Fix media format identification by [wlritchi](https://github.com/wlritchi)
* [crunchyroll] Add extractor-args `language` and `hardsub`
* [DiscoveryPlus] Allow language codes in URL
* [imdb] Fix thumbnail by [ozburo](https://github.com/ozburo)
* [instagram] Add IOS URL support by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Improve login code by [u-spec-png](https://github.com/u-spec-png)
* [Instagram] Improve metadata extraction by [u-spec-png](https://github.com/u-spec-png)
* [iPrima] Fix extractor by [stanoarn](https://github.com/stanoarn)
* [itv] Add support for ITV News by [ajj8](https://github.com/ajj8)
* [la7] Fix extractor by [nixxo](https://github.com/nixxo)
* [linkedin] Don't login multiple times
* [mtv] Fix some videos by [Sipherdrakon](https://github.com/Sipherdrakon)
* [Newgrounds] Fix description by [u-spec-png](https://github.com/u-spec-png)
* [Nrk] Minor fixes by [fractalf](https://github.com/fractalf)
* [Olympics] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [piksel] Fix sorting
* [twitter] Do not sort by codec
* [viewlift] Add cookie-based login and series support by [Ashish0804](https://github.com/Ashish0804), [pukkandan](https://github.com/pukkandan)
* [vimeo] Detect source extension and misc cleanup by [flashdagger](https://github.com/flashdagger)
* [vimeo] Fix ondemand videos and direct URLs with hash
* [vk] Fix login and add subtitles by [kaz-us](https://github.com/kaz-us)
* [VLive] Add upload_date and thumbnail by [Ashish0804](https://github.com/Ashish0804)
* [VRT] Fix login by [pgaig](https://github.com/pgaig)
* [Vupload] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [wakanim] Add support for MPD manifests by [nyuszika7h](https://github.com/nyuszika7h)
* [wakanim] Detect geo-restriction by [nyuszika7h](https://github.com/nyuszika7h)
* [ZenYandex] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
### 2021.10.22
* [build] Improvements
* Build standalone MacOS packages by [smplayer-dev](https://github.com/smplayer-dev)
* Release windows exe built with `py2exe`
* Enable lazy-extractors in releases.
* Set env var `YTDLP_NO_LAZY_EXTRACTORS` to forcefully disable this (experimental)
* Clean up error reporting in update
* Refactor `pyinst.py`, misc cleanup and improve docs
* [docs] Migrate issues to use forms by [Ashish0804](https://github.com/Ashish0804)
* [downloader] **Fix slow progress hooks**
* This was causing HLS/DASH downloads to be extremely slow in some situations
* [downloader/ffmpeg] Improve simultaneous download and merge
* [EmbedMetadata] Allow overwriting all default metadata with `meta_default` key
* [ModifyChapters] Add ability for `--remove-chapters` to remove sections by timestamp
* [utils] Allow duration strings in `--match-filter`
* Add HDR information to formats
* Add negative option `--no-batch-file` by [Zirro](https://github.com/Zirro)
* Calculate more fields for merged formats
* Do not verify thumbnail URLs unless `--check-formats` is specified
* Don't create console for subprocesses on Windows
* Fix `--restrict-filename` when used with default template
* Fix `check_formats` output being written to stdout when `-qv`
* Fix bug in storyboards
* Fix conflict b/w id and ext in format selection
* Fix verbose head not showing custom configs
* Load archive only after printing verbose head
* Make `duration_string` and `resolution` available in --match-filter
* Re-implement deprecated option `--id`
* Reduce default `--socket-timeout`
* Write verbose header to logger
* [outtmpl] Fix bug in expanding environment variables
* [cookies] Local State should be opened as utf-8
* [extractor,utils] Detect more codecs/mimetypes
* [extractor] Detect `EXT-X-KEY` Apple FairPlay
* [utils] Use `importlib` to load plugins by [sulyi](https://github.com/sulyi)
* [http] Retry on socket timeout and show the last encountered error
* [fragment] Print error message when skipping fragment
* [aria2c] Fix `--skip-unavailable-fragment`
* [SponsorBlock] Obey `extractor-retries` and `sleep-requests`
* [Merger] Do not add `aac_adtstoasc` to non-hls audio
* [ModifyChapters] Do not mutate original chapters by [nihil-admirari](https://github.com/nihil-admirari)
* [devscripts/run_tests] Use markers to filter tests by [sulyi](https://github.com/sulyi)
* [7plus] Add cookie based authentication by [nyuszika7h](https://github.com/nyuszika7h)
* [AdobePass] Fix RCN MSO by [jfogelman](https://github.com/jfogelman)
* [CBC] Fix Gem livestream by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [CBC] Support CBC Gem member content by [makeworld-the-better-one](https://github.com/makeworld-the-better-one)
* [crunchyroll] Add season to flat-playlist
* [crunchyroll] Add support for `beta.crunchyroll` URLs and fix series URLs with language code
* [EUScreen] Add Extractor by [Ashish0804](https://github.com/Ashish0804)
* [Gronkh] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [hidive] Fix typo
* [Hotstar] Mention Dynamic Range in `format_id` by [Ashish0804](https://github.com/Ashish0804)
* [Hotstar] Raise appropriate error for DRM
* [instagram] Add login by [u-spec-png](https://github.com/u-spec-png)
* [instagram] Show appropriate error when login is needed
* [microsoftstream] Add extractor by [damianoamatruda](https://github.com/damianoamatruda), [nixklai](https://github.com/nixklai)
* [on24] Add extractor by [damianoamatruda](https://github.com/damianoamatruda)
* [patreon] Fix vimeo player regex by [zenerdi0de](https://github.com/zenerdi0de)
* [SkyNewsAU] Add extractor by [Ashish0804](https://github.com/Ashish0804)
* [tagesschau] Fix extractor by [u-spec-png](https://github.com/u-spec-png)
* [tbs] Add tbs live streams by [llacb47](https://github.com/llacb47)
* [tiktok] Fix typo and update tests
* [trovo] Support channel clips and VODs by [Ashish0804](https://github.com/Ashish0804)
* [Viafree] Add support for Finland by [18928172992817182](https://github.com/18928172992817182)
* [vimeo] Fix embedded `player.vimeo`
* [vlive:channel] Fix extraction by [kikuyan](https://github.com/kikuyan), [pukkandan](https://github.com/pukkandan)
* [youtube] Add auto-translated subtitles
* [youtube] Expose different formats with same itag
* [youtube:comments] Fix for new layout by [coletdjnz](https://github.com/coletdjnz)
* [cleanup] Cleanup bilibili code by [pukkandan](https://github.com/pukkandan), [u-spec-png](https://github.com/u-spec-png)
* [cleanup] Remove broken youtube login code
* [cleanup] Standardize timestamp formatting code
* [cleanup] Generalize `getcomments` implementation for extractors
* [cleanup] Simplify search extractors code
* [cleanup] misc
### 2021.10.10
@@ -1205,9 +1474,8 @@ ### 2021.01.05
* Cleaned up the fork for public use
**PS**: All uncredited changes above this point are authored by [pukkandan](https://github.com/pukkandan)
**Note**: All uncredited changes above this point are authored by [pukkandan](https://github.com/pukkandan)
### Unreleased changes in [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc)
* Updated to youtube-dl release 2020.11.26 by [pukkandan](https://github.com/pukkandan)
* Youtube improvements by [pukkandan](https://github.com/pukkandan)
* Implemented all Youtube Feeds (ytfav, ytwatchlater, ytsubs, ythistory, ytrec) and SearchURL
@@ -1230,8 +1498,110 @@ ### Unreleased changes in [blackjack4494/yt-dlc](https://github.com/blackjack449
* [spreaker] fix SpreakerShowIE test URL by [pukkandan](https://github.com/pukkandan)
* [Vlive] Fix playlist handling when downloading a channel by [kyuyeunk](https://github.com/kyuyeunk)
* [tmz] Fix extractor by [diegorodriguezv](https://github.com/diegorodriguezv)
* [ITV] BTCC URL update by [WolfganP](https://github.com/WolfganP)
* [generic] Detect embedded bitchute videos by [pukkandan](https://github.com/pukkandan)
* [generic] Extract embedded youtube and twitter videos by [diegorodriguezv](https://github.com/diegorodriguezv)
* [ffmpeg] Ensure all streams are copied by [pukkandan](https://github.com/pukkandan)
* [embedthumbnail] Fix for os.rename error by [pukkandan](https://github.com/pukkandan)
* make_win.bat: don't use UPX to pack vcruntime140.dll by [jbruchon](https://github.com/jbruchon)
### Changelog of [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc) till release 2020.11.11-3
**Note**: This was constructed from the merge commit messages and may not be entirely accurate
* [bandcamp] fix failing test. remove subclass hack by [insaneracist](https://github.com/insaneracist)
* [bandcamp] restore album downloads by [insaneracist](https://github.com/insaneracist)
* [francetv] fix extractor by [Surkal](https://github.com/Surkal)
* [gdcvault] fix extractor by [blackjack4494](https://github.com/blackjack4494)
* [hotstar] Move to API v1 by [theincognito-inc](https://github.com/theincognito-inc)
* [hrfernsehen] add extractor by [blocktrron](https://github.com/blocktrron)
* [kakao] new apis by [blackjack4494](https://github.com/blackjack4494)
* [la7] fix missing protocol by [nixxo](https://github.com/nixxo)
* [mailru] removed escaped braces, use urljoin, added tests by [nixxo](https://github.com/nixxo)
* [MTV/Nick] universal mgid extractor + fix nick.de feed by [blackjack4494](https://github.com/blackjack4494)
* [mtv] Fix a missing match_id by [nixxo](https://github.com/nixxo)
* [Mtv] updated extractor logic & more by [blackjack4494](https://github.com/blackjack4494)
* [ndr] support Daserste ndr by [blackjack4494](https://github.com/blackjack4494)
* [Netzkino] Only use video id to find metadata by [TobiX](https://github.com/TobiX)
* [newgrounds] fix: video download by [insaneracist](https://github.com/insaneracist)
* [nitter] Add new extractor by [B0pol](https://github.com/B0pol)
* [soundcloud] Resolve audio/x-wav by [tfvlrue](https://github.com/tfvlrue)
* [soundcloud] sets pattern and tests by [blackjack4494](https://github.com/blackjack4494)
* [SouthparkDE/MTV] another mgid extraction (mtv_base) feed url updated by [blackjack4494](https://github.com/blackjack4494)
* [StoryFire] Add new extractor by [sgstair](https://github.com/sgstair)
* [twitch] by [geauxlo](https://github.com/geauxlo)
* [videa] Adapt to updates by [adrianheine](https://github.com/adrianheine)
* [Viki] subtitles, formats by [blackjack4494](https://github.com/blackjack4494)
* [vlive] fix extractor for revamped website by [exwm](https://github.com/exwm)
* [xtube] fix extractor by [insaneracist](https://github.com/insaneracist)
* [youtube] Convert subs when download is skipped by [blackjack4494](https://github.com/blackjack4494)
* [youtube] Fix age gate detection by [random-nick](https://github.com/random-nick)
* [youtube] fix yt-only playback when age restricted/gated - requires cookies by [blackjack4494](https://github.com/blackjack4494)
* [youtube] fix: extract artist metadata from ytInitialData by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: extract mix playlist ids from ytInitialData by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: mix playlist title by [insaneracist](https://github.com/insaneracist)
* [youtube] fix: Youtube Music playlists by [insaneracist](https://github.com/insaneracist)
* [Youtube] Fixed problem with new youtube player by [peet1993](https://github.com/peet1993)
* [zoom] Fix url parsing for url's containing /share/ and dots by [Romern](https://github.com/Romern)
* [zoom] new extractor by [insaneracist](https://github.com/insaneracist)
* abc by [adrianheine](https://github.com/adrianheine)
* Added Comcast_SSO fix by [merval](https://github.com/merval)
* Added DRM logic to brightcove by [merval](https://github.com/merval)
* Added regex for ABC.com site. by [kucksdorfs](https://github.com/kucksdorfs)
* alura by [hugohaa](https://github.com/hugohaa)
* Arbitrary merges by [fstirlitz](https://github.com/fstirlitz)
* ard.py_add_playlist_support by [martin54](https://github.com/martin54)
* Bugfix/youtube/chapters fix extractor by [gschizas](https://github.com/gschizas)
* bugfix_youtube_like_extraction by [RedpointsBots](https://github.com/RedpointsBots)
* Create build workflow by [blackjack4494](https://github.com/blackjack4494)
* deezer by [LucBerge](https://github.com/LucBerge)
* Detect embedded bitchute videos by [pukkandan](https://github.com/pukkandan)
* Don't install tests by [l29ah](https://github.com/l29ah)
* Don't try to embed/convert json subtitles generated by [youtube](https://github.com/youtube) livechat by [pukkandan](https://github.com/pukkandan)
* Doodstream by [sxvghd](https://github.com/sxvghd)
* duboku by [lkho](https://github.com/lkho)
* elonet by [tpikonen](https://github.com/tpikonen)
* ext/remuxe-video by [Zocker1999NET](https://github.com/Zocker1999NET)
* fall-back to the old way to fetch subtitles, if needed by [RobinD42](https://github.com/RobinD42)
* feature_subscriber_count by [RedpointsBots](https://github.com/RedpointsBots)
* Fix external downloader when there is no http_header by [pukkandan](https://github.com/pukkandan)
* Fix issue triggered by [tubeup](https://github.com/tubeup) by [nsapa](https://github.com/nsapa)
* Fix YoutubePlaylistsIE by [ZenulAbidin](https://github.com/ZenulAbidin)
* fix-mitele' by [DjMoren](https://github.com/DjMoren)
* fix/google-drive-cookie-issue by [legraphista](https://github.com/legraphista)
* fix_tiktok by [mervel-mervel](https://github.com/mervel-mervel)
* Fixed problem with JS player URL by [peet1993](https://github.com/peet1993)
* fixYTSearch by [xarantolus](https://github.com/xarantolus)
* FliegendeWurst-3sat-zdf-merger-bugfix-feature
* gilou-bandcamp_update
* implement ThisVid extractor by [rigstot](https://github.com/rigstot)
* JensTimmerman-patch-1 by [JensTimmerman](https://github.com/JensTimmerman)
* Keep download archive in memory for better performance by [jbruchon](https://github.com/jbruchon)
* la7-fix by [iamleot](https://github.com/iamleot)
* magenta by [adrianheine](https://github.com/adrianheine)
* Merge 26564 from [adrianheine](https://github.com/adrianheine)
* Merge code from [ddland](https://github.com/ddland)
* Merge code from [nixxo](https://github.com/nixxo)
* Merge code from [ssaqua](https://github.com/ssaqua)
* Merge code from [zubearc](https://github.com/zubearc)
* mkvthumbnail by [MrDoritos](https://github.com/MrDoritos)
* myvideo_ge by [fonkap](https://github.com/fonkap)
* naver by [SeonjaeHyeon](https://github.com/SeonjaeHyeon)
* ondemandkorea by [julien-hadleyjack](https://github.com/julien-hadleyjack)
* rai-update by [iamleot](https://github.com/iamleot)
* RFC: youtube: Polymer UI and JSON endpoints for playlists by [wlritchi](https://github.com/wlritchi)
* rutv by [adrianheine](https://github.com/adrianheine)
* Sc extractor web auth by [blackjack4494](https://github.com/blackjack4494)
* Switch from binary search tree to Python sets by [jbruchon](https://github.com/jbruchon)
* tiktok by [skyme5](https://github.com/skyme5)
* tvnow by [TinyToweringTree](https://github.com/TinyToweringTree)
* twitch-fix by [lel-amri](https://github.com/lel-amri)
* Twitter shortener by [blackjack4494](https://github.com/blackjack4494)
* Update README.md by [JensTimmerman](https://github.com/JensTimmerman)
* Update to reflect website changes. by [amigatomte](https://github.com/amigatomte)
* use webarchive to fix a dead link in README by [B0pol](https://github.com/B0pol)
* Viki the second by [blackjack4494](https://github.com/blackjack4494)
* wdr-subtitles by [mrtnmtth](https://github.com/mrtnmtth)
* Webpfix by [alexmerkel](https://github.com/alexmerkel)
* Youtube live chat by [siikamiika](https://github.com/siikamiika)

Makefile

@@ -1,4 +1,4 @@
all: yt-dlp doc pypi-files
all: lazy-extractors yt-dlp doc pypi-files
clean: clean-test clean-dist clean-cache
completions: completion-bash completion-fish completion-zsh
doc: README.md CONTRIBUTING.md issuetemplates supportedsites
@@ -15,9 +15,11 @@ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites com
clean-test:
rm -rf *.3gp *.annotations.xml *.ape *.avi *.description *.dump *.flac *.flv *.frag *.frag.aria2 *.frag.urls \
*.info.json *.jpeg *.jpg *.live_chat.json *.m4a *.m4v *.mkv *.mp3 *.mp4 *.ogg *.opus *.part* *.png *.sbv *.srt \
*.swf *.swp *.ttml *.vtt *.wav *.webm *.webp *.ytdl test/testdata/player-*.js
*.swf *.swp *.ttml *.vtt *.wav *.webm *.webp *.mhtml *.mov *.unknown_video *.desktop *.url *.webloc *.ytdl \
test/testdata/player-*.js tmp/
clean-dist:
rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
clean-cache:
find . -name "*.pyc" -o -name "*.class" -delete
@@ -31,7 +33,6 @@ DESTDIR ?= .
BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share
# make_supportedsites.py doesnot work correctly in python2
PYTHON ?= /usr/bin/env python3
# set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
@@ -40,9 +41,9 @@ SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then ech
# set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
install: yt-dlp yt-dlp.1 completions
install -Dm755 yt-dlp $(DESTDIR)$(BINDIR)
install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1
install: lazy-extractors yt-dlp yt-dlp.1 completions
install -Dm755 yt-dlp $(DESTDIR)$(BINDIR)/yt-dlp
install -Dm644 yt-dlp.1 $(DESTDIR)$(MANDIR)/man1/yt-dlp.1
install -Dm644 completions/bash/yt-dlp $(DESTDIR)$(SHAREDIR)/bash-completion/completions/yt-dlp
install -Dm644 completions/zsh/_yt-dlp $(DESTDIR)$(SHAREDIR)/zsh/site-functions/_yt-dlp
install -Dm644 completions/fish/yt-dlp.fish $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
@@ -78,12 +79,13 @@ README.md: yt_dlp/*.py yt_dlp/*/*.py
CONTRIBUTING.md: README.md
$(PYTHON) devscripts/make_contributing.py README.md CONTRIBUTING.md
issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md yt_dlp/version.py
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.md .github/ISSUE_TEMPLATE/1_broken_site.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.md .github/ISSUE_TEMPLATE/2_site_support_request.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.md .github/ISSUE_TEMPLATE/3_site_feature_request.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.md .github/ISSUE_TEMPLATE/4_bug_report.md
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.md .github/ISSUE_TEMPLATE/5_feature_request.md
issuetemplates: devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml yt_dlp/version.py
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml .github/ISSUE_TEMPLATE/1_broken_site.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml .github/ISSUE_TEMPLATE/2_site_support_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml .github/ISSUE_TEMPLATE/3_site_feature_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml .github/ISSUE_TEMPLATE/4_bug_report.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml .github/ISSUE_TEMPLATE/5_feature_request.yml
$(PYTHON) devscripts/make_issue_template.py .github/ISSUE_TEMPLATE_tmpl/6_question.yml .github/ISSUE_TEMPLATE/6_question.yml
supportedsites:
$(PYTHON) devscripts/make_supportedsites.py supportedsites.md

README.md

@@ -22,6 +22,7 @@
* [Differences in default behavior](#differences-in-default-behavior)
* [INSTALLATION](#installation)
* [Update](#update)
* [Release Files](#release-files)
* [Dependencies](#dependencies)
* [Compile](#compile)
* [USAGE AND OPTIONS](#usage-and-options)
@@ -60,26 +61,24 @@
* [Opening an Issue](CONTRIBUTING.md#opening-an-issue)
* [Developer Instructions](CONTRIBUTING.md#developer-instructions)
* [MORE](#more)
</div>
# NEW FEATURES
The major new features from the latest release of [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc) are:
* Based on **youtube-dl 2021.06.06 [commit/379f52a](https://github.com/ytdl-org/youtube-dl/commit/379f52a4954013767219d25099cce9e0f9401961)** and **youtube-dlc 2020.11.11-3 [commit/98e248f](https://github.com/blackjack4494/yt-dlc/commit/98e248faa49e69d795abc60f7cdefcf91e2612aa)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in youtube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
* **[Format Sorting](#sorting-formats)**: The default format sorting options have been changed so that higher resolution and better codecs will now be preferred instead of simply using a larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format` ([examples](#format-selection-examples))
* **Merged with youtube-dl [commit/379f52a](https://github.com/ytdl-org/youtube-dl/commit/379f52a4954013767219d25099cce9e0f9401961)**: (v2021.06.06) You get all the latest features and patches of [youtube-dl](https://github.com/ytdl-org/youtube-dl) in addition to all the features of [youtube-dlc](https://github.com/blackjack4494/yt-dlc)
* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that the NicoNico improvements are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
* **Youtube improvements**:
* All Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`) and private playlists support downloading multiple pages of content
* Search (`ytsearch:`, `ytsearchdate:`), search URLs and in-channel search work
* Mixes support downloading multiple pages of content
* Most (but not all) age-gated content can be downloaded without cookies
* Partial workaround for throttling issue
* Some (but not all) age-gated content can be downloaded without cookies
* Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326)
* Redirect channel's home URL automatically to `/video` to preserve the old behaviour
* `255kbps` audio is extracted (if available) from youtube music when premium cookies are given
* Youtube music Albums, channels etc can be downloaded ([except self-uploaded music](https://github.com/yt-dlp/yt-dlp/issues/723))
@@ -92,9 +91,9 @@ # NEW FEATURES
* **Aria2c with HLS/DASH**: You can use `aria2c` as the external downloader for DASH(mpd) and HLS(m3u8) formats
* **New extractors**: AnimeLab, Philo MSO, Spectrum MSO, SlingTV MSO, Cablevision MSO, RCN MSO, Rcs, Gedi, bitwave.tv, mildom, audius, zee5, mtv.it, wimtv, pluto.tv, niconico users, discoveryplus.in, mediathek, NFHSNetwork, nebula, ukcolumn, whowatch, MxplayerShow, parlview (au), YoutubeWebArchive, fancode, Saitosan, ShemarooMe, telemundo, VootSeries, SonyLIVSeries, HotstarSeries, VidioPremier, VidioLive, RCTIPlus, TBS Live, douyin, pornflip, ParamountPlusSeries, ScienceChannel, Utreon, OpenRec, BandcampMusic, blackboardcollaborate, eroprofile albums, mirrativ, BannedVideo, bilibili categories, Epicon, filmmodu, GabTV, HungamaAlbum, ManotoTV, Niconico search, Patreon User, peloton, ProjectVeritas, radiko, StarTV, tiktok user, Tokentube, voicy, TV2HuSeries, biliintl, 17live, NewgroundsUser, peertube channel/playlist, ZenYandex, CAM4, CGTN, damtomo, gotostage, Koo, Mediaite, Mediaklikk, MuseScore, nzherald, Olympics replay, radlive, SovietsCloset, Streamanity, Theta, Chingari, ciscowebex, Gettr, GoPro, N1, Theta, Veo, Vupload, NovaPlay
* **New and fixed extractors**: Many new extractors have been added and a lot of existing ones have been fixed. See the [changelog](Changelog.md) or the [list of supported sites](supportedsites.md)
* **Fixed/improved extractors**: archive.org, roosterteeth.com, skyit, instagram, itv, SouthparkDe, spreaker, Vlive, akamai, ina, rumble, tennistv, amcnetworks, la7 podcasts, linuxacadamy, nitter, twitcasting, viu, crackle, curiositystream, mediasite, rmcdecouverte, sonyliv, tubi, tenplay, patreon, videa, yahoo, BravoTV, crunchyroll playlist, RTP, viki, Hotstar, vidio, vimeo, mediaset, Mxplayer, nbcolympics, ParamountPlus, Newgrounds, SAML Verizon login, Hungama, afreecatv, aljazeera, ATV, bitchute, camtube, CDA, eroprofile, facebook, HearThisAtIE, iwara, kakao, Motherless, Nova, peertube, pornhub, reddit, tiktok, TV2, TV2Hu, tv5mondeplus, VH1, Viafree, XHamster, 9Now, AnimalPlanet, Arte, CBC, Chingari, comedycentral, DIYNetwork, niconico, dw, funimation, globo, HiDive, NDR, Nuvid, Oreilly, pbs, plutotv, reddit, redtube, soundcloud, SpankBang, VrtNU, bbc, Bilibili, LinkedInLearning, parliamentlive, PolskieRadio, Streamable, vidme, francetv
* **New MSOs**: Philo, Spectrum, SlingTV, Cablevision, RCN
* **Subtitle extraction from manifests**: Subtitles can be extracted from streaming media manifests. See [commit/be6202f](https://github.com/yt-dlp/yt-dlp/commit/be6202f12b97858b9d716e608394b51065d0419f) for details
@@ -104,35 +103,30 @@ # NEW FEATURES
* **Output template improvements**: Output templates can now have date-time formatting, numeric offsets, object traversal etc. See [output template](#output-template) for details. Even more advanced operations can also be done with the help of `--parse-metadata` and `--replace-in-metadata`
* **Other new options**: `--print`, `--sleep-requests`, `--convert-thumbnails`, `--write-link`, `--force-download-archive`, `--force-overwrites`, `--break-on-reject` etc
* **Other new options**: Many new options have been added such as `--print`, `--wait-for-video`, `--sleep-requests`, `--convert-thumbnails`, `--write-link`, `--force-download-archive`, `--force-overwrites`, `--break-on-reject` etc
* **Improvements**: Regex and other operators in `--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection) etc
* **Improvements**: Regex and other operators in `--match-filter`, multiple `--postprocessor-args` and `--downloader-args`, faster archive checking, more [format selection options](#format-selection), merge multi-video/audio etc
* **Plugin extractors**: Extractors can be loaded from an external file. See [plugins](#plugins) for details
* **Plugins**: Extractors and PostProcessors can be loaded from an external file. See [plugins](#plugins) for details
* **Self-updater**: The releases can be updated using `yt-dlp -U`
See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes
**PS**: Some of these changes are already in youtube-dlc, but are still unreleased. See [this](Changelog.md#unreleased-changes-in-blackjack4494yt-dlc) for details
If you are coming from [youtube-dl](https://github.com/ytdl-org/youtube-dl), the number of changes is very large. Compare [options](#options) and [supported sites](supportedsites.md) with youtube-dl's to get an idea of the massive number of features/patches [youtube-dlc](https://github.com/blackjack4494/yt-dlc) has accumulated.
### Differences in default behavior
Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc.
Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc:
* The options `--id`, `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`), no longer work. See [removed options](#Removed) for details
* The options `--auto-number` (`-A`), `--title` (`-t`) and `--literal` (`-l`), no longer work. See [removed options](#Removed) for details
* `avconv` is not supported as an alternative to `ffmpeg`
* The default [output template](#output-template) is `%(title)s [%(id)s].%(ext)s`. There is no real reason for this change. This was changed before yt-dlp was ever made public and now there are no plans to change it back to `%(title)s.%(id)s.%(ext)s`. Instead, you may use `--compat-options filename`
* The default [output template](#output-template) is `%(title)s [%(id)s].%(ext)s`. There is no real reason for this change. This was changed before yt-dlp was ever made public and now there are no plans to change it back to `%(title)s-%(id)s.%(ext)s`. Instead, you may use `--compat-options filename`
* The default [format sorting](#sorting-formats) is different from youtube-dl and prefers higher resolution and better codecs rather than higher bitrates. You can use the `--format-sort` option to change this to any order you prefer, or use `--compat-options format-sort` to use youtube-dl's sorting order
* The default format selector is `bv*+ba/b`. This means that if a combined video + audio format that is better than the best video-only format is found, the former will be preferred. Use `-f bv+ba/b` or `--compat-options format-spec` to revert this
* Unlike youtube-dlc, yt-dlp does not allow merging multiple audio/video streams into one file by default (since this conflicts with the use of `-f bv*+ba`). If needed, this feature must be enabled using `--audio-multistreams` and `--video-multistreams`. You can also use `--compat-options multistreams` to enable both
* `--ignore-errors` is enabled by default. Use `--abort-on-error` or `--compat-options abort-on-error` to abort on errors instead
* When writing metadata files such as thumbnails, description or infojson, the same information (if available) is also written for playlists. Use `--no-write-playlist-metafiles` or `--compat-options no-playlist-metafiles` to not write these files
* `--add-metadata` attaches the `infojson` to `mkv` files in addition to writing the metadata when used with `--write-infojson`. Use `--compat-options no-attach-info-json` to revert this
* `--add-metadata` attaches the `infojson` to `mkv` files in addition to writing the metadata when used with `--write-info-json`. Use `--no-embed-info-json` or `--compat-options no-attach-info-json` to revert this
* Some metadata are embedded into different fields when using `--add-metadata` as compared to youtube-dl. Most notably, `comment` field contains the `webpage_url` and `synopsis` contains the `description`. You can [use `--parse-metadata`](https://github.com/yt-dlp/yt-dlp#modifying-metadata) to modify this to your liking or use `--compat-options embed-metadata` to revert this
* `playlist_index` behaves differently when used with options like `--playlist-reverse` and `--playlist-items`. See [#302](https://github.com/yt-dlp/yt-dlp/issues/302) for details. You can use `--compat-options playlist-index` if you want to keep the earlier behavior
* The output of `-F` is listed in a new format. Use `--compat-options list-formats` to revert this
* All *experiences* of a funimation episode are considered as a single video. This behavior breaks existing archives. Use `--compat-options seperate-video-versions` to extract information from only the default player
@@ -142,7 +136,7 @@ ### Differences in default behavior
* If `ffmpeg` is used as the downloader, the downloading and merging of formats happen in a single step when possible. Use `--compat-options no-direct-merge` to revert this
* Thumbnail embedding in `mp4` is done with mutagen if possible. Use `--compat-options embed-thumbnail-atomicparsley` to force the use of AtomicParsley instead
* Some private fields such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the seperate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this.
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
For ease of use, a few more compat options are available:
* `--compat-options all`: Use all compat options
@@ -151,18 +145,14 @@ ### Differences in default behavior
# INSTALLATION
yt-dlp is not platform specific. So it should work on your Unix box, on Windows or on macOS
You can install yt-dlp using one of the following methods:
* Download the binary from the [latest release](https://github.com/yt-dlp/yt-dlp/releases/latest)
* With Homebrew, `brew install yt-dlp/taps/yt-dlp`
* Use [PyPI package](https://pypi.org/project/yt-dlp): `python3 -m pip install --upgrade yt-dlp`
* Use pip+git: `python3 -m pip install --upgrade git+https://github.com/yt-dlp/yt-dlp.git@release`
* Install master branch: `python3 -m pip install --upgrade git+https://github.com/yt-dlp/yt-dlp`
Note that on some systems, you may need to use `py` or `python` instead of `python3`
### Using the release binary
UNIX users (Linux, macOS, BSD) can also install the [latest release](https://github.com/yt-dlp/yt-dlp/releases/latest) one of the following ways:
You can simply download the [correct binary file](#release-files) for your OS: **[[Windows](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.exe)] [[UNIX-like](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)]**
In UNIX-like OSes (MacOS, Linux, BSD), you can also install the same in one of the following ways:
```
sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
@@ -179,18 +169,70 @@ # INSTALLATION
sudo chmod a+rx /usr/local/bin/yt-dlp
```
macOS or Linux users that are using Homebrew (formerly known as Linuxbrew for Linux users) can also install it by:
PS: The manpages, shell completion files etc. are available in [yt-dlp.tar.gz](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
### With [PIP](https://pypi.org/project/pip)
You can install the [PyPI package](https://pypi.org/project/yt-dlp) with:
```
python3 -m pip install -U yt-dlp
```
You can install without any of the optional dependencies using:
```
python3 -m pip install --no-deps -U yt-dlp
```
If you want to be on the cutting edge, you can also install the master branch with:
```
python3 -m pip install --force-reinstall https://github.com/yt-dlp/yt-dlp/archive/master.zip
```
Note that on some systems, you may need to use `py` or `python` instead of `python3`
### With [Homebrew](https://brew.sh)
macOS or Linux users that are using Homebrew can also install it by:
```
brew install yt-dlp/taps/yt-dlp
```
### UPDATE
You can use `yt-dlp -U` to update if you are using the provided release.
If you are using `pip`, simply re-run the same command that was used to install the program.
If you have installed using Homebrew, run `brew upgrade yt-dlp/taps/yt-dlp`
## UPDATE
You can use `yt-dlp -U` to update if you are [using the provided release](#using-the-release-binary)
### DEPENDENCIES
If you [installed with pip](#with-pip), simply re-run the same command that was used to install the program
If you [installed using Homebrew](#with-homebrew), run `brew upgrade yt-dlp/taps/yt-dlp`
## RELEASE FILES
#### Recommended
File|Description
:---|:---
[yt-dlp](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)|Platform-independent binary. Needs Python (recommended for **UNIX-like systems**)
[yt-dlp.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.exe)|Windows (Win7 SP1+) standalone x64 binary (recommended for **Windows**)
#### Alternatives
File|Description
:---|:---
[yt-dlp_macos](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos)|MacOS (10.15+) standalone executable
[yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Vista SP2+) standalone x86 (32-bit) binary
[yt-dlp_min.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_min.exe)|Windows (Win7 SP1+) standalone x64 binary built with `py2exe`.<br/> Does not contain `pycryptodomex`, needs VC++14
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
[yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
#### Misc
File|Description
:---|:---
[yt-dlp.tar.gz](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)|Source tarball. Also contains manpages, completions, etc
[SHA2-512SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-512SUMS)|GNU-style SHA512 sums
[SHA2-256SUMS](https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS)|GNU-style SHA256 sums
## DEPENDENCIES
Python versions 3.6+ (CPython and PyPy) are supported. Other versions and implementations may or may not work correctly.
<!-- Python 3.5+ uses VC++14 and it is already embedded in the binary created
@@ -200,36 +242,32 @@ ### DEPENDENCIES
While all the other dependencies are optional, `ffmpeg` and `ffprobe` are highly recommended
* [**ffmpeg** and **ffprobe**](https://www.ffmpeg.org) - Required for [merging separate video and audio files](#format-selection) as well as for various [post-processing](#post-processing-options) tasks. License [depends on the build](https://www.ffmpeg.org/legal.html)
* [**mutagen**](https://github.com/quodlibet/mutagen) - For embedding thumbnail in certain formats. Licenced under [GPLv2+](https://github.com/quodlibet/mutagen/blob/master/COPYING)
* [**pycryptodomex**](https://github.com/Legrandin/pycryptodome) - For decrypting AES-128 HLS streams and various other data. Licenced under [BSD2](https://github.com/Legrandin/pycryptodome/blob/master/LICENSE.rst)
* [**websockets**](https://github.com/aaugustin/websockets) - For downloading over websocket. Licenced under [BSD3](https://github.com/aaugustin/websockets/blob/main/LICENSE)
* [**keyring**](https://github.com/jaraco/keyring) - For decrypting cookies of chromium-based browsers on Linux. Licenced under [MIT](https://github.com/jaraco/keyring/blob/main/LICENSE)
* [**AtomicParsley**](https://github.com/wez/atomicparsley) - For embedding thumbnail in mp4/m4a if mutagen is not present. Licenced under [GPLv2+](https://github.com/wez/atomicparsley/blob/master/COPYING)
* [**rtmpdump**](http://rtmpdump.mplayerhq.hu) - For downloading `rtmp` streams. ffmpeg will be used as a fallback. Licenced under [GPLv2+](http://rtmpdump.mplayerhq.hu)
* [**mplayer**](http://mplayerhq.hu/design7/info.html) or [**mpv**](https://mpv.io) - For downloading `rstp` streams. ffmpeg will be used as a fallback. Licenced under [GPLv2+](https://github.com/mpv-player/mpv/blob/master/Copyright)
* [**phantomjs**](https://github.com/ariya/phantomjs) - Used in extractors where javascript needs to be run. Licenced under [BSD3](https://github.com/ariya/phantomjs/blob/master/LICENSE.BSD)
* [**sponskrub**](https://github.com/faissaloo/SponSkrub) - For using the now **deprecated** [sponskrub options](#sponskrub-options). Licenced under [GPLv3+](https://github.com/faissaloo/SponSkrub/blob/master/LICENCE.md)
* [**mutagen**](https://github.com/quodlibet/mutagen) - For embedding thumbnail in certain formats. Licensed under [GPLv2+](https://github.com/quodlibet/mutagen/blob/master/COPYING)
* [**pycryptodomex**](https://github.com/Legrandin/pycryptodome) - For decrypting AES-128 HLS streams and various other data. Licensed under [BSD2](https://github.com/Legrandin/pycryptodome/blob/master/LICENSE.rst)
* [**websockets**](https://github.com/aaugustin/websockets) - For downloading over websocket. Licensed under [BSD3](https://github.com/aaugustin/websockets/blob/main/LICENSE)
* [**keyring**](https://github.com/jaraco/keyring) - For decrypting cookies of chromium-based browsers on Linux. Licensed under [MIT](https://github.com/jaraco/keyring/blob/main/LICENSE)
* [**AtomicParsley**](https://github.com/wez/atomicparsley) - For embedding thumbnail in mp4/m4a if mutagen is not present. Licensed under [GPLv2+](https://github.com/wez/atomicparsley/blob/master/COPYING)
* [**rtmpdump**](http://rtmpdump.mplayerhq.hu) - For downloading `rtmp` streams. ffmpeg will be used as a fallback. Licensed under [GPLv2+](http://rtmpdump.mplayerhq.hu)
* [**mplayer**](http://mplayerhq.hu/design7/info.html) or [**mpv**](https://mpv.io) - For downloading `rtsp` streams. ffmpeg will be used as a fallback. Licensed under [GPLv2+](https://github.com/mpv-player/mpv/blob/master/Copyright)
* [**phantomjs**](https://github.com/ariya/phantomjs) - Used in extractors where javascript needs to be run. Licensed under [BSD3](https://github.com/ariya/phantomjs/blob/master/LICENSE.BSD)
* [**sponskrub**](https://github.com/faissaloo/SponSkrub) - For using the now **deprecated** [sponskrub options](#sponskrub-options). Licensed under [GPLv3+](https://github.com/faissaloo/SponSkrub/blob/master/LICENCE.md)
* Any external downloader that you want to use with `--downloader`
To use or redistribute the dependencies, you must agree to their respective licensing terms.
The windows releases are already built with the python interpreter, mutagen, pycryptodomex and websockets included.
The Windows and MacOS standalone release binaries are already built with the python interpreter, mutagen, pycryptodomex and websockets included.
**Note**: There are some regressions in newer ffmpeg versions that cause various issues when used alongside yt-dlp. Since ffmpeg is such an important dependency, we provide [custom builds](https://github.com/yt-dlp/FFmpeg-Builds/wiki/Latest#latest-autobuilds) with patches for these issues at [yt-dlp/FFmpeg-Builds](https://github.com/yt-dlp/FFmpeg-Builds). See [the readme](https://github.com/yt-dlp/FFmpeg-Builds#patches-applied) for details on the specific issues solved by these builds
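For reference, the optional Python dependencies named above can be installed with pip. This is only an illustrative sketch using the package names listed here; install only what you actually need:
```
python3 -m pip install -U mutagen pycryptodomex websockets keyring
```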
### COMPILE
## COMPILE
**For Windows**:
To build the Windows executable, you must have pyinstaller (and optionally mutagen, pycryptodomex, websockets)
To build the Windows executable, you must have pyinstaller (and optionally mutagen, pycryptodomex, websockets). Once you have all the necessary dependencies installed, (optionally) build lazy extractors using `devscripts/make_lazy_extractors.py`, and then just run `pyinst.py`. The executable will be built for the same architecture (32/64 bit) as the python used to build it.
python3 -m pip install -U -r requirements.txt
Once you have all the necessary dependencies installed, just run `py pyinst.py`. The executable will be built for the same architecture (32/64 bit) as the python used to build it.
You can also build the executable without any version info or metadata by using:
pyinstaller.exe yt_dlp\__main__.py --onefile --name yt-dlp
py -m pip install -U pyinstaller -r requirements.txt
py devscripts/make_lazy_extractors.py
py pyinst.py
Note that pyinstaller [does not support](https://github.com/pyinstaller/pyinstaller#requirements-and-tested-platforms) Python installed from the Windows store without using a virtual environment
@@ -237,7 +275,9 @@ ### COMPILE
You will need the required build tools: `python`, `make` (GNU), `pandoc`, `zip`, `pytest`
Then simply run `make`. You can also run `make yt-dlp` instead to compile only the binary without updating any of the additional files
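For illustration, the Unix build described above boils down to something like the following (a sketch; the targets are the ones defined in the Makefile shown earlier):
```
make            # build yt-dlp along with docs, completions etc.
make yt-dlp     # or build only the binary
```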
**Note**: In either platform, `devscripts\update-version.py` can be used to automatically update the version number
**Note**: In either platform, `devscripts/update-version.py` can be used to automatically update the version number
You can also fork the project on github and run your fork's [build workflow](.github/workflows/build.yml) to automatically build a release
# USAGE AND OPTIONS
@@ -253,7 +293,7 @@ ## General Options:
sure that you have sufficient permissions
(run with sudo if needed)
-i, --ignore-errors Ignore download and postprocessing errors.
The download will be considered successfull
The download will be considered successful
even if the postprocessing fails
--no-abort-on-error Continue with next video on download
errors; e.g. to skip unavailable videos in
@@ -289,6 +329,10 @@ ## General Options:
--flat-playlist Do not extract the videos of a playlist,
only list them
--no-flat-playlist Extract the videos of a playlist
--wait-for-video MIN[-MAX] Wait for scheduled streams to become
available. Pass the minimum number of
seconds (or range) to wait between retries
--no-wait-for-video Do not wait for scheduled streams (default)
--mark-watched Mark videos watched (even with --simulate).
Currently only supported for YouTube
--no-mark-watched Do not mark videos watched (default)
@@ -338,12 +382,11 @@ ## Video Selection:
specify range: "--playlist-items
1-3,7,10-13", it will download the videos
at index 1, 2, 3, 7, 10, 11, 12 and 13
--max-downloads NUMBER Abort after downloading NUMBER files
--min-filesize SIZE Do not download any videos smaller than
SIZE (e.g. 50k or 44.6m)
--max-filesize SIZE Do not download any videos larger than SIZE
(e.g. 50k or 44.6m)
--date DATE Download only videos uploaded in this date.
--date DATE Download only videos uploaded on this date.
The date can be "YYYYMMDD" or in the format
"(now|today)[+-][0-9](day|week|month|year)(s)?"
--datebefore DATE Download only videos uploaded on or before
@@ -380,13 +423,18 @@ ## Video Selection:
--download-archive FILE Download only videos not listed in the
archive file. Record the IDs of all
downloaded videos in it
--no-download-archive Do not use archive file (default)
--max-downloads NUMBER Abort after downloading NUMBER files
--break-on-existing Stop the download process when encountering
a file that is in the archive
--break-on-reject Stop the download process when encountering
a file that has been filtered out
--break-per-input Make --break-on-existing and --break-on-
reject act only on the current input URL
--no-break-per-input --break-on-existing and --break-on-reject
terminate the entire download queue
--skip-playlist-after-errors N Number of allowed failures until the rest
of the playlist is skipped
--no-download-archive Do not use archive file (default)
## Download Options:
-N, --concurrent-fragments N Number of fragments of a dash/hlsnative
@@ -465,6 +513,7 @@ ## Filesystem Options:
stdin), one URL per line. Lines starting
with '#', ';' or ']' are considered as
comments and ignored
--no-batch-file Do not read URLs from batch file (default)
-P, --paths [TYPES:]PATH The paths where the files should be
downloaded. Specify the type of file and
the path separated by a colon ":". All the
@@ -486,9 +535,9 @@ ## Filesystem Options:
filenames
--no-restrict-filenames Allow Unicode characters, "&" and spaces in
filenames (default)
--windows-filenames Force filenames to be windows compatible
--no-windows-filenames Make filenames windows compatible only if
using windows (default)
--windows-filenames Force filenames to be Windows-compatible
--no-windows-filenames Make filenames Windows-compatible only if
using Windows (default)
--trim-filenames LENGTH Limit the filename length (excluding
extension) to the specified number of
characters
@@ -536,8 +585,8 @@ ## Filesystem Options:
--load-info-json FILE JSON file containing the video information
(created with the "--write-info-json"
option)
--cookies FILE File to read cookies from and dump cookie
jar in
--cookies FILE Netscape formatted file to read cookies
from and dump cookie jar in
--no-cookies Do not read/dump cookies from/to file
(default)
--cookies-from-browser BROWSER[:PROFILE]
@@ -584,9 +633,9 @@ ## Verbosity and Simulation Options:
anything to disk
--no-simulate Download the video even if printing/listing
options are used
--ignore-no-formats-error Ignore "No video formats" error. Usefull
for extracting metadata even if the videos
are not actually available for download
--ignore-no-formats-error Ignore "No video formats" error. Useful for
extracting metadata even if the videos are
not actually available for download
(experimental)
--no-ignore-no-formats-error Throw error when no downloadable video
formats are found (default)
@@ -620,7 +669,7 @@ ## Verbosity and Simulation Options:
"postprocess:", or "postprocess-title:".
The video's fields are accessible under the
"info" key and the progress attributes are
accessible under "progress" key. Eg:
accessible under "progress" key. E.g.:
--console-title --progress-template
"download-title:%(info.id)s-%(progress.eta)s"
-v, --verbose Print various debugging information
@@ -633,7 +682,7 @@ ## Verbosity and Simulation Options:
## Workarounds:
--encoding ENCODING Force the specified encoding (experimental)
--no-check-certificate Suppress HTTPS certificate validation
--no-check-certificates Suppress HTTPS certificate validation
--prefer-insecure Use an unencrypted connection to retrieve
information about the video (Currently
supported only for YouTube)
@@ -682,10 +731,12 @@ ## Video Format Options:
containers irrespective of quality
--no-prefer-free-formats Don't give any special preference to free
containers (default)
--check-formats Check that the formats selected are
--check-formats Check that the selected formats are
actually downloadable
--no-check-formats Do not check that the formats selected are
--check-all-formats Check all formats for whether they are
actually downloadable
--no-check-formats Do not check that the formats are actually
downloadable
-F, --list-formats List available formats of each video.
Simulate unless --no-simulate is used
--merge-output-format FORMAT If a merge is required (e.g.
@@ -707,7 +758,7 @@ ## Subtitle Options:
"ass/srt/best"
--sub-langs LANGS Languages of the subtitles to download (can
be regex) or "all" separated by commas.
(Eg: --sub-langs en.*,ja) You can prefix
(Eg: --sub-langs "en.*,ja") You can prefix
the language code with a "-" to exempt it
from the requested languages. (Eg: --sub-
langs all,-live_chat) Use --list-subs for a
@@ -739,9 +790,9 @@ ## Post-Processing Options:
--audio-format FORMAT Specify audio format to convert the audio
to when -x is used. Currently supported
formats are: best (default) or one of
best|aac|flac|mp3|m4a|opus|vorbis|wav
best|aac|flac|mp3|m4a|opus|vorbis|wav|alac
--audio-quality QUALITY Specify ffmpeg audio quality, insert a
value between 0 (better) and 9 (worse) for
value between 0 (best) and 10 (worst) for
VBR or a specific bitrate like 128K
(default 5)
--remux-video FORMAT Remux the video into another container if
@@ -790,15 +841,20 @@ ## Post-Processing Options:
--no-embed-subs Do not embed subtitles (default)
--embed-thumbnail Embed thumbnail in the video as cover art
--no-embed-thumbnail Do not embed thumbnail (default)
--embed-metadata Embed metadata to the video file. Also adds
chapters to file unless --no-add-chapters
is used (Alias: --add-metadata)
--embed-metadata Embed metadata to the video file. Also
embeds chapters/infojson if present unless
--no-embed-chapters/--no-embed-info-json
are used (Alias: --add-metadata)
--no-embed-metadata Do not add metadata to file (default)
(Alias: --no-add-metadata)
--embed-chapters Add chapter markers to the video file
(Alias: --add-chapters)
--no-embed-chapters Do not add chapter markers (default)
(Alias: --no-add-chapters)
--embed-info-json Embed the infojson as an attachment to
mkv/mka video files
--no-embed-info-json Do not embed the infojson as an attachment
to the video file
--parse-metadata FROM:TO Parse additional metadata like title/artist
from other fields; see "MODIFYING METADATA"
for details
@@ -847,7 +903,11 @@ ## Post-Processing Options:
--no-split-chapters Do not split video based on chapters
(default)
--remove-chapters REGEX Remove chapters whose title matches the
given regular expression. This option can
given regular expression. Time ranges
prefixed by a "*" can also be used in place
of chapters to remove the specified range.
Eg: --remove-chapters "*10:15-15:00"
--remove-chapters "intro". This option can
be used multiple times
--no-remove-chapters Do not remove any chapters from the file
(default)
@@ -938,7 +998,7 @@ # CONFIGURATION
* `~/yt-dlp.conf`
* `~/yt-dlp.conf.txt`
`%XDG_CONFIG_HOME%` defaults to `~/.config` if undefined. On windows, `~` points to %HOME% if present, `%USERPROFILE%` (generally `C:\Users\<user name>`) or `%HOMEDRIVE%%HOMEPATH%`.
`%XDG_CONFIG_HOME%` defaults to `~/.config` if undefined. On Windows, `%APPDATA%` generally points to `C:\Users\<user name>\AppData\Roaming` and `~` points to `%HOME%` if present, `%USERPROFILE%` (generally `C:\Users\<user name>`), or `%HOMEDRIVE%%HOMEPATH%`
1. **System Configuration**: `/etc/yt-dlp.conf`
For example, with the following configuration file yt-dlp will always extract the audio, not copy the mtime, use a proxy and save all videos under `YouTube` directory in your home directory:
@@ -960,7 +1020,7 @@ # Save all videos under YouTube directory in your home directory
Note that options in configuration file are just the same options aka switches used in regular command line calls; thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`.
You can use `--ignore-config` if you want to disable all configuration files for a particular yt-dlp run. If `--ignore-config` is found inside any configuration file, no further configuration will be loaded. For example, having the option in the portable configuration file prevents loading of user and system configurations. Additionally, (for backward compatibility) if `--ignore-config` is found inside the system configuration file, the user configuration is not loaded.
You can use `--ignore-config` if you want to disable all configuration files for a particular yt-dlp run. If `--ignore-config` is found inside any configuration file, no further configuration will be loaded. For example, having the option in the portable configuration file prevents loading of home, user, and system configurations. Additionally, (for backward compatibility) if `--ignore-config` is found inside the system configuration file, the user configuration is not loaded.
### Authentication with `.netrc` file
@@ -990,7 +1050,7 @@ # OUTPUT TEMPLATE
The simplest usage of `-o` is not to set any template arguments when downloading a single file, like in `yt-dlp -o funny_video.flv "https://some/video"` (hard-coding file extension like this is _not_ recommended and could break some post-processing).
It may however also contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [python string formatting operations](https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations.
It may however also contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to [Python string formatting operations](https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting). For example, `%(NAME)s` or `%(NAME)05d`. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations.
The field names themselves (the part inside the parenthesis) can also have some special formatting:
1. **Object traversal**: The dictionaries and lists available in metadata can be traversed by using a `.` (dot) separator. You can also do python slicing using `:`. Eg: `%(tags.0)s`, `%(subtitles.en.-1.ext)s`, `%(id.3:7:-1)s`, `%(formats.:.format_id)s`. `%()s` refers to the entire infodict. Note that all the fields that become available using this method are not listed below. Use `-j` to see such fields
@@ -998,7 +1058,7 @@ # OUTPUT TEMPLATE
1. **Date/time Formatting**: Date/time fields can be formatted according to [strftime formatting](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes) by specifying it separated from the field name using a `>`. Eg: `%(duration>%H-%M-%S)s`, `%(upload_date>%Y-%m-%d)s`, `%(epoch-3600>%H-%M-%S)s`
1. **Alternatives**: Alternate fields can be specified separated with a `,`. Eg: `%(release_date>%Y,upload_date>%Y|Unknown)s`
1. **Default**: A literal default value can be specified for when the field is empty using a `|` separator. This overrides `--output-na-template`. Eg: `%(uploader|Unknown)s`
1. **More Conversions**: In addition to the normal format types `diouxXeEfFgGcrs`, `B`, `j`, `l`, `q` can be used for converting to **B**ytes, **j**son, a comma seperated **l**ist (alternate form flag `#` makes it new line `\n` seperated) and a string **q**uoted for the terminal, respectively
1. **More Conversions**: In addition to the normal format types `diouxXeEfFgGcrs`, `B`, `j`, `l`, `q` can be used for converting to **B**ytes, **j**son (flag `#` for pretty-printing), a comma-separated **l**ist (flag `#` for `\n` newline-separated) and a string **q**uoted for the terminal (flag `#` to split a list into different arguments), respectively
1. **Unicode normalization**: The format type `U` can be used for NFC [unicode normalization](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize). The alternate form flag (`#`) changes the normalization to NFD and the conversion flag `+` can be used for NFKC/NFKD compatibility equivalence normalization. Eg: `%(title)+.100U` is NFKC
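The `U` format type corresponds to the standard library's `unicodedata.normalize`. A plain-Python sketch of the difference between NFC and NFKC (illustration only, not yt-dlp code):

```python
import unicodedata

# "áéí 𝐀" written with combining accents and a mathematical bold capital A
decomposed = 'a\u0301e\u0301i\u0301 \U0001D400'
print(unicodedata.normalize('NFC', decomposed))   # áéí 𝐀  (accents composed)
print(unicodedata.normalize('NFKC', decomposed))  # áéí A  (compatibility form maps 𝐀 to A)
```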
To summarize, the general syntax for a field is:
@@ -1006,7 +1066,7 @@ # OUTPUT TEMPLATE
%(name[.keys][addition][>strf][,alternate][|default])[flags][width][.precision][length]type
```
Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`. For example, `-o '%(title)s.%(ext)s' -o 'thumbnail:%(title)s\%(title)s.%(ext)s'` will put the thumbnails in a folder with the same name as the video. If any of the templates (except default) is empty, that type of file will not be written. Eg: `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.
Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `link`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`. For example, `-o '%(title)s.%(ext)s' -o 'thumbnail:%(title)s\%(title)s.%(ext)s'` will put the thumbnails in a folder with the same name as the video. If any of the templates (except default) is empty, that type of file will not be written. Eg: `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.
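When embedding yt-dlp (see "EMBEDDING YT-DLP" below), the same per-file-type templates can be expressed as a dict. This is only a sketch that assumes the `outtmpl` option accepts a dict keyed by file type; verify against `yt_dlp/YoutubeDL.py` if unsure.

```python
# Sketch: per-file-type output templates via the Python API (assumed option shape)
from yt_dlp import YoutubeDL

ydl_opts = {
    'writethumbnail': True,
    'outtmpl': {
        'default': '%(title)s.%(ext)s',
        'thumbnail': '',  # empty template: write thumbnails only for playlists, not for videos
    },
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```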
The available fields are:
@@ -1056,6 +1116,7 @@ # OUTPUT TEMPLATE
- `asr` (numeric): Audio sampling rate in Hertz
- `vbr` (numeric): Average video bitrate in KBit/s
- `fps` (numeric): Frame rate
- `dynamic_range` (string): The dynamic range of the video
- `vcodec` (string): Name of the video codec in use
- `container` (string): Name of the container format
- `filesize` (numeric): The number of bytes, if known in advance
@@ -1126,11 +1187,13 @@ # OUTPUT TEMPLATE
- `category_names` (list): Friendly names of the categories
- `name` (string): Friendly name of the smallest category
Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. Note that some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with placeholder value provided with `--output-na-placeholder` (`NA` by default).
Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. For example for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `yt-dlp test video` and id `BaW_jenozKc`, this will result in a `yt-dlp test video-BaW_jenozKc.mp4` file created in the current directory.
For example for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `yt-dlp test video` and id `BaW_jenozKc`, this will result in a `yt-dlp test video-BaW_jenozKc.mp4` file created in the current directory.
Note that some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with placeholder value provided with `--output-na-placeholder` (`NA` by default).
For numeric sequences you can use numeric related formatting, for example, `%(view_count)05d` will result in a string with view count padded with zeros up to 5 characters, like in `00042`.
**Tip**: Look at the `-j` output to identify which fields are available for the particular URL
For numeric sequences you can use [numeric related formatting](https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting), for example, `%(view_count)05d` will result in a string with view count padded with zeros up to 5 characters, like in `00042`.
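Since these sequences follow Python's printf-style formatting, the basic substitution and zero-padding can be previewed with a plain dict. This is an illustration only; the extended field syntax described above is handled by yt-dlp itself.

```python
# Plain-Python preview of printf-style template fields on a hypothetical info dict
info = {'title': 'yt-dlp test video', 'id': 'BaW_jenozKc', 'ext': 'webm', 'view_count': 42}

print('%(title)s-%(id)s.%(ext)s' % info)  # yt-dlp test video-BaW_jenozKc.webm
print('%(view_count)05d' % info)          # 00042
```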
Output templates can also contain arbitrary hierarchical path, e.g. `-o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s'` which will result in downloading each video in a directory corresponding to this path template. Any missing directory will be automatically created for you.
@@ -1138,7 +1201,7 @@ # OUTPUT TEMPLATE
The current default template is `%(title)s [%(id)s].%(ext)s`.
In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title:
In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded filename to a Windows system or the filename through an 8bit-unsafe channel. In these cases, add the `--restrict-filenames` flag to get a shorter title.
#### Output template and Windows batch files
@@ -1149,11 +1212,14 @@ #### Output template examples
Note that on Windows you need to use double quotes instead of single.
```bash
$ yt-dlp --get-filename -o 'test video.%(ext)s' BaW_jenozKc
test video.webm # Literal name with correct extension
$ yt-dlp --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc
youtube-dl test video ''_ä↭𝕐.mp4 # All kinds of weird characters
youtube-dl test video ''_ä↭𝕐.webm # All kinds of weird characters
$ yt-dlp --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc --restrict-filenames
youtube-dl_test_video_.mp4 # A simple file name
youtube-dl_test_video_.webm # Restricted file name
# Download YouTube playlist videos in separate directory indexed by video order in a playlist
$ yt-dlp -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re
@@ -1179,6 +1245,8 @@ # FORMAT SELECTION
By default, yt-dlp tries to download the best available quality if you **don't** pass any options.
This is generally equivalent to using `-f bestvideo*+bestaudio/best`. However, if multiple audio streams are enabled (`--audio-multistreams`), the default format changes to `-f bestvideo+bestaudio/best`. Similarly, if ffmpeg is unavailable, or if you use yt-dlp to stream to `stdout` (`-o -`), the default becomes `-f best/bestvideo+bestaudio`.
**Deprecation warning**: Latest versions of yt-dlp can stream multiple formats to the stdout simultaneously using ffmpeg. So, in future versions, the default for this will be set to `-f bv*+ba/b` similar to normal downloads. If you want to preserve the `-f b/bv+ba` setting, it is recommended to explicitly specify it in the configuration options.
The general syntax for format selection is `-f FORMAT` (or `--format FORMAT`) where `FORMAT` is a *selector expression*, i.e. an expression that describes format or formats you would like to download.
**tl;dr:** [navigate me to examples](#format-selection-examples).
@@ -1189,19 +1257,19 @@ # FORMAT SELECTION
You can also use special names to select particular edge case formats:
- `all`: Select all formats
- `mergeall`: Select and merge all formats (Must be used with `--audio-multistreams`, `--video-multistreams` or both)
- `b*`, `best*`: Select the best quality format irrespective of whether it contains video or audio
- `w*`, `worst*`: Select the worst quality format irrespective of whether it contains video or audio
- `b`, `best`: Select the best quality format that contains both video and audio. Equivalent to `best*[vcodec!=none][acodec!=none]`
- `all`: Select **all formats** separately
- `mergeall`: Select and **merge all formats** (Must be used with `--audio-multistreams`, `--video-multistreams` or both)
- `b*`, `best*`: Select the best quality format that **contains either** a video or an audio
- `b`, `best`: Select the best quality format that **contains both** video and audio. Equivalent to `best*[vcodec!=none][acodec!=none]`
- `bv`, `bestvideo`: Select the best quality **video-only** format. Equivalent to `best*[acodec=none]`
- `bv*`, `bestvideo*`: Select the best quality format that **contains video**. It may also contain audio. Equivalent to `best*[vcodec!=none]`
- `ba`, `bestaudio`: Select the best quality **audio-only** format. Equivalent to `best*[vcodec=none]`
- `ba*`, `bestaudio*`: Select the best quality format that **contains audio**. It may also contain video. Equivalent to `best*[acodec!=none]`
- `w*`, `worst*`: Select the worst quality format that contains either a video or an audio
- `w`, `worst`: Select the worst quality format that contains both video and audio. Equivalent to `worst*[vcodec!=none][acodec!=none]`
- `bv`, `bestvideo`: Select the best quality video-only format. Equivalent to `best*[acodec=none]`
- `wv`, `worstvideo`: Select the worst quality video-only format. Equivalent to `worst*[acodec=none]`
- `bv*`, `bestvideo*`: Select the best quality format that contains video. It may also contain audio. Equivalent to `best*[vcodec!=none]`
- `wv*`, `worstvideo*`: Select the worst quality format that contains video. It may also contain audio. Equivalent to `worst*[vcodec!=none]`
- `ba`, `bestaudio`: Select the best quality audio-only format. Equivalent to `best*[vcodec=none]`
- `wa`, `worstaudio`: Select the worst quality audio-only format. Equivalent to `worst*[vcodec=none]`
- `ba*`, `bestaudio*`: Select the best quality format that contains audio. It may also contain video. Equivalent to `best*[acodec!=none]`
- `wa*`, `worstaudio*`: Select the worst quality format that contains audio. It may also contain video. Equivalent to `worst*[acodec!=none]`
For example, to download the worst quality video-only format you can use `-f worstvideo`. It is however recommended not to use `worst` and related options. When your format selector is `worst`, the format which is worst in all respects is selected. Most of the time, what you actually want is the video with the smallest filesize instead. So it is generally better to use `-f best -S +size,+br,+res,+fps` instead of `-f worst`. See [sorting formats](#sorting-formats) for more details.
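The same selector expressions can be passed to the Python API through the `format` option. A minimal sketch (the URL is the test video used in the examples elsewhere in this document):

```python
from yt_dlp import YoutubeDL

# "bv*+ba/b": best video (may include audio) + best audio, else the best combined format
with YoutubeDL({'format': 'bv*+ba/b'}) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```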
@@ -1265,18 +1333,19 @@ ## Sorting Formats
- `source`: Preference of the source as given by the extractor
- `proto`: Protocol used for download (`https`/`ftps` > `http`/`ftp` > `m3u8_native`/`m3u8` > `http_dash_segments`> `websocket_frag` > other > `mms`/`rtsp` > unknown > `f4f`/`f4m`)
- `vcodec`: Video Codec (`av01` > `vp9.2` > `vp9` > `h265` > `h264` > `vp8` > `h263` > `theora` > other > unknown)
- `acodec`: Audio Codec (`opus` > `vorbis` > `aac` > `mp4a` > `mp3` > `ac3` > `dts` > other > unknown)
- `acodec`: Audio Codec (`opus` > `vorbis` > `aac` > `mp4a` > `mp3` > `eac3` > `ac3` > `dts` > other > unknown)
- `codec`: Equivalent to `vcodec,acodec`
- `vext`: Video Extension (`mp4` > `webm` > `flv` > other > unknown). If `--prefer-free-formats` is used, `webm` is preferred.
- `aext`: Audio Extension (`m4a` > `aac` > `mp3` > `ogg` > `opus` > `webm` > other > unknown). If `--prefer-free-formats` is used, the order changes to `opus` > `ogg` > `webm` > `m4a` > `mp3` > `aac`.
- `ext`: Equivalent to `vext,aext`
- `filesize`: Exact filesize, if know in advance. This will be unavailable for mu38 and DASH formats.
- `filesize`: Exact filesize, if known in advance
- `fs_approx`: Approximate filesize calculated from the manifests
- `size`: Exact filesize if available, otherwise approximate filesize
- `height`: Height of video
- `width`: Width of video
- `res`: Video resolution, calculated as the smallest dimension.
- `fps`: Framerate of video
- `hdr`: The dynamic range of the video (`DV` > `HDR12` > `HDR10+` > `HDR10` > `HLG` > `SDR`)
- `tbr`: Total average bitrate in KBit/s
- `vbr`: Average video bitrate in KBit/s
- `abr`: Average audio bitrate in KBit/s
@@ -1287,9 +1356,9 @@ ## Sorting Formats
All fields, unless specified otherwise, are sorted in descending order. To reverse this, prefix the field with a `+`. Eg: `+res` prefers the format with the smallest resolution. Additionally, you can suffix a preferred value for the fields, separated by a `:`. Eg: `res:720` prefers larger videos, but no larger than 720p and the smallest video if there are no videos less than 720p. For `codec` and `ext`, you can provide two preferred values, the first for video and the second for audio. Eg: `+codec:avc:m4a` (equivalent to `+vcodec:avc,+acodec:m4a`) sets the video codec preference to `h264` > `h265` > `vp9` > `vp9.2` > `av01` > `vp8` > `h263` > `theora` and audio codec preference to `mp4a` > `aac` > `vorbis` > `opus` > `mp3` > `ac3` > `dts`. You can also make the sorting prefer values nearest to the one provided by using `~` as the delimiter. Eg: `filesize~1G` prefers the format with filesize closest to 1 GiB.
The fields `hasvid` and `ie_pref` are always given highest priority in sorting, irrespective of the user-defined order. This behaviour can be changed by using `--format-sort-force`. Apart from these, the default order used is: `lang,quality,res,fps,codec:vp9.2,size,br,asr,proto,ext,hasaud,source,id`. The extractors may override this default order, but they cannot override the user-provided order.
The fields `hasvid` and `ie_pref` are always given highest priority in sorting, irrespective of the user-defined order. This behaviour can be changed by using `--format-sort-force`. Apart from these, the default order used is: `lang,quality,res,fps,hdr:12,codec:vp9.2,size,br,asr,proto,ext,hasaud,source,id`. The extractors may override this default order, but they cannot override the user-provided order.
Note that the default has `codec:vp9.2`; i.e. `av1` is not prefered
Note that the default has `codec:vp9.2`; i.e. `av1` is not preferred. Similarly, the default for hdr is `hdr:12`; i.e. Dolby Vision is not preferred. These choices are made since DV and AV1 formats are not yet fully compatible with most devices. This may be changed in the future as more devices become capable of smoothly playing back these formats.
If your format selector is `worst`, the last item is selected after sorting. This means it will select the format that is worst in all respects. Most of the time, what you actually want is the video with the smallest filesize instead. So it is generally better to use `-f best -S +size,+br,+res,+fps`.
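The same smallest-file preference can be set when embedding. This sketch assumes the `format_sort` option key is the API counterpart of the `-S` flag; check `yt_dlp/YoutubeDL.py` if unsure.

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    'format': 'best',
    # Assumption: `format_sort` mirrors "-S +size,+br,+res,+fps"
    'format_sort': ['+size', '+br', '+res', '+fps'],
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```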
@@ -1421,7 +1490,7 @@ # preferring better codec and then larger total bitrate for the same resolution
# MODIFYING METADATA
The metadata obtained the the extractors can be modified by using `--parse-metadata` and `--replace-in-metadata`
The metadata obtained by the extractors can be modified by using `--parse-metadata` and `--replace-in-metadata`
`--replace-in-metadata FIELDS REGEX REPLACE` is used to replace text in any metadata field using [python regular expression](https://docs.python.org/3/library/re.html#regular-expression-syntax). [Backreferences](https://docs.python.org/3/library/re.html?highlight=backreferences#re.sub) can be used in the replace string for advanced use.
@@ -1431,7 +1500,7 @@ # MODIFYING METADATA
This option also has a few special uses:
* You can download an additional URL based on the metadata of the currently downloaded video. To do this, set the field `additional_urls` to the URL that you want to download. Eg: `--parse-metadata "description:(?P<additional_urls>https?://www\.vimeo\.com/\d+)"` will download the first vimeo video found in the description
* You can use this to change the metadata that is embedded in the media file. To do this, set the value of the corresponding field with a `meta_` prefix. For example, any value you set to `meta_description` field will be added to the `description` field in the file. For example, you can use this to set a different "description" and "synopsis"
* You can use this to change the metadata that is embedded in the media file. To do this, set the value of the corresponding field with a `meta_` prefix. For example, any value you set to `meta_description` field will be added to the `description` field in the file. For example, you can use this to set a different "description" and "synopsis". Any value set to the `meta_` field will overwrite all default values.
For reference, these are the fields yt-dlp adds by default to the file metadata:
@@ -1472,6 +1541,9 @@ # Set title as "Series name S01E05"
# Set "comment" field in video metadata using description instead of webpage_url
$ yt-dlp --parse-metadata 'description:(?s)(?P<meta_comment>.+)' --add-metadata
# Remove "formats" field from the infojson by setting it to an empty string
$ yt-dlp --parse-metadata ':(?P<formats>)' -j
# Replace all spaces and "_" in title and uploader with a `-`
$ yt-dlp --replace-in-metadata 'title,uploader' '[ _]' '-'
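The `FROM:TO` patterns are ordinary Python regular expressions with named groups. A plain-Python sketch of what the `additional_urls` example above matches, using a hypothetical description string (illustration of the regex only, not yt-dlp's metadata pipeline):

```python
import re

description = 'Full cut: https://www.vimeo.com/123456789'  # hypothetical example text
match = re.search(r'(?P<additional_urls>https?://www\.vimeo\.com/\d+)', description)
if match:
    print(match.group('additional_urls'))  # https://www.vimeo.com/123456789
```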
@@ -1479,27 +1551,32 @@ # Replace all spaces and "_" in title and uploader with a `-`
# EXTRACTOR ARGUMENTS
Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) seperated string of `ARG=VAL1,VAL2`. Eg: `--extractor-args "youtube:player_client=android_agegate,web;include_live_dash" --extractor-args "funimation:version=uncut"`
Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. Eg: `--extractor-args "youtube:player_client=android_agegate,web;include_live_dash" --extractor-args "funimation:version=uncut"`. A small sketch of this syntax is given after the list of extractors below.
The following extractors use this feature:
* **youtube**
* `skip`: `hls` or `dash` (or both) to skip download of the respective manifests
* `player_client`: Clients to extract video data from. The main clients are `web`, `android`, `ios`, `mweb`. These also have `_music`, `_embedded`, `_agegate`, and `_creator` variants (Eg: `web_embedded`) (`mweb` has only `_agegate`). By default, `android,web` is used, but the agegate and creator variants are added as required for age-gated videos. Similarly the music variants are added for `music.youtube.com` urls. You can also use `all` to use all the clients
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
* `include_live_dash`: Include live dash formats (These formats don't download properly)
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side).
* `max_comments`: Maximum amount of comments to download (default all).
* `max_comment_depth`: Maximum depth for nested comments. YouTube supports depths 1 or 2 (default).
* **youtubetab**
(YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)
* **funimation**
* `language`: Languages to extract. Eg: `funimation:language=english,japanese`
* `version`: The video version to extract - `uncut` or `simulcast`
#### youtube
* `skip`: `hls` or `dash` (or both) to skip download of the respective manifests
* `player_client`: Clients to extract video data from. The main clients are `web`, `android`, `ios`, `mweb`. These also have `_music`, `_embedded`, `_agegate`, and `_creator` variants (Eg: `web_embedded`) (`mweb` has only `_agegate`). By default, `android,web` is used, but the agegate and creator variants are added as required for age-gated videos. Similarly the music variants are added for `music.youtube.com` urls. You can also use `all` to use all the clients, and `default` for the default clients.
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
* `include_live_dash`: Include live dash formats (These formats don't download properly)
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
* `max_comments`: Maximum amount of comments to download (default all)
* `max_comment_depth`: Maximum depth for nested comments. YouTube supports depths 1 or 2 (default)
* **vikiChannel**
* `video_types`: Types of videos to download - one or more of `episodes`, `movies`, `clips`, `trailers`
#### youtubetab (YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)
#### funimation
* `language`: Languages to extract. Eg: `funimation:language=english,japanese`
* `version`: The video version to extract - `uncut` or `simulcast`
#### crunchyroll
* `language`: Languages to extract. Eg: `crunchyroll:language=jaJp`
* `hardsub`: Which hard-sub versions to extract. Eg: `crunchyroll:hardsub=None,enUS`
#### vikichannel
* `video_types`: Types of videos to download - one or more of `episodes`, `movies`, `clips`, `trailers`
NOTE: These options may be changed/removed in the future without concern for backward compatibility
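For reference, the `ARG=VAL1,VAL2` syntax splits on `;` between arguments and on `,` between values. A minimal plain-Python sketch of that parsing (an illustration of the documented syntax only, not yt-dlp's own parser):

```python
# Hypothetical helper illustrating the documented "ARG=VAL1,VAL2;ARG2=..." syntax
def parse_extractor_args(args):
    parsed = {}
    for arg in args.split(';'):
        key, _, values = arg.partition('=')
        parsed[key.strip()] = values.split(',') if values else []
    return parsed

print(parse_extractor_args('player_client=android_agegate,web;include_live_dash'))
# {'player_client': ['android_agegate', 'web'], 'include_live_dash': []}
```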
@@ -1527,22 +1604,20 @@ # EMBEDDING YT-DLP
From a Python program, you can embed yt-dlp in a more powerful fashion, like this:
```python
import yt_dlp
from yt_dlp import YoutubeDL
ydl_opts = {}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
ydl_opts = {'format': 'bestaudio'}
with YoutubeDL(ydl_opts) as ydl:
ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```
Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L154-L452).
Most likely, you'll want to use various options. For a list of options available, have a look at [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py#L162).
Here's a more complete example of a program that outputs only errors (and a short message after the download is finished), converts the video to an mp3 file, implements a custom postprocessor and prints the final info_dict as json:
Here's a more complete example demonstrating various functionality:
```python
import json
import yt_dlp
from yt_dlp.postprocessor.common import PostProcessor
class MyLogger:
@@ -1564,35 +1639,76 @@ # EMBEDDING YT-DLP
print(msg)
class MyCustomPP(PostProcessor):
# See the docstring of yt_dlp.postprocessor.common.PostProcessor
class MyCustomPP(yt_dlp.postprocessor.PostProcessor):
# See docstring of yt_dlp.postprocessor.common.PostProcessor.run
def run(self, info):
self.to_screen('Doing stuff')
return [], info
# See "progress_hooks" in the docstring of yt_dlp.YoutubeDL
def my_hook(d):
if d['status'] == 'finished':
print('Done downloading, now converting ...')
def format_selector(ctx):
""" Select the best video and the best audio that won't result in an mkv.
This is just an example and does not handle all cases """
# formats are already sorted worst to best
formats = ctx.get('formats')[::-1]
# acodec='none' means there is no audio
best_video = next(f for f in formats
if f['vcodec'] != 'none' and f['acodec'] == 'none')
# find compatible audio extension
audio_ext = {'mp4': 'm4a', 'webm': 'webm'}[best_video['ext']]
# vcodec='none' means there is no video
best_audio = next(f for f in formats if (
f['acodec'] != 'none' and f['vcodec'] == 'none' and f['ext'] == audio_ext))
yield {
# These are the minimum required fields for a merged format
'format_id': f'{best_video["format_id"]}+{best_audio["format_id"]}',
'ext': best_video['ext'],
'requested_formats': [best_video, best_audio],
# Must be a "+"-separated list of protocols
'protocol': f'{best_video["protocol"]}+{best_audio["protocol"]}'
}
# See docstring of yt_dlp.YoutubeDL for a description of the options
ydl_opts = {
'format': 'bestaudio/best',
'format': format_selector,
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
# Embed metadata in video using ffmpeg.
# See yt_dlp.postprocessor.FFmpegMetadataPP for the arguments it accepts
'key': 'FFmpegMetadata',
'add_chapters': True,
'add_metadata': True,
}],
'logger': MyLogger(),
'progress_hooks': [my_hook],
}
# Add custom headers
yt_dlp.utils.std_headers.update({'Referer': 'https://www.google.com'})
# See the public functions in yt_dlp.YoutubeDL for other available functions.
# Eg: "ydl.download", "ydl.download_with_info_file"
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
ydl.add_post_processor(MyCustomPP())
info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc')
# ydl.sanitize_info makes the info json-serializable
print(json.dumps(ydl.sanitize_info(info)))
```
See the public functions in [`yt_dlp/YoutubeDL.py`](yt_dlp/YoutubeDL.py) for other available functions. Eg: `ydl.download`, `ydl.download_with_info_file`
**Tip**: If you are porting your code from youtube-dl to yt-dlp, one important point to look out for is that we do not guarantee the return value of `YoutubeDL.extract_info` to be json serializable, or even be a dictionary. It will be dictionary-like, but if you want to ensure it is a serializable dictionary, pass it through `YoutubeDL.sanitize_info` as shown in the example above
# DEPRECATED OPTIONS
@@ -1625,6 +1741,7 @@ #### Not recommended
--print-json -j --no-simulate
--autonumber-size NUMBER Use string formatting. Eg: %(autonumber)03d
--autonumber-start NUMBER Use internal field formatting like %(autonumber+NUMBER)s
--id -o "%(id)s.%(ext)s"
--metadata-from-title FORMAT --parse-metadata "%(title)s:FORMAT"
--hls-prefer-native --downloader "m3u8:native"
--hls-prefer-ffmpeg --downloader "m3u8:ffmpeg"
@@ -1665,7 +1782,7 @@ #### Old aliases
--yes-overwrites --force-overwrites
#### Sponskrub Options
Support for [SponSkrub](https://github.com/faissaloo/SponSkrub) has been deprecated in favor of `--sponsorblock`
Support for [SponSkrub](https://github.com/faissaloo/SponSkrub) has been deprecated in favor of the `--sponsorblock` options
--sponskrub --sponsorblock-mark all
--no-sponskrub --no-sponsorblock
@@ -1691,7 +1808,6 @@ #### No longer supported
#### Removed
These options were deprecated since 2014 and have now been entirely removed
--id -o "%(id)s.%(ext)s"
-A, --auto-number -o "%(autonumber)s-%(id)s.%(ext)s"
-t, --title -o "%(title)s-%(id)s.%(ext)s"
-l, --literal -o accepts literal names


@@ -9,7 +9,7 @@
sys.path.insert(0, dirn(dirn((os.path.abspath(__file__)))))
lazy_extractors_filename = sys.argv[1]
lazy_extractors_filename = sys.argv[1] if len(sys.argv) > 1 else 'yt_dlp/extractor/lazy_extractors.py'
if os.path.exists(lazy_extractors_filename):
os.remove(lazy_extractors_filename)


@@ -29,6 +29,9 @@ def gen_ies_md(ies):
continue
if ie_desc is not None:
ie_md += ': {0}'.format(ie.IE_DESC)
search_key = getattr(ie, 'SEARCH_KEY', None)
if search_key is not None:
ie_md += f'; "{ie.SEARCH_KEY}:" prefix'
if not ie.working():
ie_md += ' (Currently broken)'
yield ie_md


@@ -3,11 +3,11 @@
cd /d %~dp0..
if ["%~1"]==[""] (
set "test_set="
set "test_set="test""
) else if ["%~1"]==["core"] (
set "test_set=-k "not download""
set "test_set="-m not download""
) else if ["%~1"]==["download"] (
set "test_set=-k download"
set "test_set="-m "download""
) else (
echo.Invalid test type "%~1". Use "core" ^| "download"
exit /b 1


@@ -3,12 +3,12 @@
if [ -z $1 ]; then
test_set='test'
elif [ $1 = 'core' ]; then
test_set='not download'
test_set="-m not download"
elif [ $1 = 'download' ]; then
test_set='download'
test_set="-m download"
else
echo 'Invalid test type "'$1'". Use "core" | "download"'
exit 1
fi
python3 -m pytest -k "$test_set"
python3 -m pytest "$test_set"


@@ -1,33 +1,42 @@
#!/usr/bin/env python3
from __future__ import unicode_literals
from datetime import datetime
# import urllib.request
import sys
import subprocess
# response = urllib.request.urlopen('https://blackjack4494.github.io/youtube-dlc/update/LATEST_VERSION')
# old_version = response.read().decode('utf-8')
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
with open('yt_dlp/version.py', 'rt') as f:
exec(compile(f.read(), 'yt_dlp/version.py', 'exec'))
old_version = locals()['__version__']
old_version_list = old_version.split(".", 4)
old_version_list = old_version.split('.')
old_ver = '.'.join(old_version_list[:3])
old_rev = old_version_list[3] if len(old_version_list) > 3 else ''
ver = datetime.utcnow().strftime("%Y.%m.%d")
rev = str(int(old_rev or 0) + 1) if old_ver == ver else ''
rev = (sys.argv[1:] or [''])[0] # Use first argument, if present as revision number
if not rev:
rev = str(int(old_rev or 0) + 1) if old_ver == ver else ''
VERSION = '.'.join((ver, rev)) if rev else ver
# VERSION_LIST = [(int(v) for v in ver.split(".") + [rev or 0])]
try:
sp = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'], stdout=subprocess.PIPE)
GIT_HEAD = sp.communicate()[0].decode().strip() or None
except Exception:
GIT_HEAD = None
VERSION_FILE = f'''
# Autogenerated by devscripts/update-version.py
__version__ = {VERSION!r}
RELEASE_GIT_HEAD = {GIT_HEAD!r}
'''.lstrip()
with open('yt_dlp/version.py', 'wt') as f:
f.write(VERSION_FILE)
print('::set-output name=ytdlp_version::' + VERSION)
file_version_py = open('yt_dlp/version.py', 'rt')
data = file_version_py.read()
data = data.replace(old_version, VERSION)
file_version_py.close()
file_version_py = open('yt_dlp/version.py', 'wt')
file_version_py.write(data)
file_version_py.close()
print(f'\nVersion = {VERSION}, Git HEAD = {GIT_HEAD}')

pyinst.py

@@ -1,75 +1,84 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import unicode_literals
import sys
import os
import platform
import sys
from PyInstaller.utils.hooks import collect_submodules
from PyInstaller.utils.win32.versioninfo import (
VarStruct, VarFileInfo, StringStruct, StringTable,
StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
)
import PyInstaller.__main__
arch = platform.architecture()[0][:2]
assert arch in ('32', '64')
_x86 = '_x86' if arch == '32' else ''
# Compatability with older arguments
opts = sys.argv[1:]
if opts[0:1] in (['32'], ['64']):
if arch != opts[0]:
raise Exception(f'{opts[0]}bit executable cannot be built on a {arch}bit system')
opts = opts[1:]
opts = opts or ['--onefile']
OS_NAME = platform.system()
if OS_NAME == 'Windows':
from PyInstaller.utils.win32.versioninfo import (
VarStruct, VarFileInfo, StringStruct, StringTable,
StringFileInfo, FixedFileInfo, VSVersionInfo, SetVersion,
)
elif OS_NAME == 'Darwin':
pass
else:
raise Exception(f'{OS_NAME} is not supported')
print(f'Building {arch}bit version with options {opts}')
ARCH = platform.architecture()[0][:2]
FILE_DESCRIPTION = 'yt-dlp%s' % (' (32 Bit)' if _x86 else '')
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
VERSION = locals()['__version__']
def main():
opts = parse_options()
version = read_version()
VERSION_LIST = VERSION.split('.')
VERSION_LIST = list(map(int, VERSION_LIST)) + [0] * (4 - len(VERSION_LIST))
suffix = '_macos' if OS_NAME == 'Darwin' else '_x86' if ARCH == '32' else ''
final_file = 'dist/%syt-dlp%s%s' % (
'yt-dlp/' if '--onedir' in opts else '', suffix, '.exe' if OS_NAME == 'Windows' else '')
print('Version: %s%s' % (VERSION, _x86))
print('Remember to update the version using devscipts\\update-version.py')
print(f'Building yt-dlp v{version} {ARCH}bit for {OS_NAME} with options {opts}')
print('Remember to update the version using "devscripts/update-version.py"')
if not os.path.isfile('yt_dlp/extractor/lazy_extractors.py'):
print('WARNING: Building without lazy_extractors. Run '
'"devscripts/make_lazy_extractors.py" to build lazy extractors', file=sys.stderr)
print(f'Destination: {final_file}\n')
VERSION_FILE = VSVersionInfo(
ffi=FixedFileInfo(
filevers=VERSION_LIST,
prodvers=VERSION_LIST,
mask=0x3F,
flags=0x0,
OS=0x4,
fileType=0x1,
subtype=0x0,
date=(0, 0),
),
kids=[
StringFileInfo([
StringTable(
'040904B0', [
StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % _x86),
StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', FILE_DESCRIPTION),
StringStruct('FileVersion', VERSION),
StringStruct('InternalName', 'yt-dlp%s' % _x86),
StringStruct(
'LegalCopyright',
'pukkandan.ytdlp@gmail.com | UNLICENSE',
),
StringStruct('OriginalFilename', 'yt-dlp%s.exe' % _x86),
StringStruct('ProductName', 'yt-dlp%s' % _x86),
StringStruct(
'ProductVersion',
'%s%s on Python %s' % (VERSION, _x86, platform.python_version())),
])]),
VarFileInfo([VarStruct('Translation', [0, 1200])])
opts = [
f'--name=yt-dlp{suffix}',
'--icon=devscripts/logo.ico',
'--upx-exclude=vcruntime140.dll',
'--noconfirm',
*dependancy_options(),
*opts,
'yt_dlp/__main__.py',
]
)
print(f'Running PyInstaller with {opts}')
import PyInstaller.__main__
PyInstaller.__main__.run(opts)
set_version_info(final_file, version)
def parse_options():
# Compatability with older arguments
opts = sys.argv[1:]
if opts[0:1] in (['32'], ['64']):
if ARCH != opts[0]:
raise Exception(f'{opts[0]}bit executable cannot be built on a {ARCH}bit system')
opts = opts[1:]
return opts or ['--onefile']
def read_version():
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
return locals()['__version__']
def version_to_list(version):
version_list = version.split('.')
return list(map(int, version_list)) + [0] * (4 - len(version_list))
def dependancy_options():
dependancies = [pycryptodome_module(), 'mutagen'] + collect_submodules('websockets')
excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
yield from (f'--hidden-import={module}' for module in dependancies)
yield from (f'--exclude-module={module}' for module in excluded_modules)
def pycryptodome_module():
@@ -86,17 +95,41 @@ def pycryptodome_module():
return 'Cryptodome'
dependancies = [pycryptodome_module(), 'mutagen'] + collect_submodules('websockets')
excluded_modules = ['test', 'ytdlp_plugins', 'youtube-dl', 'youtube-dlc']
def set_version_info(exe, version):
if OS_NAME == 'Windows':
windows_set_version(exe, version)
PyInstaller.__main__.run([
'--name=yt-dlp%s' % _x86,
'--icon=devscripts/logo.ico',
*[f'--exclude-module={module}' for module in excluded_modules],
*[f'--hidden-import={module}' for module in dependancies],
'--upx-exclude=vcruntime140.dll',
'--noconfirm',
*opts,
'yt_dlp/__main__.py',
])
SetVersion('dist/%syt-dlp%s.exe' % ('yt-dlp/' if '--onedir' in opts else '', _x86), VERSION_FILE)
def windows_set_version(exe, version):
version_list = version_to_list(version)
suffix = '_x86' if ARCH == '32' else ''
SetVersion(exe, VSVersionInfo(
ffi=FixedFileInfo(
filevers=version_list,
prodvers=version_list,
mask=0x3F,
flags=0x0,
OS=0x4,
fileType=0x1,
subtype=0x0,
date=(0, 0),
),
kids=[
StringFileInfo([StringTable('040904B0', [
StringStruct('Comments', 'yt-dlp%s Command Line Interface.' % suffix),
StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', 'yt-dlp%s' % (' (32 Bit)' if ARCH == '32' else '')),
StringStruct('FileVersion', version),
StringStruct('InternalName', f'yt-dlp{suffix}'),
StringStruct('LegalCopyright', 'pukkandan.ytdlp@gmail.com | UNLICENSE'),
StringStruct('OriginalFilename', f'yt-dlp{suffix}.exe'),
StringStruct('ProductName', f'yt-dlp{suffix}'),
StringStruct(
'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'),
])]), VarFileInfo([VarStruct('Translation', [0, 1200])])
]
))
if __name__ == '__main__':
main()


@@ -16,7 +16,7 @@
exec(compile(open('yt_dlp/version.py').read(), 'yt_dlp/version.py', 'exec'))
DESCRIPTION = 'Command-line program to download videos from YouTube.com and many other other video platforms.'
DESCRIPTION = 'A youtube-dl fork with additional features and patches'
LONG_DESCRIPTION = '\n\n'.join((
'Official repository: <https://github.com/yt-dlp/yt-dlp>',
@@ -29,7 +29,7 @@
if sys.argv[1:2] == ['py2exe']:
import py2exe
warnings.warn(
'Building with py2exe is not officially supported. '
'py2exe builds do not support pycryptodomex and need VC++14 to run. '
'The recommended way is to use "pyinst.py" to build using pyinstaller')
params = {
'console': [{


@@ -48,6 +48,7 @@ # Supported sites
- **Alura**
- **AluraCourse**
- **Amara**
- **AmazonStore**
- **AMCNetworks**
- **AmericasTestKitchen**
- **AmericasTestKitchenSeason**
@@ -127,7 +128,7 @@ # Supported sites
- **BilibiliAudioAlbum**
- **BilibiliChannel**
- **BiliBiliPlayer**
- **BiliBiliSearch**: Bilibili video search, "bilisearch" keyword
- **BiliBiliSearch**: Bilibili video search; "bilisearch:" prefix
- **BiliIntl**
- **BiliIntlSeries**
- **BioBioChileTV**
@@ -140,6 +141,7 @@ # Supported sites
- **BlackboardCollaborate**
- **BleacherReport**
- **BleacherReportCMS**
- **blogger.com**
- **Bloomberg**
- **BokeCC**
- **BongaCams**
@@ -149,6 +151,7 @@ # Supported sites
- **BR**: Bayerischer Rundfunk
- **BravoTV**
- **Break**
- **BreitBart**
- **brightcove:legacy**
- **brightcove:new**
- **BRMediathek**: Bayerischer Rundfunk Mediathek
@@ -157,11 +160,13 @@ # Supported sites
- **BusinessInsider**
- **BuzzFeed**
- **BYUtv**
- **CableAV**
- **CAM4**
- **Camdemy**
- **CamdemyFolder**
- **CamModels**
- **CamWithHer**
- **CanalAlpha**
- **canalc2.tv**
- **Canalplus**: mycanal.fr and piwiplus.fr
- **Canvas**
@@ -184,7 +189,6 @@ # Supported sites
- **CCTV**: 央视网
- **CDA**
- **CeskaTelevize**
- **CeskaTelevizePorady**
- **CGTN**
- **channel9**: Channel 9
- **CharlieRose**
@@ -222,11 +226,15 @@ # Supported sites
- **CONtv**
- **Corus**
- **Coub**
- **CozyTV**
- **cp24**
- **Cracked**
- **Crackle**
- **CrooksAndLiars**
- **crunchyroll**
- **crunchyroll:beta**
- **crunchyroll:playlist**
- **crunchyroll:playlist:beta**
- **CSpan**: C-SPAN
- **CtsNews**: 華視新聞
- **CTV**
@@ -234,7 +242,8 @@ # Supported sites
- **cu.ntv.co.jp**: Nippon Television Network
- **CultureUnplugged**
- **curiositystream**
- **curiositystream:collection**
- **curiositystream:collections**
- **curiositystream:series**
- **CWTV**
- **DagelijkseKost**: dagelijksekost.een.be
- **DailyMail**
@@ -264,6 +273,7 @@ # Supported sites
- **DiscoveryPlus**
- **DiscoveryPlusIndia**
- **DiscoveryPlusIndiaShow**
- **DiscoveryPlusItalyShow**
- **DiscoveryVR**
- **Disney**
- **DIYNetwork**
@@ -313,8 +323,10 @@ # Supported sites
- **Escapist**
- **ESPN**
- **ESPNArticle**
- **ESPNCricInfo**
- **EsriVideo**
- **Europa**
- **EUScreen**
- **EWETV**
- **ExpoTV**
- **Expressen**
@@ -363,6 +375,7 @@ # Supported sites
- **Funk**
- **Fusion**
- **Fux**
- **Gab**
- **GabTV**
- **Gaia**
- **GameInformer**
@@ -394,6 +407,7 @@ # Supported sites
- **Goshgay**
- **GoToStage**
- **GPUTechConf**
- **Gronkh**
- **Groupon**
- **hbo**
- **HearThisAt**
@@ -443,11 +457,13 @@ # Supported sites
- **IndavideoEmbed**
- **InfoQ**
- **Instagram**
- **instagram:tag**: Instagram hashtag search
- **instagram:tag**: Instagram hashtag search URLs
- **instagram:user**: Instagram user profile
- **InstagramIOS**: IOS instagram:// URL
- **Internazionale**
- **InternetVideoArchive**
- **IPrima**
- **IPrimaCNN**
- **iqiyi**: 爱奇艺
- **Ir90Tv**
- **ITTF**
@@ -517,6 +533,7 @@ # Supported sites
- **LineLive**
- **LineLiveChannel**
- **LineTV**
- **LinkedIn**
- **linkedin:learning**
- **linkedin:learning:course**
- **LinuxAcademy**
@@ -556,6 +573,7 @@ # Supported sites
- **MediaKlikk**
- **Medialaan**
- **Mediaset**
- **MediasetShow**
- **Mediasite**
- **MediasiteCatalog**
- **MediasiteNamedCatalog**
@@ -570,6 +588,7 @@ # Supported sites
- **Mgoon**
- **MGTV**: 芒果TV
- **MiaoPai**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: Download a VOD in Mildom
@@ -582,11 +601,13 @@ # Supported sites
- **mirrativ**
- **mirrativ:user**
- **MiTele**: mitele.es
- **mixch**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**
- **MLB**
- **MLBVideo**
- **MLSSoccer**
- **Mnet**
- **MNetTV**
- **MoeVideo**: LetitBit video services: moevideo.net, playreplay.net and videochart.net
@@ -653,6 +674,7 @@ # Supported sites
- **ndr:embed:base**
- **NDTV**
- **Nebula**
- **nebula:collection**
- **NerdCubedFeed**
- **netease:album**: 网易云音乐 - 专辑
- **netease:djradio**: 网易云音乐 - 电台
@@ -686,8 +708,8 @@ # Supported sites
- **niconico**: ニコニコ動画
- **NiconicoPlaylist**
- **NiconicoUser**
- **nicovideo:search**: Nico video searches
- **nicovideo:search:date**: Nico video searches, newest first
- **nicovideo:search**: Nico video search; "nicosearch:" prefix
- **nicovideo:search:date**: Nico video search, newest first; "nicosearchdate:" prefix
- **nicovideo:search_url**: Nico video search URLs
- **Nintendo**
- **Nitter**
@@ -734,7 +756,9 @@ # Supported sites
- **Odnoklassniki**
- **OktoberfestTV**
- **OlympicsReplay**
- **on24**: ON24
- **OnDemandKorea**
- **OneFootball**
- **onet.pl**
- **onet.tv**
- **onet.tv:channel**
@@ -777,6 +801,7 @@ # Supported sites
- **PatreonUser**
- **pbs**: Public Broadcasting Service (PBS) and member stations: PBS: Public Broadcasting Service, APT - Alabama Public Television (WBIQ), GPB/Georgia Public Broadcasting (WGTV), Mississippi Public Broadcasting (WMPN), Nashville Public Television (WNPT), WFSU-TV (WFSU), WSRE (WSRE), WTCI (WTCI), WPBA/Channel 30 (WPBA), Alaska Public Media (KAKM), Arizona PBS (KAET), KNME-TV/Channel 5 (KNME), Vegas PBS (KLVX), AETN/ARKANSAS ETV NETWORK (KETS), KET (WKLE), WKNO/Channel 10 (WKNO), LPB/LOUISIANA PUBLIC BROADCASTING (WLPB), OETA (KETA), Ozarks Public Television (KOZK), WSIU Public Broadcasting (WSIU), KEET TV (KEET), KIXE/Channel 9 (KIXE), KPBS San Diego (KPBS), KQED (KQED), KVIE Public Television (KVIE), PBS SoCal/KOCE (KOCE), ValleyPBS (KVPT), CONNECTICUT PUBLIC TELEVISION (WEDH), KNPB Channel 5 (KNPB), SOPTV (KSYS), Rocky Mountain PBS (KRMA), KENW-TV3 (KENW), KUED Channel 7 (KUED), Wyoming PBS (KCWC), Colorado Public Television / KBDI 12 (KBDI), KBYU-TV (KBYU), Thirteen/WNET New York (WNET), WGBH/Channel 2 (WGBH), WGBY (WGBY), NJTV Public Media NJ (WNJT), WLIW21 (WLIW), mpt/Maryland Public Television (WMPB), WETA Television and Radio (WETA), WHYY (WHYY), PBS 39 (WLVT), WVPT - Your Source for PBS and More! (WVPT), Howard University Television (WHUT), WEDU PBS (WEDU), WGCU Public Media (WGCU), WPBT2 (WPBT), WUCF TV (WUCF), WUFT/Channel 5 (WUFT), WXEL/Channel 42 (WXEL), WLRN/Channel 17 (WLRN), WUSF Public Broadcasting (WUSF), ETV (WRLK), UNC-TV (WUNC), PBS Hawaii - Oceanic Cable Channel 10 (KHET), Idaho Public Television (KAID), KSPS (KSPS), OPB (KOPB), KWSU/Channel 10 & KTNW/Channel 31 (KWSU), WILL-TV (WILL), Network Knowledge - WSEC/Springfield (WSEC), WTTW11 (WTTW), Iowa Public Television/IPTV (KDIN), Nine Network (KETC), PBS39 Fort Wayne (WFWA), WFYI Indianapolis (WFYI), Milwaukee Public Television (WMVS), WNIN (WNIN), WNIT Public Television (WNIT), WPT (WPNE), WVUT/Channel 22 (WVUT), WEIU/Channel 51 (WEIU), WQPT-TV (WQPT), WYCC PBS Chicago (WYCC), WIPB-TV (WIPB), WTIU (WTIU), CET (WCET), ThinkTVNetwork (WPTD), WBGU-TV (WBGU), WGVU TV (WGVU), NET1 (KUON), Pioneer Public Television (KWCM), SDPB Television (KUSD), TPT (KTCA), KSMQ (KSMQ), KPTS/Channel 8 (KPTS), KTWU/Channel 11 (KTWU), East Tennessee PBS (WSJK), WCTE-TV (WCTE), WLJT, Channel 11 (WLJT), WOSU TV (WOSU), WOUB/WOUC (WOUB), WVPB (WVPB), WKYU-PBS (WKYU), KERA 13 (KERA), MPBN (WCBB), Mountain Lake PBS (WCFE), NHPTV (WENH), Vermont PBS (WETK), witf (WITF), WQED Multimedia (WQED), WMHT Educational Telecommunications (WMHT), Q-TV (WDCQ), WTVS Detroit Public TV (WTVS), CMU Public Television (WCMU), WKAR-TV (WKAR), WNMU-TV Public TV 13 (WNMU), WDSE - WRPT (WDSE), WGTE TV (WGTE), Lakeland Public Television (KAWE), KMOS-TV - Channels 6.1, 6.2 and 6.3 (KMOS), MontanaPBS (KUSM), KRWG/Channel 22 (KRWG), KACV (KACV), KCOS/Channel 13 (KCOS), WCNY/Channel 24 (WCNY), WNED (WNED), WPBS (WPBS), WSKG Public TV (WSKG), WXXI (WXXI), WPSU (WPSU), WVIA Public Media Studios (WVIA), WTVI (WTVI), Western Reserve PBS (WNEO), WVIZ/PBS ideastream (WVIZ), KCTS 9 (KCTS), Basin PBS (KPBT), KUHT / Channel 8 (KUHT), KLRN (KLRN), KLRU (KLRU), WTJX Channel 12 (WTJX), WCVE PBS (WCVE), KBTC Public Television (KBTC)
- **PearVideo**
- **peer.tv**
- **PeerTube**
- **PeerTube:Playlist**
- **peloton**
@@ -795,6 +820,7 @@ # Supported sites
- **Pinterest**
- **PinterestCollection**
- **Pladform**
- **PlanetMarathi**
- **Platzi**
- **PlatziCourse**
- **play.fm**
@@ -811,7 +837,12 @@ # Supported sites
- **podomatic**
- **Pokemon**
- **PokemonWatch**
- **PolsatGo**
- **PolskieRadio**
- **polskieradio:kierowcow**
- **polskieradio:player**
- **polskieradio:podcast**
- **polskieradio:podcast:list**
- **PolskieRadioCategory**
- **Popcorntimes**
- **PopcornTV**
@@ -854,6 +885,9 @@ # Supported sites
- **radiocanada:audiovideo**
- **radiofrance**
- **RadioJavan**
- **radiokapital**
- **radiokapital:show**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**
- **radlive:season**
@@ -861,6 +895,8 @@ # Supported sites
- **RaiPlay**
- **RaiPlayLive**
- **RaiPlayPlaylist**
- **RaiPlayRadio**
- **RaiPlayRadioPlaylist**
- **RayWenderlich**
- **RayWenderlichCourse**
- **RBMARadio**
@@ -876,7 +912,9 @@ # Supported sites
- **RedBullTV**
- **RedBullTVRrnContent**
- **Reddit**
- **RedditR**
- **RedGifs**
- **RedGifsSearch**: Redgifs search
- **RedGifsUser**: Redgifs user
- **RedTube**
- **RegioTV**
- **RENTV**
@@ -888,6 +926,7 @@ # Supported sites
- **RMCDecouverte**
- **RockstarGames**
- **RoosterTeeth**
- **RoosterTeethSeries**
- **RottenTomatoes**
- **Roxwel**
- **Rozhlas**
@@ -899,6 +938,7 @@ # Supported sites
- **rtl2:you**
- **rtl2:you:series**
- **RTP**
- **RTRFM**
- **RTS**: RTS.ch
- **rtve.es:alacarta**: RTVE a la carta
- **rtve.es:infantil**: RTVE infantil
@@ -930,7 +970,7 @@ # Supported sites
- **SBS**: sbs.com.au
- **schooltv**
- **ScienceChannel**
- **screen.yahoo:search**: Yahoo screen search
- **screen.yahoo:search**: Yahoo screen search; "yvsearch:" prefix
- **Screencast**
- **ScreencastOMatic**
- **ScrippsNetworks**
@@ -938,6 +978,7 @@ # Supported sites
- **SCTE**
- **SCTECourse**
- **Seeker**
- **SenateGov**
- **SenateISVP**
- **SendtoNews**
- **Servus**
@@ -955,12 +996,14 @@ # Supported sites
- **Sina**
- **sky.it**
- **sky:news**
- **sky:news:story**
- **sky:sports**
- **sky:sports:news**
- **skyacademy.it**
- **SkylineWebcams**
- **skynewsarabia:article**
- **skynewsarabia:video**
- **SkyNewsAU**
- **Slideshare**
- **SlidesLive**
- **Slutload**
@@ -970,7 +1013,7 @@ # Supported sites
- **SonyLIVSeries**
- **soundcloud**
- **soundcloud:playlist**
- **soundcloud:search**: Soundcloud search
- **soundcloud:search**: Soundcloud search; "scsearch:" prefix
- **soundcloud:set**
- **soundcloud:trackstation**
- **soundcloud:user**
@@ -1014,8 +1057,10 @@ # Supported sites
- **Streamanity**
- **streamcloud.eu**
- **StreamCZ**
- **StreamFF**
- **StreetVoice**
- **StretchInternet**
- **Stripchat**
- **stv:player**
- **SunPorno**
- **sverigesradio:episode**
@@ -1029,7 +1074,6 @@ # Supported sites
- **SztvHu**
- **t-online.de**
- **Tagesschau**
- **tagesschau:player**
- **Tass**
- **TBS**
- **TDSLifeway**
@@ -1073,6 +1117,8 @@ # Supported sites
- **ThisAmericanLife**
- **ThisAV**
- **ThisOldHouse**
- **ThreeSpeak**
- **ThreeSpeakUser**
- **TikTok**
- **tiktok:user**
- **tinypic**: tinypic.com videos
@@ -1089,6 +1135,8 @@ # Supported sites
- **TrailerAddict** (Currently broken)
- **Trilulilu**
- **Trovo**
- **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix
- **TrovoChannelVod**: All VODs of a trovo.live channel; "trovovod:" prefix
- **TrovoVod**
- **TruNews**
- **TruTV**
@@ -1134,6 +1182,7 @@ # Supported sites
- **tvp**: Telewizja Polska
- **tvp:embed**: Telewizja Polska
- **tvp:series**
- **tvp:stream**
- **TVPlayer**
- **TVPlayHome**
- **Tweakers**
@@ -1193,7 +1242,7 @@ # Supported sites
- **Viddler**
- **Videa**
- **video.arnes.si**: Arnes Video
- **video.google:search**: Google Video search
- **video.google:search**: Google Video search; "gvsearch:" prefix (Currently broken)
- **video.sky.it**
- **video.sky.it:live**
- **VideoDetective**
@@ -1283,11 +1332,14 @@ # Supported sites
- **WeiboMobile**
- **WeiqiTV**: WQTV
- **whowatch**
- **Willow**
- **WimTV**
- **Wistia**
- **WistiaPlaylist**
- **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **WorldStarHipHop**
- **wppilot**
- **wppilot:channels**
- **WSJ**: Wall Street Journal
- **WSJArticle**
- **WWE**
@@ -1335,19 +1387,19 @@ # Supported sites
- **YouPorn**
- **YourPorn**
- **YourUpload**
- **youtube**: YouTube.com
- **youtube:favorites**: YouTube.com liked videos, ":ytfav" for short (requires authentication)
- **youtube:history**: Youtube watch history, ":ythis" for short (requires authentication)
- **youtube:playlist**: YouTube.com playlists
- **youtube:recommended**: YouTube.com recommended videos, ":ytrec" for short (requires authentication)
- **youtube:search**: YouTube.com searches, "ytsearch" keyword
- **youtube:search:date**: YouTube.com searches, newest videos first, "ytsearchdate" keyword
- **youtube:search_url**: YouTube.com search URLs
- **youtube:subscriptions**: YouTube.com subscriptions feed, ":ytsubs" for short (requires authentication)
- **youtube:tab**: YouTube.com tab
- **youtube:watchlater**: Youtube watch later list, ":ytwatchlater" for short (requires authentication)
- **youtube**: YouTube
- **youtube:favorites**: YouTube liked videos; ":ytfav" keyword (requires cookies)
- **youtube:history**: Youtube watch history; ":ythis" keyword (requires cookies)
- **youtube:playlist**: YouTube playlists
- **youtube:recommended**: YouTube recommended videos; ":ytrec" keyword
- **youtube:search**: YouTube search; "ytsearch:" prefix
- **youtube:search:date**: YouTube search, newest videos first; "ytsearchdate:" prefix
- **youtube:search_url**: YouTube search URLs with sorting and filter support
- **youtube:subscriptions**: YouTube subscriptions feed; ":ytsubs" keyword (requires cookies)
- **youtube:tab**: YouTube Tabs
- **youtube:watchlater**: Youtube watch later list; ":ytwatchlater" keyword (requires cookies)
- **YoutubeYtBe**: youtu.be
- **YoutubeYtUser**: YouTube.com user videos, URL or "ytuser" keyword
- **YoutubeYtUser**: YouTube user videos; "ytuser:" prefix
- **Zapiks**
- **Zattoo**
- **ZattooLive**


@@ -9,7 +9,7 @@
"forcetitle": false,
"forceurl": false,
"force_write_download_archive": false,
"format": "best",
"format": "b/bv",
"ignoreerrors": false,
"listformats": null,
"logtostderr": false,
@@ -44,6 +44,5 @@
"writesubtitles": false,
"allsubtitles": false,
"listsubtitles": false,
"socket_timeout": 20,
"fixup": "never"
}


@@ -137,7 +137,7 @@ def test(inp, *expected, multi=False):
test('webm/mp4', '47')
test('3gp/40/mp4', '35')
test('example-with-dashes', 'example-with-dashes')
test('all', '35', 'example-with-dashes', '45', '47', '2') # Order doesn't actually matter for this
test('all', '2', '47', '45', 'example-with-dashes', '35')
test('mergeall', '2+47+45+example-with-dashes+35', multi=True)
def test_format_selection_audio(self):
@@ -520,7 +520,7 @@ def test_format_filtering(self):
ydl = YDL({'format': 'all[width>=400][width<=600]'})
ydl.process_ie_result(info_dict)
downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
self.assertEqual(downloaded_ids, ['B', 'C', 'D'])
self.assertEqual(downloaded_ids, ['D', 'C', 'B'])
ydl = YDL({'format': 'best[height<40]'})
try:
@@ -656,7 +656,7 @@ def test_add_extra_info(self):
'playlist_autonumber': 2,
'_last_playlist_index': 100,
'n_entries': 10,
'formats': [{'id': 'id1'}, {'id': 'id2'}, {'id': 'id3'}]
'formats': [{'id': 'id 1'}, {'id': 'id 2'}, {'id': 'id 3'}]
}
def test_prepare_outtmpl_and_filename(self):
@@ -737,6 +737,7 @@ def expect_same_infodict(out):
test(NA_TEST_OUTTMPL, 'NA-NA-def-1234.mp4')
test(NA_TEST_OUTTMPL, 'none-none-def-1234.mp4', outtmpl_na_placeholder='none')
test(NA_TEST_OUTTMPL, '--def-1234.mp4', outtmpl_na_placeholder='')
test('%(non_existent.0)s', 'NA')
# String formatting
FMT_TEST_OUTTMPL = '%%(height)%s.%%(ext)s'
@@ -762,14 +763,15 @@ def expect_same_infodict(out):
test('a%(width|)d', 'a', outtmpl_na_placeholder='none')
FORMATS = self.outtmpl_info['formats']
sanitize = lambda x: x.replace(':', ' -').replace('"', "'")
sanitize = lambda x: x.replace(':', ' -').replace('"', "'").replace('\n', ' ')
# Custom type casting
test('%(formats.:.id)l', 'id1, id2, id3')
test('%(formats.:.id)#l', ('id1\nid2\nid3', 'id1 id2 id3'))
test('%(formats.:.id)l', 'id 1, id 2, id 3')
test('%(formats.:.id)#l', ('id 1\nid 2\nid 3', 'id 1 id 2 id 3'))
test('%(ext)l', 'mp4')
test('%(formats.:.id) 15l', ' id1, id2, id3')
test('%(formats.:.id) 18l', ' id 1, id 2, id 3')
test('%(formats)j', (json.dumps(FORMATS), sanitize(json.dumps(FORMATS))))
test('%(formats)#j', (json.dumps(FORMATS, indent=4), sanitize(json.dumps(FORMATS, indent=4))))
test('%(title5).3B', 'á')
test('%(title5)U', 'áéí 𝐀')
test('%(title5)#U', 'a\u0301e\u0301i\u0301 𝐀')
@@ -777,8 +779,12 @@ def expect_same_infodict(out):
test('%(title5)+#U', 'a\u0301e\u0301i\u0301 A')
if compat_os_name == 'nt':
test('%(title4)q', ('"foo \\"bar\\" test"', "'foo _'bar_' test'"))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', "'id 1' 'id 2' 'id 3'"))
test('%(formats.0.id)#q', ('"id 1"', "'id 1'"))
else:
test('%(title4)q', ('\'foo "bar" test\'', "'foo 'bar' test'"))
test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'")
test('%(formats.0.id)#q', "'id 1'")
# Internal formatting
test('%(timestamp-1000>%H-%M-%S)s', '11-43-20')
@@ -817,6 +823,12 @@ def gen():
compat_setenv('__yt_dlp_var', 'expanded')
envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
test(envvar, (envvar, 'expanded'))
if compat_os_name == 'nt':
test('%s%', ('%s%', '%s%'))
compat_setenv('s', 'expanded')
test('%s%', ('%s%', 'expanded')) # %s% should be expanded before escaping %s
compat_setenv('(test)s', 'expanded')
test('%(test)s%', ('NA%', 'expanded')) # Environment should take priority over template
# Path expansion and escaping
test('Hello %(title1)s', 'Hello $PATH')


@@ -10,6 +10,8 @@
from yt_dlp.aes import (
aes_decrypt,
aes_encrypt,
aes_ecb_encrypt,
aes_ecb_decrypt,
aes_cbc_decrypt,
aes_cbc_decrypt_bytes,
aes_cbc_encrypt,
@@ -17,7 +19,8 @@
aes_ctr_encrypt,
aes_gcm_decrypt_and_verify,
aes_gcm_decrypt_and_verify_bytes,
aes_decrypt_text
aes_decrypt_text,
BLOCK_SIZE_BYTES,
)
from yt_dlp.compat import compat_pycrypto_AES
from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes
@@ -94,6 +97,19 @@ def test_decrypt_text(self):
decrypted = (aes_decrypt_text(encrypted, password, 32))
self.assertEqual(decrypted, self.secret_msg)
def test_ecb_encrypt(self):
data = bytes_to_intlist(self.secret_msg)
data += [0x08] * (BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES)
encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key, self.iv))
self.assertEqual(
encrypted,
b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
def test_ecb_decrypt(self):
data = bytes_to_intlist(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
decrypted = intlist_to_bytes(aes_ecb_decrypt(data, self.key, self.iv))
self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
if __name__ == '__main__':
unittest.main()


@@ -38,7 +38,6 @@ def test_youtube_playlist_matching(self):
assertTab('https://www.youtube.com/AsapSCIENCE')
assertTab('https://www.youtube.com/embedded')
assertTab('https://www.youtube.com/playlist?list=UUBABnxM4Ar9ten8Mdjj1j0Q')
assertTab('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
assertTab('https://www.youtube.com/playlist?list=PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC')
assertTab('https://www.youtube.com/watch?v=AV6J6_AeFEQ&playnext=1&list=PL4023E734DA416012') # 668
self.assertFalse('youtube:playlist' in self.matching_ies('PLtS2H6bU1M'))


@@ -112,6 +112,71 @@ def test_call(self):
''')
self.assertEqual(jsi.call_function('z'), 5)
def test_for_loop(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) {a++} a }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_switch(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 1:f+=1;
case 2:f+=2;
case 3:f+=3;break;
case 4:f+=4;
default:f=0;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 7)
self.assertEqual(jsi.call_function('x', 3), 6)
self.assertEqual(jsi.call_function('x', 5), 0)
def test_switch_default(self):
jsi = JSInterpreter('''
function x(f) { switch(f){
case 2: f+=2;
default: f-=1;
case 5:
case 6: f+=6;
case 0: break;
case 1: f+=1;
} return f }
''')
self.assertEqual(jsi.call_function('x', 1), 2)
self.assertEqual(jsi.call_function('x', 5), 11)
self.assertEqual(jsi.call_function('x', 9), 14)
def test_try(self):
jsi = JSInterpreter('''
function x() { try{return 10} catch(e){return 5} }
''')
self.assertEqual(jsi.call_function('x'), 10)
def test_for_loop_continue(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { continue; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_for_loop_break(self):
jsi = JSInterpreter('''
function x() { a=0; for (i=0; i-10; i++) { break; a++ } a }
''')
self.assertEqual(jsi.call_function('x'), 0)
def test_literal_list(self):
jsi = JSInterpreter('''
function x() { [1, 2, "asdf", [5, 6, 7]][3] }
''')
self.assertEqual(jsi.call_function('x'), [5, 6, 7])
def test_comma(self):
jsi = JSInterpreter('''
function x() { a=5; a -= 1, a+=3; return a }
''')
self.assertEqual(jsi.call_function('x'), 7)
if __name__ == '__main__':
unittest.main()


@@ -848,30 +848,52 @@ def test_parse_codecs(self):
self.assertEqual(parse_codecs('avc1.77.30, mp4a.40.2'), {
'vcodec': 'avc1.77.30',
'acodec': 'mp4a.40.2',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('mp4a.40.2'), {
'vcodec': 'none',
'acodec': 'mp4a.40.2',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('mp4a.40.5,avc1.42001e'), {
'vcodec': 'avc1.42001e',
'acodec': 'mp4a.40.5',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('avc3.640028'), {
'vcodec': 'avc3.640028',
'acodec': 'none',
'dynamic_range': None,
})
self.assertEqual(parse_codecs(', h264,,newcodec,aac'), {
'vcodec': 'h264',
'acodec': 'aac',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('av01.0.05M.08'), {
'vcodec': 'av01.0.05M.08',
'acodec': 'none',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('vp9.2'), {
'vcodec': 'vp9.2',
'acodec': 'none',
'dynamic_range': 'HDR10',
})
self.assertEqual(parse_codecs('av01.0.12M.10.0.110.09.16.09.0'), {
'vcodec': 'av01.0.12M.10',
'acodec': 'none',
'dynamic_range': 'HDR10',
})
self.assertEqual(parse_codecs('dvhe'), {
'vcodec': 'dvhe',
'acodec': 'none',
'dynamic_range': 'DV',
})
self.assertEqual(parse_codecs('theora, vorbis'), {
'vcodec': 'theora',
'acodec': 'vorbis',
'dynamic_range': None,
})
self.assertEqual(parse_codecs('unknownvcodec, unknownacodec'), {
'vcodec': 'unknownvcodec',
@@ -1141,12 +1163,15 @@ def test_parse_count(self):
def test_parse_resolution(self):
self.assertEqual(parse_resolution(None), {})
self.assertEqual(parse_resolution(''), {})
self.assertEqual(parse_resolution('1920x1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('1920×1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution(' 1920x1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('1920×1080 '), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('1920 x 1080'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('720p'), {'height': 720})
self.assertEqual(parse_resolution('4k'), {'height': 2160})
self.assertEqual(parse_resolution('8K'), {'height': 4320})
self.assertEqual(parse_resolution('pre_1920x1080_post'), {'width': 1920, 'height': 1080})
self.assertEqual(parse_resolution('ep1x2'), {})
self.assertEqual(parse_resolution('1920, 1080'), {'width': 1920, 'height': 1080})
def test_parse_bitrate(self):
self.assertEqual(parse_bitrate(None), None)
@@ -1197,12 +1222,49 @@ def test_is_html(self):
def test_render_table(self):
self.assertEqual(
render_table(
['a', 'bcd'],
[[123, 4], [9999, 51]]),
['a', 'empty', 'bcd'],
[[123, '', 4], [9999, '', 51]]),
'a empty bcd\n'
'123 4\n'
'9999 51')
self.assertEqual(
render_table(
['a', 'empty', 'bcd'],
[[123, '', 4], [9999, '', 51]],
hide_empty=True),
'a bcd\n'
'123 4\n'
'9999 51')
self.assertEqual(
render_table(
['\ta', 'bcd'],
[['1\t23', 4], ['\t9999', 51]]),
' a bcd\n'
'1 23 4\n'
'9999 51')
self.assertEqual(
render_table(
['a', 'bcd'],
[[123, 4], [9999, 51]],
delim='-'),
'a bcd\n'
'--------\n'
'123 4\n'
'9999 51')
self.assertEqual(
render_table(
['a', 'bcd'],
[[123, 4], [9999, 51]],
delim='-', extra_gap=2),
'a bcd\n'
'----------\n'
'123 4\n'
'9999 51')
def test_match_str(self):
# Unary
self.assertFalse(match_str('xy', {'x': 1200}))
@@ -1231,6 +1293,7 @@ def test_match_str(self):
self.assertFalse(match_str('x>2K', {'x': 1200}))
self.assertTrue(match_str('x>=1200 & x < 1300', {'x': 1200}))
self.assertFalse(match_str('x>=1100 & x < 1200', {'x': 1200}))
self.assertTrue(match_str('x > 1:0:0', {'x': 3700}))
# String
self.assertFalse(match_str('y=a212', {'y': 'foobar42'}))
@@ -1367,21 +1430,21 @@ def test_dfxp2srt(self):
</body>
</tt>'''.encode('utf-8')
srt_data = '''1
00:00:02,080 --> 00:00:05,839
00:00:02,080 --> 00:00:05,840
<font color="white" face="sansSerif" size="16">default style<font color="red">custom style</font></font>
2
00:00:02,080 --> 00:00:05,839
00:00:02,080 --> 00:00:05,840
<b><font color="cyan" face="sansSerif" size="16"><font color="lime">part 1
</font>part 2</font></b>
3
00:00:05,839 --> 00:00:09,560
00:00:05,840 --> 00:00:09,560
<u><font color="lime">line 3
part 3</font></u>
4
00:00:09,560 --> 00:00:12,359
00:00:09,560 --> 00:00:12,360
<i><u><font color="yellow"><font color="lime">inner
</font>style</font></u></i>
@@ -1594,9 +1657,9 @@ def test_LazyList(self):
self.assertEqual(repr(LazyList(it)), repr(it))
self.assertEqual(str(LazyList(it)), str(it))
self.assertEqual(list(LazyList(it).reverse()), it[::-1])
self.assertEqual(list(LazyList(it).reverse()[1:3:7]), it[::-1][1:3:7])
self.assertEqual(list(LazyList(it).reverse()[::-1]), it)
self.assertEqual(list(LazyList(it, reverse=True)), it[::-1])
self.assertEqual(list(reversed(LazyList(it))[::-1]), it)
self.assertEqual(list(reversed(LazyList(it))[1:3:7]), it[::-1][1:3:7])
def test_LazyList_laziness(self):
@@ -1609,13 +1672,13 @@ def test(ll, idx, val, cache):
test(ll, 5, 5, range(6))
test(ll, -3, 7, range(10))
ll = LazyList(range(10)).reverse()
ll = LazyList(range(10), reverse=True)
test(ll, -1, 0, range(1))
test(ll, 3, 6, range(10))
ll = LazyList(itertools.count())
test(ll, 10, 10, range(11))
ll.reverse()
ll = reversed(ll)
test(ll, -15, 14, range(15))


@@ -26,29 +26,31 @@ def assertIsPlaylist(self, info):
def test_youtube_playlist_noplaylist(self):
dl = FakeYDL()
dl.params['noplaylist'] = True
ie = YoutubePlaylistIE(dl)
ie = YoutubeTabIE(dl)
result = ie.extract('https://www.youtube.com/watch?v=FXxLjLQi3Fg&list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re')
self.assertEqual(result['_type'], 'url')
self.assertEqual(YoutubeIE().extract_id(result['url']), 'FXxLjLQi3Fg')
self.assertEqual(YoutubeIE.extract_id(result['url']), 'FXxLjLQi3Fg')
def test_youtube_course(self):
print('Skipping: Course URLs no longer exists')
return
dl = FakeYDL()
ie = YoutubePlaylistIE(dl)
# TODO find a > 100 (paginating?) videos course
result = ie.extract('https://www.youtube.com/course?list=ECUl4u3cNGP61MdtwGTqZA0MreSaDybji8')
entries = list(result['entries'])
self.assertEqual(YoutubeIE().extract_id(entries[0]['url']), 'j9WZyLZCBzs')
self.assertEqual(YoutubeIE.extract_id(entries[0]['url']), 'j9WZyLZCBzs')
self.assertEqual(len(entries), 25)
self.assertEqual(YoutubeIE().extract_id(entries[-1]['url']), 'rYefUsYuEp0')
self.assertEqual(YoutubeIE.extract_id(entries[-1]['url']), 'rYefUsYuEp0')
def test_youtube_mix(self):
dl = FakeYDL()
ie = YoutubePlaylistIE(dl)
result = ie.extract('https://www.youtube.com/watch?v=W01L70IGBgE&index=2&list=RDOQpdSVF_k_w')
entries = result['entries']
ie = YoutubeTabIE(dl)
result = ie.extract('https://www.youtube.com/watch?v=tyITL_exICo&list=RDCLAK5uy_kLWIr9gv1XLlPbaDS965-Db4TrBoUTxQ8')
entries = list(result['entries'])
self.assertTrue(len(entries) >= 50)
original_video = entries[0]
self.assertEqual(original_video['id'], 'OQpdSVF_k_w')
self.assertEqual(original_video['id'], 'tyITL_exICo')
def test_youtube_toptracks(self):
print('Skipping: The playlist page gives error 500')
@@ -68,10 +70,10 @@ def test_youtube_flat_playlist_extraction(self):
entries = list(result['entries'])
self.assertTrue(len(entries) == 1)
video = entries[0]
self.assertEqual(video['_type'], 'url_transparent')
self.assertEqual(video['_type'], 'url')
self.assertEqual(video['ie_key'], 'Youtube')
self.assertEqual(video['id'], 'BaW_jenozKc')
self.assertEqual(video['url'], 'BaW_jenozKc')
self.assertEqual(video['url'], 'https://www.youtube.com/watch?v=BaW_jenozKc')
self.assertEqual(video['title'], 'youtube-dl test video "\'/\\ä↭𝕐')
self.assertEqual(video['duration'], 10)
self.assertEqual(video['uploader'], 'Philipp Hagemeister')


@@ -14,9 +14,10 @@
from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import YoutubeIE
from yt_dlp.jsinterp import JSInterpreter
from yt_dlp.compat import compat_str, compat_urlretrieve
_TESTS = [
_SIG_TESTS = [
(
'https://s.ytimg.com/yts/jsbin/html5player-vflHOr_nV.js',
86,
@@ -64,6 +65,25 @@
)
]
_NSIG_TESTS = [
(
'https://www.youtube.com/s/player/9216d1f7/player_ias.vflset/en_US/base.js',
'SLp9F5bwjAdhE9F-', 'gWnb9IK2DJ8Q1w',
),
(
'https://www.youtube.com/s/player/f8cb7a3b/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', 'ivXHpm7qJjJN',
),
(
'https://www.youtube.com/s/player/2dfe380c/player_ias.vflset/en_US/base.js',
'oBo2h5euWy6osrUt', '3DIBbn3qdQ',
),
(
'https://www.youtube.com/s/player/f1ca6900/player_ias.vflset/en_US/base.js',
'cu3wyu6LQn2hse', 'jvxetvmlI9AN9Q',
),
]
@is_download_test
class TestPlayerInfo(unittest.TestCase):
@@ -97,35 +117,49 @@ def setUp(self):
os.mkdir(self.TESTDATA_DIR)
def make_tfunc(url, sig_input, expected_sig):
m = re.match(r'.*-([a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$', url)
assert m, '%r should follow URL format' % url
test_id = m.group(1)
def t_factory(name, sig_func, url_pattern):
def make_tfunc(url, sig_input, expected_sig):
m = url_pattern.match(url)
assert m, '%r should follow URL format' % url
test_id = m.group('id')
def test_func(self):
basename = 'player-%s.js' % test_id
fn = os.path.join(self.TESTDATA_DIR, basename)
def test_func(self):
basename = f'player-{name}-{test_id}.js'
fn = os.path.join(self.TESTDATA_DIR, basename)
if not os.path.exists(fn):
compat_urlretrieve(url, fn)
if not os.path.exists(fn):
compat_urlretrieve(url, fn)
with io.open(fn, encoding='utf-8') as testf:
jscode = testf.read()
self.assertEqual(sig_func(jscode, sig_input), expected_sig)
ydl = FakeYDL()
ie = YoutubeIE(ydl)
with io.open(fn, encoding='utf-8') as testf:
jscode = testf.read()
func = ie._parse_sig_js(jscode)
src_sig = (
compat_str(string.printable[:sig_input])
if isinstance(sig_input, int) else sig_input)
got_sig = func(src_sig)
self.assertEqual(got_sig, expected_sig)
test_func.__name__ = str('test_signature_js_' + test_id)
setattr(TestSignature, test_func.__name__, test_func)
test_func.__name__ = f'test_{name}_js_{test_id}'
setattr(TestSignature, test_func.__name__, test_func)
return make_tfunc
for test_spec in _TESTS:
make_tfunc(*test_spec)
def signature(jscode, sig_input):
func = YoutubeIE(FakeYDL())._parse_sig_js(jscode)
src_sig = (
compat_str(string.printable[:sig_input])
if isinstance(sig_input, int) else sig_input)
return func(src_sig)
def n_sig(jscode, sig_input):
funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
return JSInterpreter(jscode).call_function(funcname, sig_input)
make_sig_test = t_factory(
'signature', signature, re.compile(r'.*-(?P<id>[a-zA-Z0-9_-]+)(?:/watch_as3|/html5player)?\.[a-z]+$'))
for test_spec in _SIG_TESTS:
make_sig_test(*test_spec)
make_nsig_test = t_factory(
'nsig', n_sig, re.compile(r'.+/player/(?P<id>[a-zA-Z0-9_-]+)/.+.js$'))
for test_spec in _NSIG_TESTS:
make_nsig_test(*test_spec)
if __name__ == '__main__':

File diff suppressed because it is too large


@@ -25,15 +25,17 @@
from .utils import (
DateRange,
decodeOption,
DownloadCancelled,
DownloadError,
error_to_compat_str,
ExistingVideoReached,
expand_path,
GeoUtils,
float_or_none,
int_or_none,
match_filter_func,
MaxDownloadsReached,
parse_duration,
preferredencoding,
read_batch_urls,
RejectedVideoReached,
render_table,
SameFileError,
setproctitle,
@@ -70,7 +72,7 @@ def _real_main(argv=None):
setproctitle('yt-dlp')
parser, opts, args = parseOpts(argv)
warnings = []
warnings, deprecation_warnings = [], []
# Set user agent
if opts.user_agent is not None:
@@ -93,6 +95,7 @@ def _real_main(argv=None):
if opts.batchfile is not None:
try:
if opts.batchfile == '-':
write_string('Reading URLs from stdin:\n')
batchfd = sys.stdin
else:
batchfd = io.open(
@@ -121,10 +124,10 @@ def _real_main(argv=None):
desc = getattr(ie, 'IE_DESC', ie.IE_NAME)
if desc is False:
continue
if hasattr(ie, 'SEARCH_KEY'):
if getattr(ie, 'SEARCH_KEY', None) is not None:
_SEARCHES = ('cute kittens', 'slithering pythons', 'falling cat', 'angry poodle', 'purple fish', 'running tortoise', 'sleeping bunny', 'burping cow')
_COUNTS = ('', '5', '10', 'all')
desc += ' (Example: "%s%s:%s" )' % (ie.SEARCH_KEY, random.choice(_COUNTS), random.choice(_SEARCHES))
desc += f'; "{ie.SEARCH_KEY}:" prefix (Example: "{ie.SEARCH_KEY}{random.choice(_COUNTS)}:{random.choice(_SEARCHES)}")'
write_string(desc + '\n', out=sys.stdout)
sys.exit(0)
if opts.ap_list_mso:
@@ -192,7 +195,15 @@ def _real_main(argv=None):
if opts.overwrites: # --yes-overwrites implies --no-continue
opts.continue_dl = False
if opts.concurrent_fragment_downloads <= 0:
raise ValueError('Concurrent fragments must be positive')
parser.error('Concurrent fragments must be positive')
if opts.wait_for_video is not None:
mobj = re.match(r'(?P<min>\d+)(?:-(?P<max>\d+))?$', opts.wait_for_video)
if not mobj:
parser.error('Invalid time range to wait')
min_wait, max_wait = map(int_or_none, mobj.group('min', 'max'))
if max_wait is not None and max_wait < min_wait:
parser.error('Invalid time range to wait')
opts.wait_for_video = (min_wait, max_wait)
def parse_retries(retries, name=''):
if retries in ('inf', 'infinite'):
@@ -220,15 +231,17 @@ def parse_retries(retries, name=''):
parser.error('invalid http chunk size specified')
opts.http_chunk_size = numeric_chunksize
if opts.playliststart <= 0:
raise ValueError('Playlist start must be positive')
raise parser.error('Playlist start must be positive')
if opts.playlistend not in (-1, None) and opts.playlistend < opts.playliststart:
raise ValueError('Playlist end must be greater than playlist start')
raise parser.error('Playlist end must be greater than playlist start')
if opts.extractaudio:
opts.audioformat = opts.audioformat.lower()
if opts.audioformat not in ['best'] + list(FFmpegExtractAudioPP.SUPPORTED_EXTS):
parser.error('invalid audio format specified')
if opts.audioquality:
opts.audioquality = opts.audioquality.strip('k').strip('K')
if not opts.audioquality.isdigit():
audioquality = int_or_none(float_or_none(opts.audioquality)) # int_or_none prevents inf, nan
if audioquality is None or audioquality < 0:
parser.error('invalid audio quality specified')
if opts.recodevideo is not None:
opts.recodevideo = opts.recodevideo.replace(' ', '')
@@ -244,12 +257,17 @@ def parse_retries(retries, name=''):
if opts.convertthumbnails is not None:
if opts.convertthumbnails not in FFmpegThumbnailsConvertorPP.SUPPORTED_EXTS:
parser.error('invalid thumbnail format specified')
if opts.cookiesfrombrowser is not None:
opts.cookiesfrombrowser = [
part.strip() or None for part in opts.cookiesfrombrowser.split(':', 1)]
if opts.cookiesfrombrowser[0].lower() not in SUPPORTED_BROWSERS:
parser.error('unsupported browser specified for cookies')
geo_bypass_code = opts.geo_bypass_ip_block or opts.geo_bypass_country
if geo_bypass_code is not None:
try:
GeoUtils.random_ipv4(geo_bypass_code)
except Exception:
parser.error('unsupported geo-bypass country or ip-block')
if opts.date is not None:
date = DateRange.day(opts.date)
@@ -258,6 +276,9 @@ def parse_retries(retries, name=''):
compat_opts = opts.compat_opts
def report_conflict(arg1, arg2):
warnings.append(f'{arg2} is ignored since {arg1} was given')
def _unused_compat_opt(name):
if name not in compat_opts:
return False
@@ -282,6 +303,11 @@ def set_default_compat(compat_name, opt_name, default=True, remove_compat=True):
set_default_compat('abort-on-error', 'ignoreerrors', 'only_download')
set_default_compat('no-playlist-metafiles', 'allow_playlist_files')
set_default_compat('no-clean-infojson', 'clean_infojson')
if 'no-attach-info-json' in compat_opts:
if opts.embed_infojson:
_unused_compat_opt('no-attach-info-json')
else:
opts.embed_infojson = False
if 'format-sort' in compat_opts:
opts.format_sort.extend(InfoExtractor.FormatSort.ytdl_default)
_video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False)
@@ -289,10 +315,14 @@ def set_default_compat(compat_name, opt_name, default=True, remove_compat=True):
if _video_multistreams_set is False and _audio_multistreams_set is False:
_unused_compat_opt('multistreams')
outtmpl_default = opts.outtmpl.get('default')
if opts.useid:
if outtmpl_default is None:
outtmpl_default = opts.outtmpl['default'] = '%(id)s.%(ext)s'
else:
report_conflict('--output', '--id')
if 'filename' in compat_opts:
if outtmpl_default is None:
outtmpl_default = '%(title)s-%(id)s.%(ext)s'
opts.outtmpl.update({'default': outtmpl_default})
outtmpl_default = opts.outtmpl['default'] = '%(title)s-%(id)s.%(ext)s'
else:
_unused_compat_opt('filename')
@@ -361,13 +391,8 @@ def metadataparser_actions(f):
opts.sponsorblock_remove = set()
sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove
if (opts.addmetadata or opts.sponsorblock_mark) and opts.addchapters is None:
opts.addchapters = True
opts.remove_chapters = opts.remove_chapters or []
def report_conflict(arg1, arg2):
warnings.append('%s is ignored since %s was given' % (arg2, arg1))
if (opts.remove_chapters or sponsorblock_query) and opts.sponskrub is not False:
if opts.sponskrub:
if opts.remove_chapters:
@@ -386,40 +411,32 @@ def report_conflict(arg1, arg2):
opts.remuxvideo = False
if opts.allow_unplayable_formats:
if opts.extractaudio:
report_conflict('--allow-unplayable-formats', '--extract-audio')
opts.extractaudio = False
if opts.remuxvideo:
report_conflict('--allow-unplayable-formats', '--remux-video')
opts.remuxvideo = False
if opts.recodevideo:
report_conflict('--allow-unplayable-formats', '--recode-video')
opts.recodevideo = False
if opts.addmetadata:
report_conflict('--allow-unplayable-formats', '--add-metadata')
opts.addmetadata = False
if opts.embedsubtitles:
report_conflict('--allow-unplayable-formats', '--embed-subs')
opts.embedsubtitles = False
if opts.embedthumbnail:
report_conflict('--allow-unplayable-formats', '--embed-thumbnail')
opts.embedthumbnail = False
if opts.xattrs:
report_conflict('--allow-unplayable-formats', '--xattrs')
opts.xattrs = False
if opts.fixup and opts.fixup.lower() not in ('never', 'ignore'):
report_conflict('--allow-unplayable-formats', '--fixup')
def report_unplayable_conflict(opt_name, arg, default=False, allowed=None):
val = getattr(opts, opt_name)
if (not allowed and val) or (allowed and not allowed(val)):
report_conflict('--allow-unplayable-formats', arg)
setattr(opts, opt_name, default)
report_unplayable_conflict('extractaudio', '--extract-audio')
report_unplayable_conflict('remuxvideo', '--remux-video')
report_unplayable_conflict('recodevideo', '--recode-video')
report_unplayable_conflict('addmetadata', '--embed-metadata')
report_unplayable_conflict('addchapters', '--embed-chapters')
report_unplayable_conflict('embed_infojson', '--embed-info-json')
opts.embed_infojson = False
report_unplayable_conflict('embedsubtitles', '--embed-subs')
report_unplayable_conflict('embedthumbnail', '--embed-thumbnail')
report_unplayable_conflict('xattrs', '--xattrs')
report_unplayable_conflict('fixup', '--fixup', default='never', allowed=lambda x: x in (None, 'never', 'ignore'))
opts.fixup = 'never'
if opts.remove_chapters:
report_conflict('--allow-unplayable-formats', '--remove-chapters')
opts.remove_chapters = []
if opts.sponsorblock_remove:
report_conflict('--allow-unplayable-formats', '--sponsorblock-remove')
opts.sponsorblock_remove = set()
if opts.sponskrub:
report_conflict('--allow-unplayable-formats', '--sponskrub')
report_unplayable_conflict('remove_chapters', '--remove-chapters', default=[])
report_unplayable_conflict('sponsorblock_remove', '--sponsorblock-remove', default=set())
report_unplayable_conflict('sponskrub', '--sponskrub', default=set())
opts.sponskrub = False
if (opts.addmetadata or opts.sponsorblock_mark) and opts.addchapters is None:
opts.addchapters = True
# PostProcessors
postprocessors = list(opts.add_postprocessors)
if sponsorblock_query:
@@ -490,8 +507,14 @@ def report_conflict(arg1, arg2):
if opts.allsubtitles and not opts.writeautomaticsub:
opts.writesubtitles = True
# ModifyChapters must run before FFmpegMetadataPP
remove_chapters_patterns = []
remove_chapters_patterns, remove_ranges = [], []
for regex in opts.remove_chapters:
if regex.startswith('*'):
dur = list(map(parse_duration, regex[1:].split('-')))
if len(dur) == 2 and all(t is not None for t in dur):
remove_ranges.append(tuple(dur))
continue
parser.error(f'invalid --remove-chapters time range {regex!r}. Must be of the form *start-end')
try:
remove_chapters_patterns.append(re.compile(regex))
except re.error as err:
@@ -501,6 +524,7 @@ def report_conflict(arg1, arg2):
'key': 'ModifyChapters',
'remove_chapters_patterns': remove_chapters_patterns,
'remove_sponsor_segments': opts.sponsorblock_remove,
'remove_ranges': remove_ranges,
'sponsorblock_chapter_title': opts.sponsorblock_chapter_title,
'force_keyframes': opts.force_keyframes_at_cuts
})
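
A minimal sketch of what the new "*start-end" form of --remove-chapters accepts (the timestamps are arbitrary examples): the text after "*" is split on "-" and each half is run through parse_duration, so plain seconds and HH:MM:SS[.ms] notation both work:

    from yt_dlp.utils import parse_duration

    spec = '*1:30-3:45.5'   # hypothetical --remove-chapters argument
    start, end = map(parse_duration, spec[1:].split('-'))
    print(start, end)       # 90.0 225.5
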
@@ -510,13 +534,16 @@ def report_conflict(arg1, arg2):
# By default ffmpeg preserves metadata applicable for both
# source and target containers. From this point the container won't change,
# so metadata can be added here.
if opts.addmetadata or opts.addchapters:
if opts.addmetadata or opts.addchapters or opts.embed_infojson:
if opts.embed_infojson is None:
opts.embed_infojson = 'if_exists'
postprocessors.append({
'key': 'FFmpegMetadata',
'add_chapters': opts.addchapters,
'add_metadata': opts.addmetadata,
'add_infojson': opts.embed_infojson,
})
# Note: Deprecated
# Deprecated
# This should be above EmbedThumbnail since sponskrub removes the thumbnail attachment
# but must be below EmbedSubtitle and FFmpegMetadata
# See https://github.com/yt-dlp/yt-dlp/issues/204 , https://github.com/faissaloo/SponSkrub/issues/29
@@ -529,6 +556,7 @@ def report_conflict(arg1, arg2):
'cut': opts.sponskrub_cut,
'force': opts.sponskrub_force,
'ignoreerror': opts.sponskrub is None,
'_from_cli': True,
})
if opts.embedthumbnail:
already_have_thumbnail = opts.writethumbnail or opts.write_all_thumbnails
@@ -568,6 +596,19 @@ def report_args_compat(arg, name):
opts.postprocessor_args.setdefault('sponskrub', [])
opts.postprocessor_args['default'] = opts.postprocessor_args['default-compat']
def report_deprecation(val, old, new=None):
if not val:
return
deprecation_warnings.append(
f'{old} is deprecated and may be removed in a future version. Use {new} instead' if new
else f'{old} is deprecated and may not work as expected')
report_deprecation(opts.sponskrub, '--sponskrub', '--sponsorblock-mark or --sponsorblock-remove')
report_deprecation(not opts.prefer_ffmpeg, '--prefer-avconv', 'ffmpeg')
report_deprecation(opts.include_ads, '--include-ads')
# report_deprecation(opts.call_home, '--call-home') # We may re-implement this in future
# report_deprecation(opts.writeannotations, '--write-annotations') # It's just that no website has it
final_ext = (
opts.recodevideo if opts.recodevideo in FFmpegVideoConvertorPP.SUPPORTED_EXTS
else opts.remuxvideo if opts.remuxvideo in FFmpegVideoRemuxerPP.SUPPORTED_EXTS
@@ -687,6 +728,7 @@ def report_args_compat(arg, name):
'download_archive': download_archive_fn,
'break_on_existing': opts.break_on_existing,
'break_on_reject': opts.break_on_reject,
'break_per_url': opts.break_per_url,
'skip_playlist_after_errors': opts.skip_playlist_after_errors,
'cookiefile': opts.cookiefile,
'cookiesfrombrowser': opts.cookiesfrombrowser,
@@ -705,6 +747,7 @@ def report_args_compat(arg, name):
'youtube_include_hls_manifest': opts.youtube_include_hls_manifest,
'encoding': opts.encoding,
'extract_flat': opts.extract_flat,
'wait_for_video': opts.wait_for_video,
'mark_watched': opts.mark_watched,
'merge_output_format': opts.merge_output_format,
'final_ext': final_ext,
@@ -733,12 +776,13 @@ def report_args_compat(arg, name):
'geo_bypass': opts.geo_bypass,
'geo_bypass_country': opts.geo_bypass_country,
'geo_bypass_ip_block': opts.geo_bypass_ip_block,
'warnings': warnings,
'_warnings': warnings,
'_deprecation_warnings': deprecation_warnings,
'compat_opts': compat_opts,
}
with YoutubeDL(ydl_opts) as ydl:
actual_use = len(all_urls) or opts.load_info_filename
actual_use = all_urls or opts.load_info_filename
# Remove cache dir
if opts.rm_cachedir:
@@ -767,7 +811,7 @@ def report_args_compat(arg, name):
retcode = ydl.download_with_info_file(expand_path(opts.load_info_filename))
else:
retcode = ydl.download(all_urls)
except (MaxDownloadsReached, ExistingVideoReached, RejectedVideoReached):
except DownloadCancelled:
ydl.to_screen('Aborting remaining downloads')
retcode = 101
@@ -779,15 +823,15 @@ def main(argv=None):
_real_main(argv)
except DownloadError:
sys.exit(1)
except SameFileError:
sys.exit('ERROR: fixed output name but more than one file to download')
except SameFileError as e:
sys.exit(f'ERROR: {e}')
except KeyboardInterrupt:
sys.exit('\nERROR: Interrupted by user')
except BrokenPipeError:
except BrokenPipeError as e:
# https://docs.python.org/3/library/signal.html#note-on-sigpipe
devnull = os.open(os.devnull, os.O_WRONLY)
os.dup2(devnull, sys.stdout.fileno())
sys.exit(r'\nERROR: {err}')
sys.exit(f'\nERROR: {e}')
__all__ = ['main', 'YoutubeDL', 'gen_extractors', 'list_extractors']


@@ -28,6 +28,48 @@ def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
BLOCK_SIZE_BYTES = 16
def aes_ecb_encrypt(data, key, iv=None):
"""
Encrypt with aes in ECB mode
@param {int[]} data cleartext
@param {int[]} key 16/24/32-Byte cipher key
@param {int[]} iv Unused for this mode
@returns {int[]} encrypted data
"""
expanded_key = key_expansion(key)
block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
encrypted_data = []
for i in range(block_count):
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
encrypted_data += aes_encrypt(block, expanded_key)
encrypted_data = encrypted_data[:len(data)]
return encrypted_data
def aes_ecb_decrypt(data, key, iv=None):
"""
Decrypt with aes in ECB mode
@param {int[]} data cleartext
@param {int[]} key 16/24/32-Byte cipher key
@param {int[]} iv Unused for this mode
@returns {int[]} decrypted data
"""
expanded_key = key_expansion(key)
block_count = int(ceil(float(len(data)) / BLOCK_SIZE_BYTES))
encrypted_data = []
for i in range(block_count):
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
encrypted_data += aes_decrypt(block, expanded_key)
encrypted_data = encrypted_data[:len(data)]
return encrypted_data
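
A rough usage sketch for the new ECB helpers, mirroring the test added above (the key and padding are made-up example values; the IV argument is ignored in ECB mode):

    from yt_dlp.aes import BLOCK_SIZE_BYTES, aes_ecb_decrypt, aes_ecb_encrypt
    from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

    key = iv = [0x20, 0x15] + 14 * [0]   # 16-byte key; iv is unused by ECB
    data = bytes_to_intlist(b'Secret message goes here')
    data += [0x08] * (BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES)   # pad to a block boundary
    encrypted = intlist_to_bytes(aes_ecb_encrypt(data, key, iv))
    decrypted = intlist_to_bytes(aes_ecb_decrypt(bytes_to_intlist(encrypted), key, iv))
    assert decrypted.rstrip(b'\x08') == b'Secret message goes here'
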
def aes_ctr_decrypt(data, key, iv):
"""
Decrypt with aes in counter mode


@@ -19,6 +19,7 @@
import shutil
import socket
import struct
import subprocess
import sys
import tokenize
import urllib
@@ -162,7 +163,9 @@ def compat_expanduser(path):
def windows_enable_vt_mode(): # TODO: Do this the proper way https://bugs.python.org/issue30075
if compat_os_name != 'nt':
return
os.system('')
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
subprocess.Popen('', shell=True, startupinfo=startupinfo)
# Deprecated


@@ -17,7 +17,7 @@
from .utils import (
bug_reports_message,
expand_path,
process_communicate_or_kill,
Popen,
YoutubeDLCookieJar,
)
@@ -117,7 +117,7 @@ def _extract_firefox_cookies(profile, logger):
raise FileNotFoundError('could not find firefox cookies database in {}'.format(search_root))
logger.debug('Extracting cookies from: "{}"'.format(cookie_database_path))
with tempfile.TemporaryDirectory(prefix='youtube_dl') as tmpdir:
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None
try:
cursor = _open_database_copy(cookie_database_path, tmpdir)
@@ -236,7 +236,7 @@ def _extract_chrome_cookies(browser_name, profile, logger):
decryptor = get_cookie_decryptor(config['browser_dir'], config['keyring_name'], logger)
with tempfile.TemporaryDirectory(prefix='youtube_dl') as tmpdir:
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None
try:
cursor = _open_database_copy(cookie_database_path, tmpdir)
@@ -599,14 +599,14 @@ def _get_mac_keyring_password(browser_keyring_name, logger):
return password.encode('utf-8')
else:
logger.debug('using find-generic-password to obtain password')
proc = subprocess.Popen(['security', 'find-generic-password',
'-w', # write password to stdout
'-a', browser_keyring_name, # match 'account'
'-s', '{} Safe Storage'.format(browser_keyring_name)], # match 'service'
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL)
proc = Popen(
['security', 'find-generic-password',
'-w', # write password to stdout
'-a', browser_keyring_name, # match 'account'
'-s', '{} Safe Storage'.format(browser_keyring_name)], # match 'service'
stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
try:
stdout, stderr = process_communicate_or_kill(proc)
stdout, stderr = proc.communicate_or_kill()
if stdout[-1:] == b'\n':
stdout = stdout[:-1]
return stdout
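
The subprocess calls above now go through the Popen wrapper from yt_dlp.utils, whose communicate_or_kill() terminates the child process if communication is interrupted. A minimal sketch (the command is only an example):

    import subprocess
    from yt_dlp.utils import Popen

    proc = Popen(['ffmpeg', '-version'], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
    stdout, _ = proc.communicate_or_kill()
    print(stdout.decode('utf-8', 'replace').splitlines()[0])
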
@@ -620,7 +620,7 @@ def _get_windows_v10_key(browser_root, logger):
if path is None:
logger.error('could not find local state file')
return None
with open(path, 'r') as f:
with open(path, 'r', encoding='utf8') as f:
data = json.load(f)
try:
base64_key = data['os_crypt']['encrypted_key']


@@ -10,10 +10,15 @@
def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=None, to_stdout=False):
info_dict['protocol'] = determine_protocol(info_dict)
info_copy = info_dict.copy()
if protocol:
info_copy['protocol'] = protocol
info_copy['to_stdout'] = to_stdout
return _get_suitable_downloader(info_copy, params, default)
downloaders = [_get_suitable_downloader(info_copy, proto, params, default)
for proto in (protocol or info_copy['protocol']).split('+')]
if set(downloaders) == {FFmpegFD} and FFmpegFD.can_merge_formats(info_copy, params):
return FFmpegFD
elif len(downloaders) == 1:
return downloaders[0]
return None
# Some of these require get_suitable_downloader
@@ -36,6 +41,7 @@ def get_suitable_downloader(info_dict, params={}, default=NO_DEFAULT, protocol=N
PROTOCOL_MAP = {
'rtmp': RtmpFD,
'rtmpe': RtmpFD,
'rtmp_ffmpeg': FFmpegFD,
'm3u8_native': HlsFD,
'm3u8': FFmpegFD,
@@ -72,7 +78,7 @@ def shorten_protocol_name(proto, simplify=False):
return short_protocol_names.get(proto, proto)
def _get_suitable_downloader(info_dict, params, default):
def _get_suitable_downloader(info_dict, protocol, params, default):
"""Get the downloader class that can handle the info dict."""
if default is NO_DEFAULT:
default = HttpFD
@@ -80,7 +86,7 @@ def _get_suitable_downloader(info_dict, params, default):
# if (info_dict.get('start_time') or info_dict.get('end_time')) and not info_dict.get('requested_formats') and FFmpegFD.can_download(info_dict):
# return FFmpegFD
protocol = info_dict['protocol']
info_dict['protocol'] = protocol
downloaders = params.get('external_downloader')
external_downloader = (
downloaders if isinstance(downloaders, compat_str) or downloaders is None


@@ -1,6 +1,5 @@
from __future__ import division, unicode_literals
import copy
import os
import re
import time
@@ -13,6 +12,7 @@
format_bytes,
shell_quote,
timeconvert,
timetuple_from_msec,
)
from ..minicurses import (
MultilineLogger,
@@ -76,14 +76,12 @@ def __init__(self, ydl, params):
@staticmethod
def format_seconds(seconds):
(mins, secs) = divmod(seconds, 60)
(hours, mins) = divmod(mins, 60)
if hours > 99:
time = timetuple_from_msec(seconds * 1000)
if time.hours > 99:
return '--:--:--'
if hours == 0:
return '%02d:%02d' % (mins, secs)
else:
return '%02d:%02d:%02d' % (hours, mins, secs)
if not time.hours:
return '%02d:%02d' % time[1:-1]
return '%02d:%02d:%02d' % time[:-1]
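
format_seconds now delegates the hours/minutes/seconds split to the timetuple_from_msec helper. A small sketch of what that helper returns (the input value is arbitrary):

    from yt_dlp.utils import timetuple_from_msec

    t = timetuple_from_msec(5025678)
    print(t.hours, t.minutes, t.seconds, t.milliseconds)   # 1 23 45 678
    print('%02d:%02d:%02d' % t[:-1])                       # 01:23:45
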
@staticmethod
def calc_percent(byte_counter, data_len):
@@ -95,6 +93,8 @@ def calc_percent(byte_counter, data_len):
def format_percent(percent):
if percent is None:
return '---.-%'
elif percent == 100:
return '100%'
return '%6s' % ('%3.1f%%' % percent)
@staticmethod
@@ -249,11 +249,29 @@ def _prepare_multiline_status(self, lines=1):
self._multiline = BreaklineStatusPrinter(self.ydl._screen_file, lines)
else:
self._multiline = MultilinePrinter(self.ydl._screen_file, lines, not self.params.get('quiet'))
self._multiline.allow_colors = self._multiline._HAVE_FULLCAP and not self.params.get('no_color')
def _finish_multiline_status(self):
self._multiline.end()
def _report_progress_status(self, s):
_progress_styles = {
'downloaded_bytes': 'light blue',
'percent': 'light blue',
'eta': 'yellow',
'speed': 'green',
'elapsed': 'bold white',
'total_bytes': '',
'total_bytes_estimate': '',
}
def _report_progress_status(self, s, default_template):
for name, style in self._progress_styles.items():
name = f'_{name}_str'
if name not in s:
continue
s[name] = self._format_progress(s[name], style)
s['_default_template'] = default_template % s
progress_dict = s.copy()
progress_dict.pop('info_dict')
progress_dict = {'info': s['info_dict'], 'progress': progress_dict}
@@ -266,6 +284,10 @@ def _report_progress_status(self, s):
progress_template.get('download-title') or 'yt-dlp %(progress._default_template)s',
progress_dict))
def _format_progress(self, *args, **kwargs):
return self.ydl._format_text(
self._multiline.stream, self._multiline.allow_colors, *args, **kwargs)
def report_progress(self, s):
if s['status'] == 'finished':
if self.params.get('noprogress'):
@@ -278,8 +300,7 @@ def report_progress(self, s):
s['_elapsed_str'] = self.format_seconds(s['elapsed'])
msg_template += ' in %(_elapsed_str)s'
s['_percent_str'] = self.format_percent(100)
s['_default_template'] = msg_template % s
self._report_progress_status(s)
self._report_progress_status(s, msg_template)
return
if s['status'] != 'downloading':
@@ -288,7 +309,7 @@ def report_progress(self, s):
if s.get('eta') is not None:
s['_eta_str'] = self.format_eta(s['eta'])
else:
s['_eta_str'] = 'Unknown ETA'
s['_eta_str'] = 'Unknown'
if s.get('total_bytes') and s.get('downloaded_bytes') is not None:
s['_percent_str'] = self.format_percent(100 * s['downloaded_bytes'] / s['total_bytes'])
@@ -320,9 +341,12 @@ def report_progress(self, s):
else:
msg_template = '%(_downloaded_bytes_str)s at %(_speed_str)s'
else:
msg_template = '%(_percent_str)s % at %(_speed_str)s ETA %(_eta_str)s'
s['_default_template'] = msg_template % s
self._report_progress_status(s)
msg_template = '%(_percent_str)s at %(_speed_str)s ETA %(_eta_str)s'
if s.get('fragment_index') and s.get('fragment_count'):
msg_template += ' (frag %(fragment_index)s/%(fragment_count)s)'
elif s.get('fragment_index'):
msg_template += ' (frag %(fragment_index)s)'
self._report_progress_status(s, msg_template)
def report_resuming_byte(self, resume_len):
"""Report attempt to resume at given byte."""
@@ -405,13 +429,10 @@ def real_download(self, filename, info_dict):
def _hook_progress(self, status, info_dict):
if not self._progress_hooks:
return
info_dict = dict(info_dict)
for key in ('__original_infodict', '__postprocessors'):
info_dict.pop(key, None)
status['info_dict'] = info_dict
# youtube-dl passes the same status object to all the hooks.
# Some third party scripts seems to be relying on this.
# So keep this behavior if possible
status['info_dict'] = copy.deepcopy(info_dict)
for ph in self._progress_hooks:
ph(status)


@@ -55,9 +55,8 @@ def real_download(self, filename, info_dict):
if real_downloader:
self.to_screen(
'[%s] Fragment downloads will be delegated to %s' % (self.FD_NAME, real_downloader.get_basename()))
info_copy = info_dict.copy()
info_copy['fragments'] = fragments_to_download
info_dict['fragments'] = fragments_to_download
fd = real_downloader(self.ydl, self.params)
return fd.real_download(filename, info_copy)
return fd.real_download(filename, info_dict)
return self.download_and_append_fragments(ctx, fragments_to_download, info_dict)


@@ -21,8 +21,7 @@
encodeArgument,
handle_youtubedl_headers,
check_executable,
is_outdated_version,
process_communicate_or_kill,
Popen,
sanitize_open,
)
@@ -115,55 +114,54 @@ def _call_downloader(self, tmpfilename, info_dict):
self._debug_cmd(cmd)
if 'fragments' in info_dict:
fragment_retries = self.params.get('fragment_retries', 0)
skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
count = 0
while count <= fragment_retries:
p = subprocess.Popen(
cmd, stderr=subprocess.PIPE)
_, stderr = process_communicate_or_kill(p)
if p.returncode == 0:
break
# TODO: Decide whether to retry based on error code
# https://aria2.github.io/manual/en/html/aria2c.html#exit-status
self.to_stderr(stderr.decode('utf-8', 'replace'))
count += 1
if count <= fragment_retries:
self.to_screen(
'[%s] Got error. Retrying fragments (attempt %d of %s)...'
% (self.get_basename(), count, self.format_retries(fragment_retries)))
if count > fragment_retries:
if not skip_unavailable_fragments:
self.report_error('Giving up after %s fragment retries' % fragment_retries)
return -1
decrypt_fragment = self.decrypter(info_dict)
dest, _ = sanitize_open(tmpfilename, 'wb')
for frag_index, fragment in enumerate(info_dict['fragments']):
fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
try:
src, _ = sanitize_open(fragment_filename, 'rb')
except IOError:
if skip_unavailable_fragments and frag_index > 1:
self.to_screen('[%s] Skipping fragment %d ...' % (self.get_basename(), frag_index))
continue
self.report_error('Unable to open fragment %d' % frag_index)
return -1
dest.write(decrypt_fragment(fragment, src.read()))
src.close()
if not self.params.get('keep_fragments', False):
os.remove(encodeFilename(fragment_filename))
dest.close()
os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
else:
p = subprocess.Popen(
cmd, stderr=subprocess.PIPE)
_, stderr = process_communicate_or_kill(p)
if 'fragments' not in info_dict:
p = Popen(cmd, stderr=subprocess.PIPE)
_, stderr = p.communicate_or_kill()
if p.returncode != 0:
self.to_stderr(stderr.decode('utf-8', 'replace'))
return p.returncode
return p.returncode
fragment_retries = self.params.get('fragment_retries', 0)
skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
count = 0
while count <= fragment_retries:
p = Popen(cmd, stderr=subprocess.PIPE)
_, stderr = p.communicate_or_kill()
if p.returncode == 0:
break
# TODO: Decide whether to retry based on error code
# https://aria2.github.io/manual/en/html/aria2c.html#exit-status
self.to_stderr(stderr.decode('utf-8', 'replace'))
count += 1
if count <= fragment_retries:
self.to_screen(
'[%s] Got error. Retrying fragments (attempt %d of %s)...'
% (self.get_basename(), count, self.format_retries(fragment_retries)))
if count > fragment_retries:
if not skip_unavailable_fragments:
self.report_error('Giving up after %s fragment retries' % fragment_retries)
return -1
decrypt_fragment = self.decrypter(info_dict)
dest, _ = sanitize_open(tmpfilename, 'wb')
for frag_index, fragment in enumerate(info_dict['fragments']):
fragment_filename = '%s-Frag%d' % (tmpfilename, frag_index)
try:
src, _ = sanitize_open(fragment_filename, 'rb')
except IOError as err:
if skip_unavailable_fragments and frag_index > 1:
self.report_skip_fragment(frag_index, err)
continue
self.report_error(f'Unable to open fragment {frag_index}; {err}')
return -1
dest.write(decrypt_fragment(fragment, src.read()))
src.close()
if not self.params.get('keep_fragments', False):
os.remove(encodeFilename(fragment_filename))
dest.close()
os.remove(encodeFilename('%s.frag.urls' % tmpfilename))
return 0
class CurlFD(ExternalFD):
@@ -198,8 +196,8 @@ def _call_downloader(self, tmpfilename, info_dict):
self._debug_cmd(cmd)
# curl writes the progress to stderr so don't capture it.
p = subprocess.Popen(cmd)
process_communicate_or_kill(p)
p = Popen(cmd)
p.communicate_or_kill()
return p.returncode
@@ -327,6 +325,10 @@ def available(cls, path=None):
# Fixme: This may be wrong when --ffmpeg-location is used
return FFmpegPostProcessor().available
@classmethod
def supports(cls, info_dict):
return all(proto in cls.SUPPORTED_PROTOCOLS for proto in info_dict['protocol'].split('+'))
def on_process_started(self, proc, stdin):
""" Override this in subclasses """
pass
@@ -441,8 +443,7 @@ def _call_downloader(self, tmpfilename, info_dict):
if info_dict.get('requested_formats') or protocol == 'http_dash_segments':
for (i, fmt) in enumerate(info_dict.get('requested_formats') or [info_dict]):
stream_number = fmt.get('manifest_stream_number', 0)
a_or_v = 'a' if fmt.get('acodec') != 'none' else 'v'
args.extend(['-map', f'{i}:{a_or_v}:{stream_number}'])
args.extend(['-map', f'{i}:{stream_number}'])
if self.params.get('test', False):
args += ['-fs', compat_str(self._TEST_FILE_SIZE)]
@@ -456,7 +457,7 @@ def _call_downloader(self, tmpfilename, info_dict):
args += ['-f', 'mpegts']
else:
args += ['-f', 'mp4']
if (ffpp.basename == 'ffmpeg' and is_outdated_version(ffpp._versions['ffmpeg'], '3.2', False)) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
if (ffpp.basename == 'ffmpeg' and ffpp._features.get('needs_adtstoasc')) and (not info_dict.get('acodec') or info_dict['acodec'].split('.')[0] in ('aac', 'mp4a')):
args += ['-bsf:a', 'aac_adtstoasc']
elif protocol == 'rtmp':
args += ['-f', 'flv']
@@ -471,7 +472,7 @@ def _call_downloader(self, tmpfilename, info_dict):
args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
self._debug_cmd(args)
proc = subprocess.Popen(args, stdin=subprocess.PIPE, env=env)
proc = Popen(args, stdin=subprocess.PIPE, env=env)
if url in ('-', 'pipe:'):
self.on_process_started(proc, proc.stdin)
try:
@@ -483,7 +484,7 @@ def _call_downloader(self, tmpfilename, info_dict):
# streams). Note that Windows is not affected and produces playable
# files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32' and url not in ('-', 'pipe:'):
process_communicate_or_kill(proc, b'q')
proc.communicate_or_kill(b'q')
else:
proc.kill()
proc.wait()


@@ -31,6 +31,10 @@ class HttpQuietDownloader(HttpFD):
def to_screen(self, *args, **kargs):
pass
def report_retry(self, err, count, retries):
super().to_screen(
f'[download] Got server HTTP error: {err}. Retrying (attempt {count} of {self.format_retries(retries)}) ...')
class FragmentFD(FileDownloader):
"""
@@ -44,6 +48,7 @@ class FragmentFD(FileDownloader):
Skip unavailable fragments (DASH and hlsnative only)
keep_fragments: Keep downloaded fragments on disk after downloading is
finished
concurrent_fragment_downloads: The number of threads to use for native hls and dash downloads
_no_ytdl_file: Don't use .ytdl file
For each incomplete fragment download yt-dlp keeps on disk a special
@@ -72,8 +77,9 @@ def report_retry_fragment(self, err, frag_index, count, retries):
'\r[download] Got server HTTP error: %s. Retrying fragment %d (attempt %d of %s) ...'
% (error_to_compat_str(err), frag_index, count, self.format_retries(retries)))
def report_skip_fragment(self, frag_index):
self.to_screen('[download] Skipping fragment %d ...' % frag_index)
def report_skip_fragment(self, frag_index, err=None):
err = f' {err};' if err else ''
self.to_screen(f'[download]{err} Skipping fragment {frag_index:d} ...')
def _prepare_url(self, info_dict, url):
headers = info_dict.get('http_headers')
@@ -166,7 +172,7 @@ def _prepare_frag_download(self, ctx):
self.ydl,
{
'continuedl': True,
'quiet': True,
'quiet': self.params.get('quiet'),
'noprogress': True,
'ratelimit': self.params.get('ratelimit'),
'retries': self.params.get('retries', 0),
@@ -236,6 +242,7 @@ def _start_frag_download(self, ctx, info_dict):
start = time.time()
ctx.update({
'started': start,
'fragment_started': start,
# Amount of fragment's bytes downloaded by the time of the previous
# frag progress hook invocation
'prev_frag_downloaded_bytes': 0,
@@ -266,6 +273,9 @@ def frag_progress_hook(s):
ctx['fragment_index'] = state['fragment_index']
state['downloaded_bytes'] += frag_total_bytes - ctx['prev_frag_downloaded_bytes']
ctx['complete_frags_downloaded_bytes'] = state['downloaded_bytes']
ctx['speed'] = state['speed'] = self.calc_speed(
ctx['fragment_started'], time_now, frag_total_bytes)
ctx['fragment_started'] = time.time()
ctx['prev_frag_downloaded_bytes'] = 0
else:
frag_downloaded_bytes = s['downloaded_bytes']
@@ -274,8 +284,8 @@ def frag_progress_hook(s):
state['eta'] = self.calc_eta(
start, time_now, estimated_size - resume_len,
state['downloaded_bytes'] - resume_len)
state['speed'] = s.get('speed') or ctx.get('speed')
ctx['speed'] = state['speed']
ctx['speed'] = state['speed'] = self.calc_speed(
ctx['fragment_started'], time_now, frag_downloaded_bytes)
ctx['prev_frag_downloaded_bytes'] = frag_downloaded_bytes
self._hook_progress(state, info_dict)
@@ -369,7 +379,8 @@ def download_and_append_fragments_multiple(self, *args, pack_func=None, finish_f
if max_progress == 1:
return self.download_and_append_fragments(*args[0], pack_func=pack_func, finish_func=finish_func)
max_workers = self.params.get('concurrent_fragment_downloads', max_progress)
self._prepare_multiline_status(max_progress)
if max_progress > 1:
self._prepare_multiline_status(max_progress)
def thread_func(idx, ctx, fragments, info_dict, tpe):
ctx['max_progress'] = max_progress
@@ -443,7 +454,7 @@ def download_fragment(fragment, ctx):
def append_fragment(frag_content, frag_index, ctx):
if not frag_content:
if not is_fatal(frag_index - 1):
self.report_skip_fragment(frag_index)
self.report_skip_fragment(frag_index, 'fragment not found')
return True
else:
ctx['dest_stream'].close()


@@ -77,6 +77,15 @@ def real_download(self, filename, info_dict):
message = ('The stream has AES-128 encryption and neither ffmpeg nor pycryptodomex are available; '
'Decryption will be performed natively, but will be extremely slow')
if not can_download:
has_drm = re.search('|'.join([
r'#EXT-X-FAXS-CM:', # Adobe Flash Access
r'#EXT-X-(?:SESSION-)?KEY:.*?URI="skd://', # Apple FairPlay
]), s)
if has_drm and not self.params.get('allow_unplayable_formats'):
self.report_error(
'This video is DRM protected; Try selecting another format with --format or '
'add --check-formats to automatically fallback to the next best format')
return False
message = message or 'Unsupported features have been detected'
fd = FFmpegFD(self.ydl, self.params)
self.report_warning(f'{message}; extraction will be delegated to {fd.get_basename()}')
@@ -245,13 +254,12 @@ def is_ad_fragment_end(s):
fragments = [fragments[0] if fragments else None]
if real_downloader:
info_copy = info_dict.copy()
info_copy['fragments'] = fragments
info_dict['fragments'] = fragments
fd = real_downloader(self.ydl, self.params)
# TODO: Make progress updates work without hooking twice
# for ph in self._progress_hooks:
# fd.add_progress_hook(ph)
return fd.real_download(filename, info_copy)
return fd.real_download(filename, info_dict)
if is_webvtt:
def pack_fragment(frag_content, frag_index):


@@ -191,11 +191,13 @@ def establish_connection():
# Unexpected HTTP error
raise
raise RetryDownload(err)
except socket.error as err:
if err.errno != errno.ECONNRESET:
# Connection reset is no problem, just retry
raise
except socket.timeout as err:
raise RetryDownload(err)
except socket.error as err:
if err.errno in (errno.ECONNRESET, errno.ETIMEDOUT):
# Connection reset is no problem, just retry
raise RetryDownload(err)
raise
def download():
nonlocal throttle_start
@@ -373,6 +375,8 @@ def retry(e):
count += 1
if count <= retries:
self.report_retry(e.source_error, count, retries)
else:
self.to_screen(f'[download] Got server HTTP error: {e.source_error}')
continue
except NextFragment:
continue


@@ -114,8 +114,8 @@ def real_download(self, filename, info_dict):
fragment_base_url = info_dict.get('fragment_base_url')
fragments = info_dict['fragments'][:1] if self.params.get(
'test', False) else info_dict['fragments']
title = info_dict['title']
origin = info_dict['webpage_url']
title = info_dict.get('title', info_dict['format_id'])
origin = info_dict.get('webpage_url', info_dict['url'])
ctx = {
'filename': filename,


@@ -12,6 +12,7 @@
encodeFilename,
encodeArgument,
get_exe_version,
Popen,
)
@@ -26,7 +27,7 @@ def run_rtmpdump(args):
start = time.time()
resume_percent = None
resume_downloaded_data_len = None
proc = subprocess.Popen(args, stderr=subprocess.PIPE)
proc = Popen(args, stderr=subprocess.PIPE)
cursor_in_new_line = True
proc_stderr_closed = False
try:


@@ -1,14 +1,15 @@
from __future__ import unicode_literals
import os
from ..utils import load_plugins
try:
from .lazy_extractors import *
from .lazy_extractors import _ALL_CLASSES
_LAZY_LOADER = True
_PLUGIN_CLASSES = {}
except ImportError:
_LAZY_LOADER = False
_LAZY_LOADER = False
if not os.environ.get('YTDLP_NO_LAZY_EXTRACTORS'):
try:
from .lazy_extractors import *
from .lazy_extractors import _ALL_CLASSES
_LAZY_LOADER = True
except ImportError:
pass
if not _LAZY_LOADER:
from .extractors import *
@@ -19,8 +20,8 @@
]
_ALL_CLASSES.append(GenericIE)
_PLUGIN_CLASSES = load_plugins('extractor', 'IE', globals())
_ALL_CLASSES = list(_PLUGIN_CLASSES.values()) + _ALL_CLASSES
_PLUGIN_CLASSES = load_plugins('extractor', 'IE', globals())
_ALL_CLASSES = list(_PLUGIN_CLASSES.values()) + _ALL_CLASSES
def gen_extractor_classes():

View File

@@ -15,6 +15,7 @@
compat_ord,
)
from ..utils import (
ass_subtitles_timecode,
bytes_to_intlist,
bytes_to_long,
ExtractorError,
@@ -68,10 +69,6 @@ class ADNIE(InfoExtractor):
'end': 4,
}
@staticmethod
def _ass_subtitles_timecode(seconds):
return '%01d:%02d:%02d.%02d' % (seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)
def _get_subtitles(self, sub_url, video_id):
if not sub_url:
return None
@@ -117,8 +114,8 @@ def _get_subtitles(self, sub_url, video_id):
continue
alignment = self._POS_ALIGN_MAP.get(position_align, 2) + self._LINE_ALIGN_MAP.get(line_align, 0)
ssa += os.linesep + 'Dialogue: Marked=0,%s,%s,Default,,0,0,0,,%s%s' % (
self._ass_subtitles_timecode(start),
self._ass_subtitles_timecode(end),
ass_subtitles_timecode(start),
ass_subtitles_timecode(end),
'{\\a%d}' % alignment if alignment != 2 else '',
text.replace('\n', '\\N').replace('<i>', '{\\i1}').replace('</i>', '{\\i0}'))


@@ -39,8 +39,8 @@
},
'RCN': {
'name': 'RCN',
'username_field': 'UserName',
'password_field': 'UserPassword',
'username_field': 'username',
'password_field': 'password',
},
'Rogers': {
'name': 'Rogers',


@@ -9,6 +9,7 @@
float_or_none,
int_or_none,
ISO639Utils,
join_nonempty,
OnDemandPagedList,
parse_duration,
str_or_none,
@@ -263,7 +264,7 @@ def _real_extract(self, url):
continue
formats.append({
'filesize': int_or_none(source.get('kilobytes') or None, invscale=1000),
'format_id': '-'.join(filter(None, [source.get('format'), source.get('label')])),
'format_id': join_nonempty(source.get('format'), source.get('label')),
'height': int_or_none(source.get('height') or None),
'tbr': int_or_none(source.get('bitrate') or None),
'width': int_or_none(source.get('width') or None),
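
The format_id is now built with the join_nonempty helper instead of '-'.join(filter(None, ...)). A quick sketch of its behaviour (the values are examples):

    from yt_dlp.utils import join_nonempty

    print(join_nonempty('http', None, '720p'))         # http-720p
    print(join_nonempty(1080, '', 'dash', delim='_'))  # 1080_dash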


@@ -1,55 +1,86 @@
# coding: utf-8
from __future__ import unicode_literals
import json
from .common import InfoExtractor
from ..utils import (
try_get,
)
class AlJazeeraIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?aljazeera\.com/(?P<type>program/[^/]+|(?:feature|video)s)/\d{4}/\d{1,2}/\d{1,2}/(?P<id>[^/?&#]+)'
_VALID_URL = r'https?://(?P<base>\w+\.aljazeera\.\w+)/(?P<type>programs?/[^/]+|(?:feature|video|new)s)?/\d{4}/\d{1,2}/\d{1,2}/(?P<id>[^/?&#]+)'
_TESTS = [{
'url': 'https://www.aljazeera.com/program/episode/2014/9/19/deliverance',
'url': 'https://balkans.aljazeera.net/videos/2021/11/6/pojedini-domovi-u-sarajevu-jos-pod-vodom-mjestanima-se-dostavlja-hrana',
'info_dict': {
'id': '3792260579001',
'id': '6280641530001',
'ext': 'mp4',
'title': 'The Slum - Episode 1: Deliverance',
'description': 'As a birth attendant advocating for family planning, Remy is on the frontline of Tondo\'s battle with overcrowding.',
'uploader_id': '665003303001',
'timestamp': 1411116829,
'upload_date': '20140919',
'title': 'Pojedini domovi u Sarajevu još pod vodom, mještanima se dostavlja hrana',
'timestamp': 1636219149,
'description': 'U sarajevskim naseljima Rajlovac i Reljevo stambeni objekti, ali i industrijska postrojenja i dalje su pod vodom.',
'upload_date': '20211106',
}
}, {
'url': 'https://balkans.aljazeera.net/videos/2021/11/6/djokovic-usao-u-finale-mastersa-u-parizu',
'info_dict': {
'id': '6280654936001',
'ext': 'mp4',
'title': 'Đoković ušao u finale Mastersa u Parizu',
'timestamp': 1636221686,
'description': 'Novak Đoković je u polufinalu Mastersa u Parizu nakon preokreta pobijedio Poljaka Huberta Hurkacza.',
'upload_date': '20211106',
},
'add_ie': ['BrightcoveNew'],
'skip': 'Not accessible from Travis CI server',
}, {
'url': 'https://www.aljazeera.com/videos/2017/5/11/sierra-leone-709-carat-diamond-to-be-auctioned-off',
'only_matching': True,
}, {
'url': 'https://www.aljazeera.com/features/2017/8/21/transforming-pakistans-buses-into-art',
'only_matching': True,
}]
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
BRIGHTCOVE_URL_RE = r'https?://players.brightcove.net/(?P<account>\d+)/(?P<player_id>[a-zA-Z0-9]+)_(?P<embed>[^/]+)/index.html\?videoId=(?P<id>\d+)'
def _real_extract(self, url):
post_type, name = self._match_valid_url(url).groups()
base, post_type, id = self._match_valid_url(url).groups()
wp = {
'balkans.aljazeera.net': 'ajb',
'chinese.aljazeera.net': 'chinese',
'mubasher.aljazeera.net': 'ajm',
}.get(base) or 'aje'
post_type = {
'features': 'post',
'program': 'episode',
'programs': 'episode',
'videos': 'video',
'news': 'news',
}[post_type.split('/')[0]]
video = self._download_json(
'https://www.aljazeera.com/graphql', name, query={
f'https://{base}/graphql', id, query={
'wp-site': wp,
'operationName': 'ArchipelagoSingleArticleQuery',
'variables': json.dumps({
'name': name,
'name': id,
'postType': post_type,
}),
}, headers={
'wp-site': 'aje',
})['data']['article']['video']
video_id = video['id']
account_id = video.get('accountId') or '665003303001'
player_id = video.get('playerId') or 'BkeSH5BDb'
return self.url_result(
self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),
'BrightcoveNew', video_id)
'wp-site': wp,
})
video = try_get(video, lambda x: x['data']['article']['video']) or {}
video_id = video.get('id')
account = video.get('accountId') or '911432371001'
player_id = video.get('playerId') or 'csvTfAlKW'
embed = 'default'
if video_id is None:
webpage = self._download_webpage(url, id)
account, player_id, embed, video_id = self._search_regex(self.BRIGHTCOVE_URL_RE, webpage, 'video id',
group=(1, 2, 3, 4), default=(None, None, None, None))
if video_id is None:
return {
'_type': 'url_transparent',
'url': url,
'ie_key': 'Generic'
}
return {
'_type': 'url_transparent',
'url': f'https://players.brightcove.net/{account}/{player_id}_{embed}/index.html?videoId={video_id}',
'ie_key': 'BrightcoveNew'
}

View File

@@ -0,0 +1,53 @@
# coding: utf-8
from .common import InfoExtractor
from ..utils import int_or_none
class AmazonStoreIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?amazon\.(?:[a-z]{2,3})(?:\.[a-z]{2})?/(?:[^/]+/)?(?:dp|gp/product)/(?P<id>[^/&#$?]+)'
_TESTS = [{
'url': 'https://www.amazon.co.uk/dp/B098XNCHLD/',
'info_dict': {
'id': 'B098XNCHLD',
'title': 'md5:5f3194dbf75a8dcfc83079bd63a2abed',
},
'playlist_mincount': 1,
'playlist': [{
'info_dict': {
'id': 'A1F83G8C2ARO7P',
'ext': 'mp4',
'title': 'mcdodo usb c cable 100W 5a',
'thumbnail': r're:^https?://.*\.jpg$',
},
}]
}, {
'url': 'https://www.amazon.in/Sony-WH-1000XM4-Cancelling-Headphones-Bluetooth/dp/B0863TXGM3',
'info_dict': {
'id': 'B0863TXGM3',
'title': 'md5:b0bde4881d3cfd40d63af19f7898b8ff',
},
'playlist_mincount': 4,
}, {
'url': 'https://www.amazon.com/dp/B0845NXCXF/',
'info_dict': {
'id': 'B0845NXCXF',
'title': 'md5:2145cd4e3c7782f1ee73649a3cff1171',
},
'playlist_mincount': 1,
}]
def _real_extract(self, url):
id = self._match_id(url)
webpage = self._download_webpage(url, id)
data_json = self._parse_json(self._html_search_regex(r'var\s?obj\s?=\s?jQuery\.parseJSON\(\'(.*)\'\)', webpage, 'data'), id)
entries = [{
'id': video['marketPlaceID'],
'url': video['url'],
'title': video.get('title'),
'thumbnail': video.get('thumbUrl') or video.get('thumb'),
'duration': video.get('durationSeconds'),
'height': int_or_none(video.get('videoHeight')),
'width': int_or_none(video.get('videoWidth')),
} for video in (data_json.get('videos') or []) if video.get('isVideo') and video.get('url')]
return self.playlist_result(entries, playlist_id=id, playlist_title=data_json['title'])

View File

@@ -8,6 +8,7 @@
determine_ext,
extract_attributes,
ExtractorError,
join_nonempty,
url_or_none,
urlencode_postdata,
urljoin,
@@ -140,15 +141,8 @@ def extract_info(html, video_id, num=None):
kind = self._search_regex(
r'videomaterialurl/\d+/([^/]+)/',
playlist_url, 'media kind', default=None)
format_id_list = []
if lang:
format_id_list.append(lang)
if kind:
format_id_list.append(kind)
if not format_id_list and num is not None:
format_id_list.append(compat_str(num))
format_id = '-'.join(format_id_list)
format_note = ', '.join(filter(None, (kind, lang_note)))
format_id = join_nonempty(lang, kind) if lang or kind else str(num)
format_note = join_nonempty(kind, lang_note, delim=', ')
item_id_list = []
if format_id:
item_id_list.append(format_id)
@@ -195,12 +189,10 @@ def extract_info(html, video_id, num=None):
if not file_:
continue
ext = determine_ext(file_)
format_id_list = [lang, kind]
if ext == 'm3u8':
format_id_list.append('hls')
elif source.get('type') == 'video/dash' or ext == 'mpd':
format_id_list.append('dash')
format_id = '-'.join(filter(None, format_id_list))
format_id = join_nonempty(
lang, kind,
'hls' if ext == 'm3u8' else None,
'dash' if source.get('type') == 'video/dash' or ext == 'mpd' else None)
if ext == 'm3u8':
file_formats = self._extract_m3u8_formats(
file_, video_id, 'mp4',

View File

@@ -16,6 +16,7 @@
determine_ext,
intlist_to_bytes,
int_or_none,
join_nonempty,
strip_jsonp,
unescapeHTML,
unsmuggle_url,
@@ -303,13 +304,13 @@ def _get_anvato_videos(self, access_key, video_id):
tbr = int_or_none(published_url.get('kbps'))
a_format = {
'url': video_url,
'format_id': ('-'.join(filter(None, ['http', published_url.get('cdn_name')]))).lower(),
'tbr': tbr if tbr != 0 else None,
'format_id': join_nonempty('http', published_url.get('cdn_name')).lower(),
'tbr': tbr or None,
}
if media_format == 'm3u8' and tbr is not None:
a_format.update({
'format_id': '-'.join(filter(None, ['hls', compat_str(tbr)])),
'format_id': join_nonempty('hls', tbr),
'ext': 'mp4',
})
elif media_format == 'm3u8-variant' or ext == 'm3u8':

View File

@@ -388,7 +388,13 @@ def _real_extract(self, url):
class ARDBetaMediathekIE(ARDMediathekBaseIE):
_VALID_URL = r'https://(?:(?:beta|www)\.)?ardmediathek\.de/(?P<client>[^/]+)/(?P<mode>player|live|video|sendung|sammlung)/(?P<display_id>(?:[^/]+/)*)(?P<video_id>[a-zA-Z0-9]+)'
_VALID_URL = r'''(?x)https://
(?:(?:beta|www)\.)?ardmediathek\.de/
(?:(?P<client>[^/]+)/)?
(?:player|live|video|(?P<playlist>sendung|sammlung))/
(?:(?P<display_id>[^?#]+)/)?
(?P<id>(?(playlist)|Y3JpZDovL)[a-zA-Z0-9]+)'''
_TESTS = [{
'url': 'https://www.ardmediathek.de/mdr/video/die-robuste-roswita/Y3JpZDovL21kci5kZS9iZWl0cmFnL2Ntcy84MWMxN2MzZC0wMjkxLTRmMzUtODk4ZS0wYzhlOWQxODE2NGI/',
'md5': 'a1dc75a39c61601b980648f7c9f9f71d',
@@ -403,6 +409,18 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
'upload_date': '20200805',
'ext': 'mp4',
},
'skip': 'Error',
}, {
'url': 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll',
'md5': 'f1837e563323b8a642a8ddeff0131f51',
'info_dict': {
'id': '10049223',
'ext': 'mp4',
'title': 'tagesschau, 20:00 Uhr',
'timestamp': 1636398000,
'description': 'md5:39578c7b96c9fe50afdf5674ad985e6b',
'upload_date': '20211108',
},
}, {
'url': 'https://beta.ardmediathek.de/ard/video/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE',
'only_matching': True,
@@ -426,6 +444,12 @@ class ARDBetaMediathekIE(ARDMediathekBaseIE):
# playlist of type 'sammlung'
'url': 'https://www.ardmediathek.de/ard/sammlung/team-muenster/5JpTzLSbWUAK8184IOvEir/',
'only_matching': True,
}, {
'url': 'https://www.ardmediathek.de/video/coronavirus-update-ndr-info/astrazeneca-kurz-lockdown-und-pims-syndrom-81/ndr/Y3JpZDovL25kci5kZS84NzE0M2FjNi0wMWEwLTQ5ODEtOTE5NS1mOGZhNzdhOTFmOTI/',
'only_matching': True,
}, {
'url': 'https://www.ardmediathek.de/ard/player/Y3JpZDovL3dkci5kZS9CZWl0cmFnLWQ2NDJjYWEzLTMwZWYtNGI4NS1iMTI2LTU1N2UxYTcxOGIzOQ/tatort-duo-koeln-leipzig-ihr-kinderlein-kommet',
'only_matching': True,
}]
def _ARD_load_playlist_snipped(self, playlist_id, display_id, client, mode, pageNumber):
@@ -525,20 +549,12 @@ def _ARD_extract_playlist(self, url, playlist_id, display_id, client, mode):
return self.playlist_result(entries, playlist_title=display_id)
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('video_id')
display_id = mobj.group('display_id')
if display_id:
display_id = display_id.rstrip('/')
if not display_id:
display_id = video_id
video_id, display_id, playlist_type, client = self._match_valid_url(url).group(
'id', 'display_id', 'playlist', 'client')
display_id, client = display_id or video_id, client or 'ard'
if mobj.group('mode') in ('sendung', 'sammlung'):
# this is a playlist-URL
return self._ARD_extract_playlist(
url, video_id, display_id,
mobj.group('client'),
mobj.group('mode'))
if playlist_type:
return self._ARD_extract_playlist(url, video_id, display_id, client, playlist_type)
player_page = self._download_json(
'https://api.ardmediathek.de/public-gateway',
@@ -574,7 +590,7 @@ def _real_extract(self, url):
}
}
}
}''' % (mobj.group('client'), video_id),
}''' % (client, video_id),
}).encode(), headers={
'Content-Type': 'application/json'
})['data']['playerPage']

View File

@@ -24,9 +24,6 @@ class AtresPlayerIE(InfoExtractor):
'description': 'md5:7634cdcb4d50d5381bedf93efb537fbc',
'duration': 3413,
},
'params': {
'format': 'bestvideo',
},
'skip': 'This video is only available for registered users'
},
{

View File

@@ -21,7 +21,6 @@ class BandaiChannelIE(BrightcoveNewIE):
'duration': 1387.733,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}]

View File

@@ -97,21 +97,16 @@ def _call_api(self, video_id, id, operation, note):
'query': self._GRAPHQL_QUERIES[operation]
}).encode('utf8')).get('data')
def _extract_comments(self, video_id, comments, comment_data):
def _get_comments(self, video_id, comments, comment_data):
yield from comments
for comment in comment_data.copy():
comment_id = comment.get('_id')
if comment.get('replyCount') > 0:
reply_json = self._call_api(
video_id, comment_id, 'GetCommentReplies',
f'Downloading replies for comment {comment_id}')
comments.extend(
self._parse_comment(reply, comment_id)
for reply in reply_json.get('getCommentReplies'))
return {
'comments': comments,
'comment_count': len(comments),
}
for reply in reply_json.get('getCommentReplies'):
yield self._parse_comment(reply, comment_id)
@staticmethod
def _parse_comment(comment_data, parent):
@@ -159,7 +154,5 @@ def _real_extract(self, url):
'tags': [tag.get('name') for tag in video_info.get('tags')],
'availability': self._availability(is_unlisted=video_info.get('unlisted')),
'comments': comments,
'__post_extractor': (
(lambda: self._extract_comments(video_id, comments, video_json.get('getVideoComments')))
if self.get_param('getcomments') else None)
'__post_extractor': self.extract_comments(video_id, comments, video_json.get('getVideoComments'))
}

View File

@@ -451,9 +451,10 @@ def _download_playlist(self, playlist_id):
playlist = self._download_json(
'http://www.bbc.co.uk/programmes/%s/playlist.json' % playlist_id,
playlist_id, 'Downloading playlist JSON')
formats = []
subtitles = {}
version = playlist.get('defaultAvailableVersion')
if version:
for version in playlist.get('allAvailableVersions', []):
smp_config = version['smpConfig']
title = smp_config['title']
description = smp_config['summary']
@@ -463,8 +464,17 @@ def _download_playlist(self, playlist_id):
continue
programme_id = item.get('vpid')
duration = int_or_none(item.get('duration'))
formats, subtitles = self._download_media_selector(programme_id)
return programme_id, title, description, duration, formats, subtitles
version_formats, version_subtitles = self._download_media_selector(programme_id)
types = version['types']
for f in version_formats:
f['format_note'] = ', '.join(types)
if any('AudioDescribed' in x for x in types):
f['language_preference'] = -10
formats += version_formats
for tag, subformats in (version_subtitles or {}).items():
subtitles.setdefault(tag, []).extend(subformats)
return programme_id, title, description, duration, formats, subtitles
except ExtractorError as ee:
if not (isinstance(ee.cause, compat_HTTPError) and ee.cause.code == 404):
raise

View File

@@ -1,16 +1,13 @@
# coding: utf-8
from __future__ import unicode_literals
import hashlib
import itertools
import json
import functools
import re
import math
from .common import InfoExtractor, SearchInfoExtractor
from ..compat import (
compat_str,
compat_parse_qs,
compat_urlparse,
compat_urllib_parse_urlparse
@@ -20,6 +17,7 @@
int_or_none,
float_or_none,
parse_iso8601,
traverse_obj,
try_get,
smuggle_url,
srt_subtitles_timecode,
@@ -101,7 +99,7 @@ class BiliBiliIE(InfoExtractor):
'upload_date': '20170301',
},
'params': {
'skip_download': True, # Test metadata only
'skip_download': True,
},
}, {
'info_dict': {
@@ -115,7 +113,7 @@ class BiliBiliIE(InfoExtractor):
'upload_date': '20170301',
},
'params': {
'skip_download': True, # Test metadata only
'skip_download': True,
},
}]
}, {
@@ -169,7 +167,7 @@ def _real_extract(self, url):
if 'anime/' not in url:
cid = self._search_regex(
r'\bcid(?:["\']:|=)(\d+),["\']page(?:["\']:|=)' + compat_str(page_id), webpage, 'cid',
r'\bcid(?:["\']:|=)(\d+),["\']page(?:["\']:|=)' + str(page_id), webpage, 'cid',
default=None
) or self._search_regex(
r'\bcid(?:["\']:|=)(\d+)', webpage, 'cid',
@@ -259,7 +257,7 @@ def _real_extract(self, url):
# TODO: The json is already downloaded by _extract_anthology_entries. Don't redownload for each video
part_title = try_get(
self._download_json(
"https://api.bilibili.com/x/player/pagelist?bvid=%s&jsonp=jsonp" % bv_id,
f'https://api.bilibili.com/x/player/pagelist?bvid={bv_id}&jsonp=jsonp',
video_id, note='Extracting videos in anthology'),
lambda x: x['data'][int(page_id) - 1]['part'])
title = part_title or title
@@ -273,7 +271,7 @@ def _real_extract(self, url):
# TODO 'view_count' requires deobfuscating Javascript
info = {
'id': compat_str(video_id) if page_id is None else '%s_p%s' % (video_id, page_id),
'id': str(video_id) if page_id is None else '%s_part%s' % (video_id, page_id),
'cid': cid,
'title': title,
'description': description,
@@ -295,29 +293,25 @@ def _real_extract(self, url):
info['uploader'] = self._html_search_meta(
'author', webpage, 'uploader', default=None)
raw_danmaku = self._get_raw_danmaku(video_id, cid)
raw_tags = self._get_tags(video_id)
tags = list(map(lambda x: x['tag_name'], raw_tags))
top_level_info = {
'raw_danmaku': raw_danmaku,
'tags': tags,
'raw_tags': raw_tags,
'tags': traverse_obj(self._download_json(
f'https://api.bilibili.com/x/tag/archive/tags?aid={video_id}',
video_id, fatal=False, note='Downloading tags'), ('data', ..., 'tag_name')),
}
if self.get_param('getcomments', False):
def get_comments():
comments = self._get_all_comment_pages(video_id)
return {
'comments': comments,
'comment_count': len(comments)
}
top_level_info['__post_extractor'] = get_comments
entries[0]['subtitles'] = {
'danmaku': [{
'ext': 'xml',
'url': f'https://comment.bilibili.com/{cid}.xml',
}]
}
'''
r'''
# Requires https://github.com/m13253/danmaku2ass which is licenced under GPL3
# See https://github.com/animelover1984/youtube-dl
raw_danmaku = self._download_webpage(
f'https://comment.bilibili.com/{cid}.xml', video_id, fatal=False, note='Downloading danmaku comments')
danmaku = NiconicoIE.CreateDanmaku(raw_danmaku, commentType='Bilibili', x=1024, y=576)
entries[0]['subtitles'] = {
'danmaku': [{
@@ -327,40 +321,39 @@ def get_comments():
}
'''
top_level_info['__post_extractor'] = self.extract_comments(video_id)
for entry in entries:
entry.update(info)
if len(entries) == 1:
entries[0].update(top_level_info)
return entries[0]
else:
for idx, entry in enumerate(entries):
entry['id'] = '%s_part%d' % (video_id, (idx + 1))
global_info = {
'_type': 'multi_video',
'id': compat_str(video_id),
'bv_id': bv_id,
'title': title,
'description': description,
'entries': entries,
}
for idx, entry in enumerate(entries):
entry['id'] = '%s_part%d' % (video_id, (idx + 1))
global_info.update(info)
global_info.update(top_level_info)
return global_info
return {
'_type': 'multi_video',
'id': str(video_id),
'bv_id': bv_id,
'title': title,
'description': description,
'entries': entries,
**info, **top_level_info
}
def _extract_anthology_entries(self, bv_id, video_id, webpage):
title = self._html_search_regex(
(r'<h1[^>]+\btitle=(["\'])(?P<title>(?:(?!\1).)+)\1',
r'(?s)<h1[^>]*>(?P<title>.+?)</h1>'), webpage, 'title',
r'(?s)<h1[^>]*>(?P<title>.+?)</h1>',
r'<title>(?P<title>.+?)</title>'), webpage, 'title',
group='title')
json_data = self._download_json(
"https://api.bilibili.com/x/player/pagelist?bvid=%s&jsonp=jsonp" % bv_id,
f'https://api.bilibili.com/x/player/pagelist?bvid={bv_id}&jsonp=jsonp',
video_id, note='Extracting videos in anthology')
if len(json_data['data']) > 1:
if json_data['data']:
return self.playlist_from_matches(
json_data['data'], bv_id, title, ie=BiliBiliIE.ie_key(),
getter=lambda entry: 'https://www.bilibili.com/video/%s?p=%d' % (bv_id, entry['page']))
@@ -375,65 +368,33 @@ def _get_video_id_set(self, id, is_bv):
if response['code'] == -400:
raise ExtractorError('Video ID does not exist', expected=True, video_id=id)
elif response['code'] != 0:
raise ExtractorError('Unknown error occurred during API check (code %s)' % response['code'], expected=True, video_id=id)
return (response['data']['aid'], response['data']['bvid'])
raise ExtractorError(f'Unknown error occurred during API check (code {response["code"]})',
expected=True, video_id=id)
return response['data']['aid'], response['data']['bvid']
# recursive solution to getting every page of comments for the video
# we can stop when we reach a page without any comments
def _get_all_comment_pages(self, video_id, commentPageNumber=0):
comment_url = "https://api.bilibili.com/x/v2/reply?jsonp=jsonp&pn=%s&type=1&oid=%s&sort=2&_=1567227301685" % (commentPageNumber, video_id)
json_str = self._download_webpage(
comment_url, video_id,
note='Extracting comments from page %s' % (commentPageNumber))
replies = json.loads(json_str)['data']['replies']
if replies is None:
return []
return self._get_all_children(replies) + self._get_all_comment_pages(video_id, commentPageNumber + 1)
def _get_comments(self, video_id, commentPageNumber=0):
for idx in itertools.count(1):
replies = traverse_obj(
self._download_json(
f'https://api.bilibili.com/x/v2/reply?pn={idx}&oid={video_id}&type=1&jsonp=jsonp&sort=2&_=1567227301685',
video_id, note=f'Extracting comments from page {idx}', fatal=False),
('data', 'replies'))
if not replies:
return
for children in map(self._get_all_children, replies):
yield from children
# extracts all comments in the tree
def _get_all_children(self, replies):
if replies is None:
return []
ret = []
for reply in replies:
author = reply['member']['uname']
author_id = reply['member']['mid']
id = reply['rpid']
text = reply['content']['message']
timestamp = reply['ctime']
parent = reply['parent'] if reply['parent'] != 0 else 'root'
comment = {
"author": author,
"author_id": author_id,
"id": id,
"text": text,
"timestamp": timestamp,
"parent": parent,
}
ret.append(comment)
# from the JSON, the comment structure seems arbitrarily deep, but I could be wrong.
# Regardless, this should work.
ret += self._get_all_children(reply['replies'])
return ret
def _get_raw_danmaku(self, video_id, cid):
# This will be useful if I decide to scrape all pages instead of doing them individually
# cid_url = "https://www.bilibili.com/widget/getPageList?aid=%s" % (video_id)
# cid_str = self._download_webpage(cid_url, video_id, note=False)
# cid = json.loads(cid_str)[0]['cid']
danmaku_url = "https://comment.bilibili.com/%s.xml" % (cid)
danmaku = self._download_webpage(danmaku_url, video_id, note='Downloading danmaku comments')
return danmaku
def _get_tags(self, video_id):
tags_url = "https://api.bilibili.com/x/tag/archive/tags?aid=%s" % (video_id)
tags_json = self._download_json(tags_url, video_id, note='Downloading tags')
return tags_json['data']
def _get_all_children(self, reply):
yield {
'author': traverse_obj(reply, ('member', 'uname')),
'author_id': traverse_obj(reply, ('member', 'mid')),
'id': reply.get('rpid'),
'text': traverse_obj(reply, ('content', 'message')),
'timestamp': reply.get('ctime'),
'parent': reply.get('parent') or 'root',
}
for children in map(self._get_all_children, reply.get('replies') or []):
yield from children
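The rewritten Bilibili comment helpers above switch from recursively building full lists to lazily yielding each comment as it is parsed. The same pattern in isolation (using made-up reply dicts rather than the real Bilibili schema) looks like this:

def walk_replies(reply):
    # Yield the comment itself, then recurse into any nested replies lazily
    yield {'id': reply.get('rpid'), 'text': reply.get('text')}
    for child in reply.get('replies') or []:
        yield from walk_replies(child)

# list(walk_replies({'rpid': 1, 'text': 'root', 'replies': [{'rpid': 2, 'text': 'child'}]}))
# -> [{'id': 1, 'text': 'root'}, {'id': 2, 'text': 'child'}]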
class BiliBiliBangumiIE(InfoExtractor):
@@ -516,11 +477,8 @@ def _entries(self, list_id):
count, max_count = 0, None
for page_num in itertools.count(1):
data = self._parse_json(
self._download_webpage(
self._API_URL % (list_id, page_num), list_id,
note='Downloading page %d' % page_num),
list_id)['data']
data = self._download_json(
self._API_URL % (list_id, page_num), list_id, note=f'Downloading page {page_num}')['data']
max_count = max_count or try_get(data, lambda x: x['page']['count'])
@@ -583,11 +541,11 @@ def _entries(self, category, subcategory, query):
}
if category not in rid_map:
raise ExtractorError('The supplied category, %s, is not supported. List of supported categories: %s' % (category, list(rid_map.keys())))
raise ExtractorError(
f'The category {category} isn\'t supported. Supported categories: {list(rid_map.keys())}')
if subcategory not in rid_map[category]:
raise ExtractorError('The subcategory, %s, isn\'t supported for this category. Supported subcategories: %s' % (subcategory, list(rid_map[category].keys())))
raise ExtractorError(
f'The subcategory {subcategory} isn\'t supported for this category. Supported subcategories: {list(rid_map[category].keys())}')
rid_value = rid_map[category][subcategory]
api_url = 'https://api.bilibili.com/x/web-interface/newlist?rid=%d&type=1&ps=20&jsonp=jsonp' % rid_value
@@ -611,44 +569,29 @@ def _real_extract(self, url):
class BiliBiliSearchIE(SearchInfoExtractor):
IE_DESC = 'Bilibili video search, "bilisearch" keyword'
IE_DESC = 'Bilibili video search'
_MAX_RESULTS = 100000
_SEARCH_KEY = 'bilisearch'
MAX_NUMBER_OF_RESULTS = 1000
def _get_n_results(self, query, n):
"""Get a specified number of results for a query"""
entries = []
pageNumber = 0
while True:
pageNumber += 1
# FIXME
api_url = 'https://api.bilibili.com/x/web-interface/search/type?context=&page=%s&order=pubdate&keyword=%s&duration=0&tids_2=&__refresh__=true&search_type=video&tids=0&highlight=1' % (pageNumber, query)
json_str = self._download_webpage(
api_url, "None", query={"Search_key": query},
note='Extracting results from page %s' % pageNumber)
data = json.loads(json_str)['data']
# FIXME: this is hideous
if "result" not in data:
return {
'_type': 'playlist',
'id': query,
'entries': entries[:n]
}
videos = data['result']
def _search_results(self, query):
for page_num in itertools.count(1):
videos = self._download_json(
'https://api.bilibili.com/x/web-interface/search/type', query,
note=f'Extracting results from page {page_num}', query={
'Search_key': query,
'keyword': query,
'page': page_num,
'context': '',
'order': 'pubdate',
'duration': 0,
'tids_2': '',
'__refresh__': 'true',
'search_type': 'video',
'tids': 0,
'highlight': 1,
})['data'].get('result') or []
for video in videos:
e = self.url_result(video['arcurl'], 'BiliBili', compat_str(video['aid']))
entries.append(e)
if(len(entries) >= n or len(videos) >= BiliBiliSearchIE.MAX_NUMBER_OF_RESULTS):
return {
'_type': 'playlist',
'id': query,
'entries': entries[:n]
}
yield self.url_result(video['arcurl'], 'BiliBili', str(video['aid']))
class BilibiliAudioBaseIE(InfoExtractor):

View File

@@ -0,0 +1,54 @@
# coding: utf-8
from __future__ import unicode_literals
import re
from ..utils import (
mimetype2ext,
parse_duration,
parse_qs,
str_or_none,
traverse_obj,
)
from .common import InfoExtractor
class BloggerIE(InfoExtractor):
IE_NAME = 'blogger.com'
_VALID_URL = r'https?://(?:www\.)?blogger\.com/video\.g\?token=(?P<id>.+)'
_VALID_EMBED = r'''<iframe[^>]+src=["']((?:https?:)?//(?:www\.)?blogger\.com/video\.g\?token=[^"']+)["']'''
_TESTS = [{
'url': 'https://www.blogger.com/video.g?token=AD6v5dzEe9hfcARr5Hlq1WTkYy6t-fXH3BBahVhGvVHe5szdEUBEloSEDSTA8-b111089KbfWuBvTN7fnbxMtymsHhXAXwVvyzHH4Qch2cfLQdGxKQrrEuFpC1amSl_9GuLWODjPgw',
'md5': 'f1bc19b6ea1b0fd1d81e84ca9ec467ac',
'info_dict': {
'id': 'BLOGGER-video-3c740e3a49197e16-796',
'title': 'BLOGGER-video-3c740e3a49197e16-796',
'ext': 'mp4',
'thumbnail': r're:^https?://.*',
'duration': 76.068,
}
}]
@staticmethod
def _extract_urls(webpage):
return re.findall(BloggerIE._VALID_EMBED, webpage)
def _real_extract(self, url):
token_id = self._match_id(url)
webpage = self._download_webpage(url, token_id)
data_json = self._search_regex(r'var\s+VIDEO_CONFIG\s*=\s*(\{.*)', webpage, 'JSON data')
data = self._parse_json(data_json.encode('utf-8').decode('unicode_escape'), token_id)
streams = data['streams']
formats = [{
'ext': mimetype2ext(traverse_obj(parse_qs(stream['play_url']), ('mime', 0))),
'url': stream['play_url'],
'format_id': str_or_none(stream.get('format_id')),
} for stream in streams]
return {
'id': data.get('iframe_id', token_id),
'title': data.get('iframe_id', token_id),
'formats': formats,
'thumbnail': data.get('thumbnail'),
'duration': parse_duration(traverse_obj(parse_qs(streams[0]['play_url']), ('dur', 0))),
}
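The Blogger extractor above finds the inline VIDEO_CONFIG object as a JavaScript-escaped string, which is why it round-trips through encode('utf-8').decode('unicode_escape') before parsing. A minimal sketch of that unescaping step, using a hypothetical escaped blob rather than real Blogger output:

import json

raw = r'{\x22streams\x22: [{\x22play_url\x22: \x22https://example.invalid/v.mp4?mime=video/mp4\x22}]}'
data = json.loads(raw.encode('utf-8').decode('unicode_escape'))  # \x22 escapes become real quotes
print(data['streams'][0]['play_url'])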

View File

@@ -0,0 +1,39 @@
from __future__ import unicode_literals
from .common import InfoExtractor
class BreitBartIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?breitbart\.com/videos/v/(?P<id>[^/]+)'
_TESTS = [{
'url': 'https://www.breitbart.com/videos/v/5cOz1yup/?pl=Ij6NDOji',
'md5': '0aa6d1d6e183ac5ca09207fe49f17ade',
'info_dict': {
'id': '5cOz1yup',
'ext': 'mp4',
'title': 'Watch \u2013 Clyburn: Statues in Congress Have to Go Because they Are Honoring Slavery',
'description': 'md5:bac35eb0256d1cb17f517f54c79404d5',
'thumbnail': 'https://cdn.jwplayer.com/thumbs/5cOz1yup-1920.jpg',
'age_limit': 0,
}
}, {
'url': 'https://www.breitbart.com/videos/v/eaiZjVOn/',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
formats = self._extract_m3u8_formats(f'https://cdn.jwplayer.com/manifests/{video_id}.m3u8', video_id, ext='mp4')
self._sort_formats(formats)
return {
'id': video_id,
'title': self._og_search_title(
webpage, default=None) or self._html_search_regex(
r'(?s)<title>(.*?)</title>', webpage, 'video title'),
'description': self._og_search_description(webpage),
'thumbnail': self._og_search_thumbnail(webpage),
'age_limit': self._rta_search(webpage),
'formats': formats
}

View File

@@ -0,0 +1,34 @@
# coding: utf-8
from .common import InfoExtractor
class CableAVIE(InfoExtractor):
_VALID_URL = r'https://cableav\.tv/(?P<id>[a-zA-Z0-9]+)'
_TESTS = [{
'url': 'https://cableav.tv/lS4iR9lWjN8/',
'md5': '7e3fe5e49d61c4233b7f5b0f69b15e18',
'info_dict': {
'id': 'lS4iR9lWjN8',
'ext': 'mp4',
'title': '國產麻豆AV 叮叮映畫 DDF001 情欲小說家 - CableAV',
'description': '國產AV 480p, 720p 国产麻豆AV 叮叮映画 DDF001 情欲小说家',
'thumbnail': r're:^https?://.*\.jpg$',
}
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_url = self._og_search_video_url(webpage, secure=False)
formats = self._extract_m3u8_formats(video_url, video_id, 'mp4')
self._sort_formats(formats)
return {
'id': video_id,
'title': self._og_search_title(webpage),
'description': self._og_search_description(webpage),
'thumbnail': self._og_search_thumbnail(webpage),
'formats': formats,
}

View File

@@ -0,0 +1,98 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
clean_html,
dict_get,
try_get,
unified_strdate,
)
class CanalAlphaIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?canalalpha\.ch/play/[^/]+/[^/]+/(?P<id>\d+)/?.*'
_TESTS = [{
'url': 'https://www.canalalpha.ch/play/le-journal/episode/24520/jeudi-28-octobre-2021',
'info_dict': {
'id': '24520',
'ext': 'mp4',
'title': 'Jeudi 28 octobre 2021',
'description': 'md5:d30c6c3e53f8ad40d405379601973b30',
'thumbnail': 'https://static.canalalpha.ch/poster/journal/journal_20211028.jpg',
'upload_date': '20211028',
'duration': 1125,
},
'params': {'skip_download': True}
}, {
'url': 'https://www.canalalpha.ch/play/le-journal/topic/24512/la-poste-fait-de-neuchatel-un-pole-cryptographique',
'info_dict': {
'id': '24512',
'ext': 'mp4',
'title': 'La Poste fait de Neuchâtel un pôle cryptographique',
'description': 'md5:4ba63ae78a0974d1a53d6703b6e1dedf',
'thumbnail': 'https://static.canalalpha.ch/poster/news/news_39712.jpg',
'upload_date': '20211028',
'duration': 138,
},
'params': {'skip_download': True}
}, {
'url': 'https://www.canalalpha.ch/play/eureka/episode/24484/ces-innovations-qui-veulent-rendre-lagriculture-plus-durable',
'info_dict': {
'id': '24484',
'ext': 'mp4',
'title': 'Ces innovations qui veulent rendre lagriculture plus durable',
'description': 'md5:3de3f151180684621e85be7c10e4e613',
'thumbnail': 'https://static.canalalpha.ch/poster/magazine/magazine_10236.jpg',
'upload_date': '20211026',
'duration': 360,
},
'params': {'skip_download': True}
}, {
'url': 'https://www.canalalpha.ch/play/avec-le-temps/episode/23516/redonner-de-leclat-grace-au-polissage',
'info_dict': {
'id': '23516',
'ext': 'mp4',
'title': 'Redonner de l\'éclat grâce au polissage',
'description': 'md5:0d8fbcda1a5a4d6f6daa3165402177e1',
'thumbnail': 'https://static.canalalpha.ch/poster/magazine/magazine_9990.png',
'upload_date': '20210726',
'duration': 360,
},
'params': {'skip_download': True}
}]
def _real_extract(self, url):
id = self._match_id(url)
webpage = self._download_webpage(url, id)
data_json = self._parse_json(self._search_regex(
r'window\.__SERVER_STATE__\s?=\s?({(?:(?!};)[^"]|"([^"]|\\")*")+})\s?;',
webpage, 'data_json'), id)['1']['data']['data']
manifests = try_get(data_json, lambda x: x['video']['manifests'], expected_type=dict) or {}
subtitles = {}
formats = [{
'url': video['$url'],
'ext': 'mp4',
'width': try_get(video, lambda x: x['res']['width'], expected_type=int),
'height': try_get(video, lambda x: x['res']['height'], expected_type=int),
} for video in try_get(data_json, lambda x: x['video']['mp4'], expected_type=list) or [] if video.get('$url')]
if manifests.get('hls'):
m3u8_frmts, m3u8_subs = self._parse_m3u8_formats_and_subtitles(manifests['hls'], id)
formats.extend(m3u8_frmts)
subtitles = self._merge_subtitles(subtitles, m3u8_subs)
if manifests.get('dash'):
dash_frmts, dash_subs = self._parse_mpd_formats_and_subtitles(manifests['dash'], id)
formats.extend(dash_frmts)
subtitles = self._merge_subtitles(subtitles, dash_subs)
self._sort_formats(formats)
return {
'id': id,
'title': data_json.get('title').strip(),
'description': clean_html(dict_get(data_json, ('longDesc', 'shortDesc'))),
'thumbnail': data_json.get('poster'),
'upload_date': unified_strdate(dict_get(data_json, ('webPublishAt', 'featuredAt', 'diffusionDate'))),
'duration': try_get(data_json, lambda x: x['video']['duration'], expected_type=int),
'formats': formats,
'subtitles': subtitles,
}

View File

@@ -1,4 +1,5 @@
from __future__ import unicode_literals
import json
from .common import InfoExtractor
@@ -41,9 +42,9 @@ class CanvasIE(InfoExtractor):
_GEO_BYPASS = False
_HLS_ENTRY_PROTOCOLS_MAP = {
'HLS': 'm3u8_native',
'HLS_AES': 'm3u8',
'HLS_AES': 'm3u8_native',
}
_REST_API_BASE = 'https://media-services-public.vrt.be/vualto-video-aggregator-web/rest/external/v1'
_REST_API_BASE = 'https://media-services-public.vrt.be/vualto-video-aggregator-web/rest/external/v2'
def _real_extract(self, url):
mobj = self._match_valid_url(url)
@@ -59,16 +60,21 @@ def _real_extract(self, url):
# New API endpoint
if not data:
vrtnutoken = self._download_json('https://token.vrt.be/refreshtoken',
video_id, note='refreshtoken: Retrieve vrtnutoken',
errnote='refreshtoken failed')['vrtnutoken']
headers = self.geo_verification_headers()
headers.update({'Content-Type': 'application/json'})
token = self._download_json(
headers.update({'Content-Type': 'application/json; charset=utf-8'})
vrtPlayerToken = self._download_json(
'%s/tokens' % self._REST_API_BASE, video_id,
'Downloading token', data=b'', headers=headers)['vrtPlayerToken']
'Downloading token', headers=headers, data=json.dumps({
'identityToken': vrtnutoken
}).encode('utf-8'))['vrtPlayerToken']
data = self._download_json(
'%s/videos/%s' % (self._REST_API_BASE, video_id),
video_id, 'Downloading video JSON', query={
'vrtPlayerToken': token,
'client': '%s@PROD' % site_id,
'vrtPlayerToken': vrtPlayerToken,
'client': 'null',
}, expected_status=400)
if not data.get('title'):
code = data.get('code')
@@ -264,7 +270,7 @@ class VrtNUIE(GigyaBaseIE):
'expected_warnings': ['Unable to download asset JSON', 'is not a supported codec', 'Unknown MIME type'],
}]
_NETRC_MACHINE = 'vrtnu'
_APIKEY = '3_qhEcPa5JGFROVwu5SWKqJ4mVOIkwlFNMSKwzPDAh8QZOtHqu6L4nD5Q7lk0eXOOG'
_APIKEY = '3_0Z2HujMtiWq_pkAjgnS2Md2E11a1AwZjYiBETtwNE-EoEHDINgtnvcAOpNgmrVGy'
_CONTEXT_ID = 'R3595707040'
def _real_initialize(self):
@@ -275,16 +281,13 @@ def _login(self):
if username is None:
return
auth_info = self._download_json(
'https://accounts.vrt.be/accounts.login', None,
note='Login data', errnote='Could not get Login data',
headers={}, data=urlencode_postdata({
'loginID': username,
'password': password,
'sessionExpiration': '-2',
'APIKey': self._APIKEY,
'targetEnv': 'jssdk',
}))
auth_info = self._gigya_login({
'APIKey': self._APIKEY,
'targetEnv': 'jssdk',
'loginID': username,
'password': password,
'authMode': 'cookie',
})
if auth_info.get('errorDetails'):
raise ExtractorError('Unable to login: VrtNU said: ' + auth_info.get('errorDetails'), expected=True)
@@ -301,14 +304,15 @@ def _login(self):
'UID': auth_info['UID'],
'UIDSignature': auth_info['UIDSignature'],
'signatureTimestamp': auth_info['signatureTimestamp'],
'client_id': 'vrtnu-site',
'_csrf': self._get_cookies('https://login.vrt.be').get('OIDCXSRF').value,
}
self._request_webpage(
'https://login.vrt.be/perform_login',
None, note='Requesting a token', errnote='Could not get a token',
headers={}, data=urlencode_postdata(post_data))
None, note='Performing login', errnote='perform login failed',
headers={}, query={
'client_id': 'vrtnu-site'
}, data=urlencode_postdata(post_data))
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:

View File

@@ -2,6 +2,9 @@
from __future__ import unicode_literals
import re
import json
import base64
import time
from .common import InfoExtractor
from ..compat import (
@@ -244,37 +247,96 @@ class CBCGemIE(InfoExtractor):
'params': {'format': 'bv'},
'skip': 'Geo-restricted to Canada',
}]
_API_BASE = 'https://services.radio-canada.ca/ott/cbc-api/v2/assets/'
_GEO_COUNTRIES = ['CA']
_TOKEN_API_KEY = '3f4beddd-2061-49b0-ae80-6f1f2ed65b37'
_NETRC_MACHINE = 'cbcgem'
_claims_token = None
def _new_claims_token(self, email, password):
data = json.dumps({
'email': email,
'password': password,
}).encode()
headers = {'content-type': 'application/json'}
query = {'apikey': self._TOKEN_API_KEY}
resp = self._download_json('https://api.loginradius.com/identity/v2/auth/login',
None, data=data, headers=headers, query=query)
access_token = resp['access_token']
query = {
'access_token': access_token,
'apikey': self._TOKEN_API_KEY,
'jwtapp': 'jwt',
}
resp = self._download_json('https://cloud-api.loginradius.com/sso/jwt/api/token',
None, headers=headers, query=query)
sig = resp['signature']
data = json.dumps({'jwt': sig}).encode()
headers = {'content-type': 'application/json', 'ott-device-type': 'web'}
resp = self._download_json('https://services.radio-canada.ca/ott/cbc-api/v2/token',
None, data=data, headers=headers)
cbc_access_token = resp['accessToken']
headers = {'content-type': 'application/json', 'ott-device-type': 'web', 'ott-access-token': cbc_access_token}
resp = self._download_json('https://services.radio-canada.ca/ott/cbc-api/v2/profile',
None, headers=headers)
return resp['claimsToken']
def _get_claims_token_expiry(self):
# Token is a JWT
# JWT is decoded here and 'exp' field is extracted
# It is a Unix timestamp for when the token expires
b64_data = self._claims_token.split('.')[1]
data = base64.urlsafe_b64decode(b64_data + "==")
return json.loads(data)['exp']
def claims_token_expired(self):
exp = self._get_claims_token_expiry()
if exp - time.time() < 10:
# It will expire in less than 10 seconds, or has already expired
return True
return False
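The CBC Gem login flow above caches a claims token and treats it as stale shortly before its JWT expiry. A standalone sketch of that expiry check (the same idea as the methods above, trimmed down for illustration):

import base64, json, time

def jwt_expiry(token):
    # JWTs are header.payload.signature; the payload is base64url-encoded JSON
    # carrying the expiry as a Unix timestamp in 'exp'. Appending '==' covers
    # any missing padding, as the code above does.
    payload = token.split('.')[1]
    return json.loads(base64.urlsafe_b64decode(payload + '=='))['exp']

def is_stale(token, margin=10):
    # Consider the token unusable once it expires within `margin` seconds
    return jwt_expiry(token) - time.time() < margin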
def claims_token_valid(self):
return self._claims_token is not None and not self.claims_token_expired()
def _get_claims_token(self, email, password):
if not self.claims_token_valid():
self._claims_token = self._new_claims_token(email, password)
self._downloader.cache.store(self._NETRC_MACHINE, 'claims_token', self._claims_token)
return self._claims_token
def _real_initialize(self):
if self.claims_token_valid():
return
self._claims_token = self._downloader.cache.load(self._NETRC_MACHINE, 'claims_token')
def _real_extract(self, url):
video_id = self._match_id(url)
video_info = self._download_json(self._API_BASE + video_id, video_id)
video_info = self._download_json('https://services.radio-canada.ca/ott/cbc-api/v2/assets/' + video_id, video_id)
last_error = None
attempt = -1
retries = self.get_param('extractor_retries', 15)
while attempt < retries:
attempt += 1
if last_error:
self.report_warning('%s. Retrying ...' % last_error)
m3u8_info = self._download_json(
video_info['playSession']['url'], video_id,
note='Downloading JSON metadata%s' % f' (attempt {attempt})')
m3u8_url = m3u8_info.get('url')
if m3u8_url:
break
elif m3u8_info.get('errorCode') == 1:
self.raise_geo_restricted(countries=['CA'])
else:
last_error = f'{self.IE_NAME} said: {m3u8_info.get("errorCode")} - {m3u8_info.get("message")}'
# 35 means media unavailable, but retries work
if m3u8_info.get('errorCode') != 35 or attempt >= retries:
raise ExtractorError(last_error)
email, password = self._get_login_info()
if email and password:
claims_token = self._get_claims_token(email, password)
headers = {'x-claims-token': claims_token}
else:
headers = {}
m3u8_info = self._download_json(video_info['playSession']['url'], video_id, headers=headers)
m3u8_url = m3u8_info.get('url')
if m3u8_info.get('errorCode') == 1:
self.raise_geo_restricted(countries=['CA'])
elif m3u8_info.get('errorCode') == 35:
self.raise_login_required(method='password')
elif m3u8_info.get('errorCode') != 0:
raise ExtractorError(f'{self.IE_NAME} said: {m3u8_info.get("errorCode")} - {m3u8_info.get("message")}')
formats = self._extract_m3u8_formats(m3u8_url, video_id, m3u8_id='hls')
self._remove_duplicate_formats(formats)
for i, format in enumerate(formats):
for format in formats:
if format.get('vcodec') == 'none':
if format.get('ext') is None:
format['ext'] = 'm4a'
@@ -328,7 +390,8 @@ def _real_extract(self, url):
show = match.group('show')
show_info = self._download_json(self._API_BASE + show, season_id)
season = int(match.group('season'))
season_info = try_get(show_info, lambda x: x['seasons'][season - 1])
season_info = next((s for s in show_info['seasons'] if s.get('season') == season), None)
if season_info is None:
raise ExtractorError(f'Couldn\'t find season {season} of {show}')
@@ -377,7 +440,7 @@ def _real_extract(self, url):
class CBCGemLiveIE(InfoExtractor):
IE_NAME = 'gem.cbc.ca:live'
_VALID_URL = r'https?://gem\.cbc\.ca/live/(?P<id>[0-9]{12})'
_VALID_URL = r'https?://gem\.cbc\.ca/live/(?P<id>\d+)'
_TEST = {
'url': 'https://gem.cbc.ca/live/920604739687',
'info_dict': {
@@ -396,21 +459,21 @@ class CBCGemLiveIE(InfoExtractor):
# It's unclear where the chars at the end come from, but they appear to be
# constant. Might need updating in the future.
_API = 'https://tpfeed.cbc.ca/f/ExhSPC/t_t3UKJR6MAT'
# There are two URLs, some livestreams are in one, and some
# in the other. The JSON schema is the same for both.
_API_URLS = ['https://tpfeed.cbc.ca/f/ExhSPC/t_t3UKJR6MAT', 'https://tpfeed.cbc.ca/f/ExhSPC/FNiv9xQx_BnT']
def _real_extract(self, url):
video_id = self._match_id(url)
live_info = self._download_json(self._API, video_id)['entries']
video_info = None
for stream in live_info:
if stream.get('guid') == video_id:
video_info = stream
if video_info is None:
raise ExtractorError(
'Couldn\'t find video metadata, maybe this livestream is now offline',
expected=True)
for api_url in self._API_URLS:
video_info = next((
stream for stream in self._download_json(api_url, video_id)['entries']
if stream.get('guid') == video_id), None)
if video_info:
break
else:
raise ExtractorError('Couldn\'t find video metadata, maybe this livestream is now offline', expected=True)
return {
'_type': 'url_transparent',

View File

@@ -20,22 +20,8 @@
class CeskaTelevizeIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?ceskatelevize\.cz/ivysilani/(?:[^/?#&]+/)*(?P<id>[^/#?]+)'
_VALID_URL = r'https?://(?:www\.)?ceskatelevize\.cz/(?:ivysilani|porady)/(?:[^/?#&]+/)*(?P<id>[^/#?]+)'
_TESTS = [{
'url': 'http://www.ceskatelevize.cz/ivysilani/ivysilani/10441294653-hyde-park-civilizace/214411058091220',
'info_dict': {
'id': '61924494877246241',
'ext': 'mp4',
'title': 'Hyde Park Civilizace: Život v Grónsku',
'description': 'md5:3fec8f6bb497be5cdb0c9e8781076626',
'thumbnail': r're:^https?://.*\.jpg',
'duration': 3350,
},
'params': {
# m3u8 download
'skip_download': True,
},
}, {
'url': 'http://www.ceskatelevize.cz/ivysilani/10441294653-hyde-park-civilizace/215411058090502/bonus/20641-bonus-01-en',
'info_dict': {
'id': '61924494877028507',
@@ -66,12 +52,58 @@ class CeskaTelevizeIE(InfoExtractor):
}, {
'url': 'http://www.ceskatelevize.cz/ivysilani/embed/iFramePlayer.php?hash=d6a3e1370d2e4fa76296b90bad4dfc19673b641e&IDEC=217 562 22150/0004&channelID=1&width=100%25',
'only_matching': True,
}, {
# video with 18+ caution trailer
'url': 'http://www.ceskatelevize.cz/porady/10520528904-queer/215562210900007-bogotart/',
'info_dict': {
'id': '215562210900007-bogotart',
'title': 'Queer: Bogotart',
'description': 'Hlavní město Kolumbie v doprovodu queer umělců. Vroucí svět plný vášně, sebevědomí, ale i násilí a bolesti. Připravil Peter Serge Butko',
},
'playlist': [{
'info_dict': {
'id': '61924494877311053',
'ext': 'mp4',
'title': 'Queer: Bogotart (Varování 18+)',
'duration': 11.9,
},
}, {
'info_dict': {
'id': '61924494877068022',
'ext': 'mp4',
'title': 'Queer: Bogotart (Queer)',
'thumbnail': r're:^https?://.*\.jpg',
'duration': 1558.3,
},
}],
'params': {
# m3u8 download
'skip_download': True,
},
}, {
# iframe embed
'url': 'http://www.ceskatelevize.cz/porady/10614999031-neviditelni/21251212048/',
'only_matching': True,
}]
def _real_extract(self, url):
playlist_id = self._match_id(url)
parsed_url = compat_urllib_parse_urlparse(url)
webpage = self._download_webpage(url, playlist_id)
site_name = self._og_search_property('site_name', webpage, fatal=False, default=None)
playlist_title = self._og_search_title(webpage, default=None)
if site_name and playlist_title:
playlist_title = playlist_title.replace(f'{site_name}', '', 1)
playlist_description = self._og_search_description(webpage, default=None)
if playlist_description:
playlist_description = playlist_description.replace('\xa0', ' ')
if parsed_url.path.startswith('/porady/'):
refer_url = update_url_query(unescapeHTML(self._search_regex(
(r'<span[^>]*\bdata-url=(["\'])(?P<url>(?:(?!\1).)+)\1',
r'<iframe[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//(?:www\.)?ceskatelevize\.cz/ivysilani/embed/iFramePlayer\.php.*?)\1'),
webpage, 'iframe player url', group='url')), query={'autoStart': 'true'})
webpage = self._download_webpage(refer_url, playlist_id)
NOT_AVAILABLE_STRING = 'This content is not available at your territory due to limited copyright.'
if '%s</p>' % NOT_AVAILABLE_STRING in webpage:
@@ -100,7 +132,7 @@ def _real_extract(self, url):
data = {
'playlist[0][type]': type_,
'playlist[0][id]': episode_id,
'requestUrl': compat_urllib_parse_urlparse(url).path,
'requestUrl': parsed_url.path,
'requestSource': 'iVysilani',
}
@@ -108,7 +140,7 @@ def _real_extract(self, url):
for user_agent in (None, USER_AGENTS['Safari']):
req = sanitized_Request(
'https://www.ceskatelevize.cz/ivysilani/ajax/get-client-playlist',
'https://www.ceskatelevize.cz/ivysilani/ajax/get-client-playlist/',
data=urlencode_postdata(data))
req.add_header('Content-type', 'application/x-www-form-urlencoded')
@@ -130,9 +162,6 @@ def _real_extract(self, url):
req = sanitized_Request(compat_urllib_parse_unquote(playlist_url))
req.add_header('Referer', url)
playlist_title = self._og_search_title(webpage, default=None)
playlist_description = self._og_search_description(webpage, default=None)
playlist = self._download_json(req, playlist_id, fatal=False)
if not playlist:
continue
@@ -237,54 +266,3 @@ def _fix_subtitle(subtitle):
yield line
return '\r\n'.join(_fix_subtitle(subtitles))
class CeskaTelevizePoradyIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?ceskatelevize\.cz/porady/(?:[^/?#&]+/)*(?P<id>[^/#?]+)'
_TESTS = [{
# video with 18+ caution trailer
'url': 'http://www.ceskatelevize.cz/porady/10520528904-queer/215562210900007-bogotart/',
'info_dict': {
'id': '215562210900007-bogotart',
'title': 'Queer: Bogotart',
'description': 'Alternativní průvodce současným queer světem',
},
'playlist': [{
'info_dict': {
'id': '61924494876844842',
'ext': 'mp4',
'title': 'Queer: Bogotart (Varování 18+)',
'duration': 10.2,
},
}, {
'info_dict': {
'id': '61924494877068022',
'ext': 'mp4',
'title': 'Queer: Bogotart (Queer)',
'thumbnail': r're:^https?://.*\.jpg',
'duration': 1558.3,
},
}],
'params': {
# m3u8 download
'skip_download': True,
},
}, {
# iframe embed
'url': 'http://www.ceskatelevize.cz/porady/10614999031-neviditelni/21251212048/',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
data_url = update_url_query(unescapeHTML(self._search_regex(
(r'<span[^>]*\bdata-url=(["\'])(?P<url>(?:(?!\1).)+)\1',
r'<iframe[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//(?:www\.)?ceskatelevize\.cz/ivysilani/embed/iFramePlayer\.php.*?)\1'),
webpage, 'iframe player url', group='url')), query={
'autoStart': 'true',
})
return self.url_result(data_url, ie=CeskaTelevizeIE.ie_key())

View File

@@ -67,7 +67,7 @@ def _get_post(self, id, post_data):
class ChingariIE(ChingariBaseIE):
_VALID_URL = r'(?:https?://)(?:www\.)?chingari\.io/share/post\?id=(?P<id>[^&/#?]+)'
_VALID_URL = r'https?://(?:www\.)?chingari\.io/share/post\?id=(?P<id>[^&/#?]+)'
_TESTS = [{
'url': 'https://chingari.io/share/post?id=612f8f4ce1dc57090e8a7beb',
'info_dict': {
@@ -102,7 +102,7 @@ def _real_extract(self, url):
class ChingariUserIE(ChingariBaseIE):
_VALID_URL = r'(?:https?://)(?:www\.)?chingari\.io/(?!share/post)(?P<id>[^/?]+)'
_VALID_URL = r'https?://(?:www\.)?chingari\.io/(?!share/post)(?P<id>[^/?]+)'
_TESTS = [{
'url': 'https://chingari.io/dada1023',
'playlist_mincount': 3,

View File

@@ -2,8 +2,10 @@
from __future__ import unicode_literals
import base64
import collections
import datetime
import hashlib
import itertools
import json
import netrc
import os
@@ -53,6 +55,7 @@
GeoRestrictedError,
GeoUtils,
int_or_none,
join_nonempty,
js_to_json,
JSON_LD_RE,
mimetype2ext,
@@ -73,6 +76,7 @@
strip_or_none,
traverse_obj,
unescapeHTML,
UnsupportedError,
unified_strdate,
unified_timestamp,
update_Request,
@@ -146,6 +150,8 @@ class InfoExtractor(object):
* width Width of the video, if known
* height Height of the video, if known
* resolution Textual description of width and height
* dynamic_range The dynamic range of the video. One of:
"SDR" (None), "HDR10", "HDR10+, "HDR12", "HLG, "DV"
* tbr Average bitrate of audio and video in KBit/s
* abr Average audio bitrate in KBit/s
* acodec Name of the audio codec in use
@@ -232,7 +238,6 @@ class InfoExtractor(object):
* "resolution" (optional, string "{width}x{height}",
deprecated)
* "filesize" (optional, int)
* "_test_url" (optional, bool) - If true, test the URL
thumbnail: Full URL to a video thumbnail image.
description: Full video description.
uploader: Full name of the video uploader.
@@ -338,6 +343,7 @@ class InfoExtractor(object):
series, programme or podcast:
series: Title of the series or programme the video episode belongs to.
series_id: Id of the series or programme the video episode belongs to, as a unicode string.
season: Title of the season the video episode belongs to.
season_number: Number of the season the video episode belongs to, as an integer.
season_id: Id of the season the video episode belongs to, as a unicode string.
@@ -438,15 +444,17 @@ class InfoExtractor(object):
_WORKING = True
_LOGIN_HINTS = {
'any': 'Use --cookies, --username and --password or --netrc to provide account credentials',
'any': 'Use --cookies, --username and --password, or --netrc to provide account credentials',
'cookies': (
'Use --cookies for the authentication. '
'See https://github.com/ytdl-org/youtube-dl#how-do-i-pass-cookies-to-youtube-dl for how to pass cookies'),
'password': 'Use --username and --password or --netrc to provide account credentials',
'Use --cookies-from-browser or --cookies for the authentication. '
'See https://github.com/ytdl-org/youtube-dl#how-do-i-pass-cookies-to-youtube-dl for how to manually pass cookies'),
'password': 'Use --username and --password, or --netrc to provide account credentials',
}
def __init__(self, downloader=None):
"""Constructor. Receives an optional downloader."""
"""Constructor. Receives an optional downloader (a YoutubeDL instance).
If a downloader is not passed during initialization,
it must be set using "set_downloader()" before "extract()" is called"""
self._ready = False
self._x_forwarded_for_ip = None
self._printed_messages = set()
@@ -600,10 +608,19 @@ def extract(self, url):
if self.__maybe_fake_ip_and_retry(e.countries):
continue
raise
except UnsupportedError:
raise
except ExtractorError as e:
video_id = e.video_id or self.get_temp_id(url)
raise ExtractorError(
e.msg, video_id=video_id, ie=self.IE_NAME, tb=e.traceback, expected=e.expected, cause=e.cause)
kwargs = {
'video_id': e.video_id or self.get_temp_id(url),
'ie': self.IE_NAME,
'tb': e.traceback,
'expected': e.expected,
'cause': e.cause
}
if hasattr(e, 'countries'):
kwargs['countries'] = e.countries
raise type(e)(e.msg, **kwargs)
except compat_http_client.IncompleteRead as e:
raise ExtractorError('A network error has occurred.', cause=e, expected=True, video_id=self.get_temp_id(url))
except (KeyError, StopIteration) as e:
@@ -662,7 +679,7 @@ def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fa
See _download_webpage docstring for arguments specification.
"""
if not self._downloader._first_webpage_request:
sleep_interval = float_or_none(self.get_param('sleep_interval_requests')) or 0
sleep_interval = self.get_param('sleep_interval_requests') or 0
if sleep_interval > 0:
self.to_screen('Sleeping %s seconds ...' % sleep_interval)
time.sleep(sleep_interval)
@@ -1062,7 +1079,8 @@ def report_login(self):
def raise_login_required(
self, msg='This video is only available for registered users',
metadata_available=False, method='any'):
if metadata_available and self.get_param('ignore_no_formats_error'):
if metadata_available and (
self.get_param('ignore_no_formats_error') or self.get_param('wait_for_video')):
self.report_warning(msg)
if method is not None:
msg = '%s. %s' % (msg, self._LOGIN_HINTS[method])
@@ -1071,13 +1089,15 @@ def raise_login_required(
def raise_geo_restricted(
self, msg='This video is not available from your location due to geo restriction',
countries=None, metadata_available=False):
if metadata_available and self.get_param('ignore_no_formats_error'):
if metadata_available and (
self.get_param('ignore_no_formats_error') or self.get_param('wait_for_video')):
self.report_warning(msg)
else:
raise GeoRestrictedError(msg, countries=countries)
def raise_no_formats(self, msg, expected=False, video_id=None):
if expected and self.get_param('ignore_no_formats_error'):
if expected and (
self.get_param('ignore_no_formats_error') or self.get_param('wait_for_video')):
self.report_warning(msg, video_id)
elif isinstance(msg, ExtractorError):
raise msg
@@ -1086,12 +1106,13 @@ def raise_no_formats(self, msg, expected=False, video_id=None):
# Methods for following #608
@staticmethod
def url_result(url, ie=None, video_id=None, video_title=None):
def url_result(url, ie=None, video_id=None, video_title=None, **kwargs):
"""Returns a URL that points to a page that should be processed"""
# TODO: ie should be the class used for getting the info
video_info = {'_type': 'url',
'url': url,
'ie_key': ie}
video_info.update(kwargs)
if video_id is not None:
video_info['id'] = video_id
if video_title is not None:
@@ -1134,7 +1155,7 @@ def _search_regex(self, pattern, string, name, default=NO_DEFAULT, fatal=True, f
if mobj:
break
_name = self._downloader._color_text(name, 'blue')
_name = self._downloader._format_err(name, self._downloader.Styles.EMPHASIS)
if mobj:
if group is None:
@@ -1434,6 +1455,9 @@ def extract_video_object(e):
item_type = e.get('@type')
if expected_type is not None and expected_type != item_type:
continue
rating = traverse_obj(e, ('aggregateRating', 'ratingValue'), expected_type=float_or_none)
if rating is not None:
info['average_rating'] = rating
if item_type in ('TVEpisode', 'Episode'):
episode_name = unescapeHTML(e.get('name'))
info.update({
@@ -1480,6 +1504,13 @@ def extract_video_object(e):
break
return dict((k, v) for k, v in info.items() if v is not None)
def _search_nextjs_data(self, webpage, video_id, **kw):
return self._parse_json(
self._search_regex(
r'(?s)<script[^>]+id=[\'"]__NEXT_DATA__[\'"][^>]*>([^<]+)</script>',
webpage, 'next.js data', **kw),
video_id, **kw)
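The new _search_nextjs_data helper above pulls the JSON blob that Next.js pages embed in their __NEXT_DATA__ script tag. A hypothetical call from an extractor (the props/pageProps path is just the usual Next.js layout, not something this helper guarantees):

# inside some extractor's _real_extract (illustrative only)
webpage = self._download_webpage(url, video_id)
next_data = self._search_nextjs_data(webpage, video_id)
video = traverse_obj(next_data, ('props', 'pageProps', 'video'))  # exact keys vary per site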
@staticmethod
def _hidden_inputs(html):
html = re.sub(r'<!--(?:(?!<!--).)*-->', '', html)
@@ -1506,19 +1537,21 @@ class FormatSort:
regex = r' *((?P<reverse>\+)?(?P<field>[a-zA-Z0-9_]+)((?P<separator>[~:])(?P<limit>.*?))?)? *$'
default = ('hidden', 'aud_or_vid', 'hasvid', 'ie_pref', 'lang', 'quality',
'res', 'fps', 'codec:vp9.2', 'size', 'br', 'asr',
'proto', 'ext', 'hasaud', 'source', 'format_id') # These must not be aliases
'res', 'fps', 'hdr:12', 'codec:vp9.2', 'size', 'br', 'asr',
'proto', 'ext', 'hasaud', 'source', 'id') # These must not be aliases
ytdl_default = ('hasaud', 'lang', 'quality', 'tbr', 'filesize', 'vbr',
'height', 'width', 'proto', 'vext', 'abr', 'aext',
'fps', 'fs_approx', 'source', 'format_id')
'fps', 'fs_approx', 'source', 'id')
settings = {
'vcodec': {'type': 'ordered', 'regex': True,
'order': ['av0?1', 'vp0?9.2', 'vp0?9', '[hx]265|he?vc?', '[hx]264|avc', 'vp0?8', 'mp4v|h263', 'theora', '', None, 'none']},
'acodec': {'type': 'ordered', 'regex': True,
'order': ['opus', 'vorbis', 'aac', 'mp?4a?', 'mp3', 'e?a?c-?3', 'dts', '', None, 'none']},
'order': ['opus', 'vorbis', 'aac', 'mp?4a?', 'mp3', 'e-?a?c-?3', 'ac-?3', 'dts', '', None, 'none']},
'hdr': {'type': 'ordered', 'regex': True, 'field': 'dynamic_range',
'order': ['dv', '(hdr)?12', r'(hdr)?10\+', '(hdr)?10', 'hlg', '', 'sdr', None]},
'proto': {'type': 'ordered', 'regex': True, 'field': 'protocol',
'order': ['(ht|f)tps', '(ht|f)tp$', 'm3u8.+', '.*dash', 'ws|websocket', '', 'mms|rtsp', 'none', 'f4']},
'order': ['(ht|f)tps', '(ht|f)tp$', 'm3u8.*', '.*dash', 'websocket_frag', 'rtmpe?', '', 'mms|rtsp', 'ws|websocket', 'f4']},
'vext': {'type': 'ordered', 'field': 'video_ext',
'order': ('mp4', 'webm', 'flv', '', 'none'),
'order_free': ('webm', 'mp4', 'flv', '', 'none')},
@@ -1532,8 +1565,8 @@ class FormatSort:
'ie_pref': {'priority': True, 'type': 'extractor'},
'hasvid': {'priority': True, 'field': 'vcodec', 'type': 'boolean', 'not_in_list': ('none',)},
'hasaud': {'field': 'acodec', 'type': 'boolean', 'not_in_list': ('none',)},
'lang': {'convert': 'ignore', 'field': 'language_preference'},
'quality': {'convert': 'float_none', 'default': -1},
'lang': {'convert': 'float', 'field': 'language_preference', 'default': -1},
'quality': {'convert': 'float', 'default': -1},
'filesize': {'convert': 'bytes'},
'fs_approx': {'convert': 'bytes', 'field': 'filesize_approx'},
'id': {'convert': 'string', 'field': 'format_id'},
@@ -1544,7 +1577,7 @@ class FormatSort:
'vbr': {'convert': 'float_none'},
'abr': {'convert': 'float_none'},
'asr': {'convert': 'float_none'},
'source': {'convert': 'ignore', 'field': 'source_preference'},
'source': {'convert': 'float', 'field': 'source_preference', 'default': -1},
'codec': {'type': 'combined', 'field': ('vcodec', 'acodec')},
'br': {'type': 'combined', 'field': ('tbr', 'vbr', 'abr'), 'same_limit': True},
@@ -1553,7 +1586,7 @@ class FormatSort:
'res': {'type': 'multiple', 'field': ('height', 'width'),
'function': lambda it: (lambda l: min(l) if l else 0)(tuple(filter(None, it)))},
# Most of these exist only for compatibility reasons
# Deprecated
'dimension': {'type': 'alias', 'field': 'res'},
'resolution': {'type': 'alias', 'field': 'res'},
'extension': {'type': 'alias', 'field': 'ext'},
@@ -1562,7 +1595,7 @@ class FormatSort:
'video_bitrate': {'type': 'alias', 'field': 'vbr'},
'audio_bitrate': {'type': 'alias', 'field': 'abr'},
'framerate': {'type': 'alias', 'field': 'fps'},
'language_preference': {'type': 'alias', 'field': 'lang'}, # not named as 'language' because such a field exists
'language_preference': {'type': 'alias', 'field': 'lang'},
'protocol': {'type': 'alias', 'field': 'proto'},
'source_preference': {'type': 'alias', 'field': 'source'},
'filesize_approx': {'type': 'alias', 'field': 'fs_approx'},
@@ -1582,10 +1615,20 @@ class FormatSort:
'format_id': {'type': 'alias', 'field': 'id'},
}
_order = []
def __init__(self, ie, field_preference):
self._order = []
self.ydl = ie._downloader
self.evaluate_params(self.ydl.params, field_preference)
if ie.get_param('verbose'):
self.print_verbose_info(self.ydl.write_debug)
def _get_field_setting(self, field, key):
if field not in self.settings:
if key in ('forced', 'priority'):
return False
self.ydl.deprecation_warning(
f'Using arbitrary fields ({field}) for format sorting is deprecated '
'and may be removed in a future version')
self.settings[field] = {}
propObj = self.settings[field]
if key not in propObj:
@@ -1668,7 +1711,10 @@ def add_item(field, reverse, closest, limit_text):
if field is None:
continue
if self._get_field_setting(field, 'type') == 'alias':
field = self._get_field_setting(field, 'field')
alias, field = field, self._get_field_setting(field, 'field')
self.ydl.deprecation_warning(
f'Format sorting alias {alias} is deprecated '
f'and may be removed in a future version. Please use {field} instead')
reverse = match.group('reverse') is not None
closest = match.group('separator') == '~'
limit_text = match.group('limit')
@@ -1772,10 +1818,7 @@ def calculate_preference(self, format):
def _sort_formats(self, formats, field_preference=[]):
if not formats:
return
format_sort = self.FormatSort() # params and to_screen are taken from the downloader
format_sort.evaluate_params(self._downloader.params, field_preference)
if self.get_param('verbose', False):
format_sort.print_verbose_info(self._downloader.write_debug)
format_sort = self.FormatSort(self, field_preference)
formats.sort(key=lambda f: format_sort.calculate_preference(f))
def _check_formats(self, formats, video_id):
@@ -1894,7 +1937,7 @@ def _parse_f4m_formats(self, manifest, manifest_url, video_id, preference=None,
tbr = int_or_none(media_el.attrib.get('bitrate'))
width = int_or_none(media_el.attrib.get('width'))
height = int_or_none(media_el.attrib.get('height'))
format_id = '-'.join(filter(None, [f4m_id, compat_str(i if tbr is None else tbr)]))
format_id = join_nonempty(f4m_id, tbr or i)
# If <bootstrapInfo> is present, the specified f4m is a
# stream-level manifest, and only set-level manifests may refer to
# external resources. See section 11.4 and section 4 of F4M spec
@@ -1956,7 +1999,7 @@ def _parse_f4m_formats(self, manifest, manifest_url, video_id, preference=None,
def _m3u8_meta_format(self, m3u8_url, ext=None, preference=None, quality=None, m3u8_id=None):
return {
'format_id': '-'.join(filter(None, [m3u8_id, 'meta'])),
'format_id': join_nonempty(m3u8_id, 'meta'),
'url': m3u8_url,
'ext': ext,
'protocol': 'm3u8',
@@ -2009,10 +2052,10 @@ def _parse_m3u8_formats_and_subtitles(
video_id=None):
formats, subtitles = [], {}
if '#EXT-X-FAXS-CM:' in m3u8_doc: # Adobe Flash Access
return formats, subtitles
has_drm = re.search(r'#EXT-X-SESSION-KEY:.*?URI="skd://', m3u8_doc)
has_drm = re.search('|'.join([
r'#EXT-X-FAXS-CM:', # Adobe Flash Access
r'#EXT-X-(?:SESSION-)?KEY:.*?URI="skd://', # Apple FairPlay
]), m3u8_doc)
def format_url(url):
return url if re.match(r'^https?://', url) else compat_urlparse.urljoin(m3u8_url, url)
@@ -2051,7 +2094,7 @@ def _extract_m3u8_playlist_indices(*args, **kwargs):
if '#EXT-X-TARGETDURATION' in m3u8_doc: # media playlist, return as is
formats = [{
'format_id': '-'.join(map(str, filter(None, [m3u8_id, idx]))),
'format_id': join_nonempty(m3u8_id, idx),
'format_index': idx,
'url': m3u8_url,
'ext': ext,
@@ -2100,7 +2143,7 @@ def extract_media(x_media_line):
if media_url:
manifest_url = format_url(media_url)
formats.extend({
'format_id': '-'.join(map(str, filter(None, (m3u8_id, group_id, name, idx)))),
'format_id': join_nonempty(m3u8_id, group_id, name, idx),
'format_note': name,
'format_index': idx,
'url': manifest_url,
@@ -2157,9 +2200,9 @@ def build_stream_name():
# format_id intact.
if not live:
stream_name = build_stream_name()
format_id[1] = stream_name if stream_name else '%d' % (tbr if tbr else len(formats))
format_id[1] = stream_name or '%d' % (tbr or len(formats))
f = {
'format_id': '-'.join(map(str, filter(None, format_id))),
'format_id': join_nonempty(*format_id),
'format_index': idx,
'url': manifest_url,
'manifest_url': m3u8_url,
@@ -2623,7 +2666,7 @@ def extract_Initialization(source):
mpd_duration = parse_duration(mpd_doc.get('mediaPresentationDuration'))
formats, subtitles = [], {}
stream_numbers = {'audio': 0, 'video': 0}
stream_numbers = collections.defaultdict(int)
for period in mpd_doc.findall(_add_ns('Period')):
period_duration = parse_duration(period.get('duration')) or mpd_duration
period_ms_info = extract_multisegment_info(period, {
@@ -2645,6 +2688,8 @@ def extract_Initialization(source):
content_type = mime_type
elif codecs.split('.')[0] == 'stpp':
content_type = 'text'
elif mimetype2ext(mime_type) in ('tt', 'dfxp', 'ttml', 'xml', 'json'):
content_type = 'text'
else:
self.report_warning('Unknown MIME type %s in DASH manifest' % mime_type)
continue
@@ -2687,10 +2732,8 @@ def extract_Initialization(source):
'format_note': 'DASH %s' % content_type,
'filesize': filesize,
'container': mimetype2ext(mime_type) + '_dash',
'manifest_stream_number': stream_numbers[content_type]
}
f.update(parse_codecs(codecs))
stream_numbers[content_type] += 1
elif content_type == 'text':
f = {
'ext': mimetype2ext(mime_type),
@@ -2857,7 +2900,9 @@ def add_segment_url():
else:
# Assuming direct URL to unfragmented media.
f['url'] = base_url
if content_type in ('video', 'audio') or mime_type == 'image/jpeg':
if content_type in ('video', 'audio', 'image/jpeg'):
f['manifest_stream_number'] = stream_numbers[f['url']]
stream_numbers[f['url']] += 1
formats.append(f)
elif content_type == 'text':
subtitles.setdefault(lang or 'und', []).append(f)
@@ -2946,13 +2991,6 @@ def _parse_ism_formats_and_subtitles(self, ism_doc, ism_url, ism_id=None):
})
fragment_ctx['time'] += fragment_ctx['duration']
format_id = []
if ism_id:
format_id.append(ism_id)
if stream_name:
format_id.append(stream_name)
format_id.append(compat_str(tbr))
if stream_type == 'text':
subtitles.setdefault(stream_language, []).append({
'ext': 'ismt',
@@ -2971,7 +3009,7 @@ def _parse_ism_formats_and_subtitles(self, ism_doc, ism_url, ism_id=None):
})
elif stream_type in ('video', 'audio'):
formats.append({
'format_id': '-'.join(format_id),
'format_id': join_nonempty(ism_id, stream_name, tbr),
'url': ism_url,
'manifest_url': ism_url,
'ext': 'ismv' if stream_type == 'video' else 'isma',
@@ -3501,6 +3539,32 @@ def extract_subtitles(self, *args, **kwargs):
def _get_subtitles(self, *args, **kwargs):
raise NotImplementedError('This method must be implemented by subclasses')
def extract_comments(self, *args, **kwargs):
if not self.get_param('getcomments'):
return None
generator = self._get_comments(*args, **kwargs)
def extractor():
comments = []
try:
while True:
comments.append(next(generator))
except KeyboardInterrupt:
interrupted = True
self.to_screen('Interrupted by user')
except StopIteration:
interrupted = False
comment_count = len(comments)
self.to_screen(f'Extracted {comment_count} comments')
return {
'comments': comments,
'comment_count': None if interrupted else comment_count
}
return extractor
def _get_comments(self, *args, **kwargs):
raise NotImplementedError('This method must be implemented by subclasses')
@staticmethod
def _merge_subtitle_items(subtitle_list1, subtitle_list2):
""" Merge subtitle items for one language. Items with duplicated URLs
@@ -3585,9 +3649,11 @@ class SearchInfoExtractor(InfoExtractor):
"""
Base class for paged search queries extractors.
They accept URLs in the format _SEARCH_KEY(|all|[0-9]):{query}
Instances should define _SEARCH_KEY and _MAX_RESULTS.
Instances should define _SEARCH_KEY and optionally _MAX_RESULTS
"""
_MAX_RESULTS = float('inf')
@classmethod
def _make_valid_url(cls):
return r'%s(?P<prefix>|[1-9][0-9]*|all):(?P<query>[\s\S]+)' % cls._SEARCH_KEY
@@ -3617,7 +3683,14 @@ def _real_extract(self, query):
return self._get_n_results(query, n)
def _get_n_results(self, query, n):
"""Get a specified number of results for a query"""
"""Get a specified number of results for a query.
Either this function or _search_results must be overridden by subclasses """
return self.playlist_result(
itertools.islice(self._search_results(query), 0, None if n == float('inf') else n),
query, query)
def _search_results(self, query):
"""Returns an iterator of search results"""
raise NotImplementedError('This method must be implemented by subclasses')
@property


@@ -55,7 +55,6 @@ class CorusIE(ThePlatformFeedIE):
'timestamp': 1486392197,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
'expected_warnings': ['Failed to parse JSON'],


@@ -57,7 +57,7 @@ def _real_extract(self, url):
file_versions = coub['file_versions']
QUALITIES = ('low', 'med', 'high')
QUALITIES = ('low', 'med', 'high', 'higher')
MOBILE = 'mobile'
IPHONE = 'iphone'
@@ -86,6 +86,7 @@ def _real_extract(self, url):
'format_id': '%s-%s-%s' % (HTML5, kind, quality),
'filesize': int_or_none(item.get('size')),
'vcodec': 'none' if kind == 'audio' else None,
'acodec': 'none' if kind == 'video' else None,
'quality': quality_key(quality),
'source_preference': preference_key(HTML5),
})


@@ -0,0 +1,40 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import unified_strdate
class CozyTVIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?cozy\.tv/(?P<uploader>[^/]+)/replays/(?P<id>[^/$#&?]+)'
_TESTS = [{
'url': 'https://cozy.tv/beardson/replays/2021-11-19_1',
'info_dict': {
'id': 'beardson-2021-11-19_1',
'ext': 'mp4',
'title': 'pokemon pt2',
'uploader': 'beardson',
'upload_date': '20211119',
'was_live': True,
'duration': 7981,
},
'params': {'skip_download': True}
}]
def _real_extract(self, url):
uploader, date = self._match_valid_url(url).groups()
id = f'{uploader}-{date}'
data_json = self._download_json(f'https://api.cozy.tv/cache/{uploader}/replay/{date}', id)
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
f'https://cozycdn.foxtrotstream.xyz/replays/{uploader}/{date}/index.m3u8', id, ext='mp4')
return {
'id': id,
'title': data_json.get('title'),
'uploader': data_json.get('user') or uploader,
'upload_date': unified_strdate(data_json.get('date')),
'was_live': True,
'duration': data_json.get('duration'),
'formats': formats,
'subtitles': subtitles,
}


@@ -27,6 +27,7 @@
int_or_none,
lowercase_escape,
merge_dicts,
qualities,
remove_end,
sanitized_Request,
try_get,
@@ -478,19 +479,24 @@ def _real_extract(self, url):
[r'<a[^>]+href="/publisher/[^"]+"[^>]*>([^<]+)</a>', r'<div>\s*Publisher:\s*<span>\s*(.+?)\s*</span>\s*</div>'],
webpage, 'video_uploader', default=False)
requested_languages = self._configuration_arg('language')
requested_hardsubs = [('' if val == 'none' else val) for val in self._configuration_arg('hardsub')]
language_preference = qualities((requested_languages or [language or ''])[::-1])
hardsub_preference = qualities((requested_hardsubs or ['', language or ''])[::-1])
formats = []
for stream in media.get('streams', []):
audio_lang = stream.get('audio_lang')
hardsub_lang = stream.get('hardsub_lang')
audio_lang = stream.get('audio_lang') or ''
hardsub_lang = stream.get('hardsub_lang') or ''
if (requested_languages and audio_lang.lower() not in requested_languages
or requested_hardsubs and hardsub_lang.lower() not in requested_hardsubs):
continue
vrv_formats = self._extract_vrv_formats(
stream.get('url'), video_id, stream.get('format'),
audio_lang, hardsub_lang)
for f in vrv_formats:
f['language_preference'] = 1 if audio_lang == language else 0
f['quality'] = (
1 if not hardsub_lang
else 0 if hardsub_lang == language
else -1)
f['language_preference'] = language_preference(audio_lang)
f['quality'] = hardsub_preference(hardsub_lang)
formats.extend(vrv_formats)
if not formats:
available_fmts = []
@@ -650,7 +656,7 @@ def _real_extract(self, url):
class CrunchyrollShowPlaylistIE(CrunchyrollBaseIE):
IE_NAME = 'crunchyroll:playlist'
_VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.com/(?!(?:news|anime-news|library|forum|launchcalendar|lineup|store|comics|freetrial|login|media-\d+))(?P<id>[\w\-]+))/?(?:\?|$)'
_VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.com/(?:\w{1,2}/)?(?!(?:news|anime-news|library|forum|launchcalendar|lineup|store|comics|freetrial|login|media-\d+))(?P<id>[\w\-]+))/?(?:\?|$)'
_TESTS = [{
'url': 'https://www.crunchyroll.com/a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi',
@@ -672,6 +678,9 @@ class CrunchyrollShowPlaylistIE(CrunchyrollBaseIE):
# geo-restricted (US), 18+ maturity wall, non-premium will be available since 2015.11.14
'url': 'http://www.crunchyroll.com/ladies-versus-butlers?skip_wall=1',
'only_matching': True,
}, {
'url': 'http://www.crunchyroll.com/fr/ladies-versus-butlers',
'only_matching': True,
}]
def _real_extract(self, url):
@@ -683,18 +692,72 @@ def _real_extract(self, url):
headers=self.geo_verification_headers())
title = self._html_search_meta('name', webpage, default=None)
episode_paths = re.findall(
r'(?s)<li id="showview_videos_media_(\d+)"[^>]+>.*?<a href="([^"]+)"',
webpage)
entries = [
self.url_result('http://www.crunchyroll.com' + ep, 'Crunchyroll', ep_id)
for ep_id, ep in episode_paths
]
entries.reverse()
episode_re = r'<li id="showview_videos_media_(\d+)"[^>]+>.*?<a href="([^"]+)"'
season_re = r'<a [^>]+season-dropdown[^>]+>([^<]+)'
paths = re.findall(f'(?s){episode_re}|{season_re}', webpage)
entries, current_season = [], None
for ep_id, ep, season in paths:
if season:
current_season = season
continue
entries.append(self.url_result(
f'http://www.crunchyroll.com{ep}', CrunchyrollIE.ie_key(), ep_id, season=current_season))
return {
'_type': 'playlist',
'id': show_id,
'title': title,
'entries': entries,
'entries': reversed(entries),
}
class CrunchyrollBetaIE(CrunchyrollBaseIE):
IE_NAME = 'crunchyroll:beta'
_VALID_URL = r'https?://beta\.crunchyroll\.com/(?P<lang>(?:\w{1,2}/)?)watch/(?P<internal_id>\w+)/(?P<id>[\w\-]+)/?(?:\?|$)'
_TESTS = [{
'url': 'https://beta.crunchyroll.com/watch/GY2P1Q98Y/to-the-future',
'info_dict': {
'id': '696363',
'ext': 'mp4',
'timestamp': 1459610100,
'description': 'md5:a022fbec4fbb023d43631032c91ed64b',
'uploader': 'Toei Animation',
'title': 'World Trigger Episode 73 To the Future',
'upload_date': '20160402',
},
'params': {'skip_download': 'm3u8'},
'expected_warnings': ['Unable to download XML']
}]
def _real_extract(self, url):
lang, internal_id, display_id = self._match_valid_url(url).group('lang', 'internal_id', 'id')
webpage = self._download_webpage(url, display_id)
episode_data = self._parse_json(
self._search_regex(r'__INITIAL_STATE__\s*=\s*({.+?})\s*;', webpage, 'episode data'),
display_id)['content']['byId'][internal_id]
video_id = episode_data['external_id'].split('.')[1]
series_id = episode_data['episode_metadata']['series_slug_title']
return self.url_result(f'https://www.crunchyroll.com/{lang}{series_id}/{display_id}-{video_id}',
CrunchyrollIE.ie_key(), video_id)
class CrunchyrollBetaShowIE(CrunchyrollBaseIE):
IE_NAME = 'crunchyroll:playlist:beta'
_VALID_URL = r'https?://beta\.crunchyroll\.com/(?P<lang>(?:\w{1,2}/)?)series/\w+/(?P<id>[\w\-]+)/?(?:\?|$)'
_TESTS = [{
'url': 'https://beta.crunchyroll.com/series/GY19NQ2QR/Girl-Friend-BETA',
'info_dict': {
'id': 'girl-friend-beta',
'title': 'Girl Friend BETA',
},
'playlist_mincount': 10,
}, {
'url': 'https://beta.crunchyroll.com/it/series/GY19NQ2QR/Girl-Friend-BETA',
'only_matching': True,
}]
def _real_extract(self, url):
lang, series_id = self._match_valid_url(url).group('lang', 'id')
return self.url_result(f'https://www.crunchyroll.com/{lang}{series_id.lower()}',
CrunchyrollShowPlaylistIE.ie_key(), series_id)


@@ -18,7 +18,7 @@
str_to_int,
unescapeHTML,
)
from .senateisvp import SenateISVPIE
from .senategov import SenateISVPIE
from .ustream import UstreamIE


@@ -15,7 +15,6 @@
class CuriosityStreamBaseIE(InfoExtractor):
_NETRC_MACHINE = 'curiositystream'
_auth_token = None
_API_BASE_URL = 'https://api.curiositystream.com/v1/'
def _handle_errors(self, result):
error = result.get('error', {}).get('message')
@@ -39,38 +38,44 @@ def _real_initialize(self):
if email is None:
return
result = self._download_json(
self._API_BASE_URL + 'login', None, data=urlencode_postdata({
'https://api.curiositystream.com/v1/login', None,
note='Logging in', data=urlencode_postdata({
'email': email,
'password': password,
}))
self._handle_errors(result)
self._auth_token = result['message']['auth_token']
CuriosityStreamBaseIE._auth_token = result['message']['auth_token']
class CuriosityStreamIE(CuriosityStreamBaseIE):
IE_NAME = 'curiositystream'
_VALID_URL = r'https?://(?:app\.)?curiositystream\.com/video/(?P<id>\d+)'
_TEST = {
_TESTS = [{
'url': 'https://app.curiositystream.com/video/2',
'info_dict': {
'id': '2',
'ext': 'mp4',
'title': 'How Did You Develop The Internet?',
'description': 'Vint Cerf, Google\'s Chief Internet Evangelist, describes how he and Bob Kahn created the internet.',
'channel': 'Curiosity Stream',
'categories': ['Technology', 'Interview'],
'average_rating': 96.79,
'series_id': '2',
},
'params': {
'format': 'bestvideo',
# m3u8 download
'skip_download': True,
},
}
}]
_API_BASE_URL = 'https://api.curiositystream.com/v1/media/'
def _real_extract(self, url):
video_id = self._match_id(url)
formats = []
for encoding_format in ('m3u8', 'mpd'):
media = self._call_api('media/' + video_id, video_id, query={
media = self._call_api(video_id, video_id, query={
'encodingsNew': 'true',
'encodingsFormat': encoding_format,
})
@@ -140,12 +145,33 @@ def _real_extract(self, url):
'duration': int_or_none(media.get('duration')),
'tags': media.get('tags'),
'subtitles': subtitles,
'channel': media.get('producer'),
'categories': [media.get('primary_category'), media.get('type')],
'average_rating': media.get('rating_percentage'),
'series_id': str(media.get('collection_id') or '') or None,
}
class CuriosityStreamCollectionIE(CuriosityStreamBaseIE):
IE_NAME = 'curiositystream:collection'
_VALID_URL = r'https?://(?:app\.)?curiositystream\.com/(?:collections?|series)/(?P<id>\d+)'
class CuriosityStreamCollectionBaseIE(CuriosityStreamBaseIE):
def _real_extract(self, url):
collection_id = self._match_id(url)
collection = self._call_api(collection_id, collection_id)
entries = []
for media in collection.get('media', []):
media_id = compat_str(media.get('id'))
media_type, ie = ('series', CuriosityStreamSeriesIE) if media.get('is_collection') else ('video', CuriosityStreamIE)
entries.append(self.url_result(
'https://curiositystream.com/%s/%s' % (media_type, media_id),
ie=ie.ie_key(), video_id=media_id))
return self.playlist_result(
entries, collection_id,
collection.get('title'), collection.get('description'))
class CuriosityStreamCollectionsIE(CuriosityStreamCollectionBaseIE):
IE_NAME = 'curiositystream:collections'
_VALID_URL = r'https?://(?:app\.)?curiositystream\.com/collections/(?P<id>\d+)'
_API_BASE_URL = 'https://api.curiositystream.com/v2/collections/'
_TESTS = [{
'url': 'https://curiositystream.com/collections/86',
@@ -156,7 +182,17 @@ class CuriosityStreamCollectionIE(CuriosityStreamBaseIE):
},
'playlist_mincount': 7,
}, {
'url': 'https://app.curiositystream.com/collection/2',
'url': 'https://curiositystream.com/collections/36',
'only_matching': True,
}]
class CuriosityStreamSeriesIE(CuriosityStreamCollectionBaseIE):
IE_NAME = 'curiositystream:series'
_VALID_URL = r'https?://(?:app\.)?curiositystream\.com/(?:series|collection)/(?P<id>\d+)'
_API_BASE_URL = 'https://api.curiositystream.com/v2/series/'
_TESTS = [{
'url': 'https://curiositystream.com/series/2',
'info_dict': {
'id': '2',
'title': 'Curious Minds: The Internet',
@@ -164,23 +200,6 @@ class CuriosityStreamCollectionIE(CuriosityStreamBaseIE):
},
'playlist_mincount': 16,
}, {
'url': 'https://curiositystream.com/series/2',
'only_matching': True,
}, {
'url': 'https://curiositystream.com/collections/36',
'url': 'https://curiositystream.com/collection/2',
'only_matching': True,
}]
def _real_extract(self, url):
collection_id = self._match_id(url)
collection = self._call_api(collection_id, collection_id)
entries = []
for media in collection.get('media', []):
media_id = compat_str(media.get('id'))
media_type, ie = ('series', CuriosityStreamCollectionIE) if media.get('is_collection') else ('video', CuriosityStreamIE)
entries.append(self.url_result(
'https://curiositystream.com/%s/%s' % (media_type, media_id),
ie=ie.ie_key(), video_id=media_id))
return self.playlist_result(
entries, collection_id,
collection.get('title'), collection.get('description'))


@@ -1,42 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals
from .dplay import DPlayIE
class DiscoveryNetworksDeIE(DPlayIE):
_VALID_URL = r'https?://(?:www\.)?(?P<domain>(?:tlc|dmax)\.de|dplay\.co\.uk)/(?:programme|show|sendungen)/(?P<programme>[^/]+)/(?:video/)?(?P<alternate_id>[^/]+)'
_TESTS = [{
'url': 'https://www.tlc.de/programme/breaking-amish/video/die-welt-da-drauen/DCB331270001100',
'info_dict': {
'id': '78867',
'ext': 'mp4',
'title': 'Die Welt da draußen',
'description': 'md5:61033c12b73286e409d99a41742ef608',
'timestamp': 1554069600,
'upload_date': '20190331',
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}, {
'url': 'https://www.dmax.de/programme/dmax-highlights/video/tuning-star-sidney-hoffmann-exklusiv-bei-dmax/191023082312316',
'only_matching': True,
}, {
'url': 'https://www.dplay.co.uk/show/ghost-adventures/video/hotel-leger-103620/EHD_280313B',
'only_matching': True,
}, {
'url': 'https://tlc.de/sendungen/breaking-amish/die-welt-da-drauen/',
'only_matching': True,
}]
def _real_extract(self, url):
domain, programme, alternate_id = self._match_valid_url(url).groups()
country = 'GB' if domain == 'dplay.co.uk' else 'DE'
realm = 'questuk' if country == 'GB' else domain.replace('.', '')
return self._get_disco_api_info(
url, '%s/%s' % (programme, alternate_id),
'sonic-eu1-prod.disco-api.com', realm, country)


@@ -1,98 +0,0 @@
# coding: utf-8
from __future__ import unicode_literals
import json
from ..compat import compat_str
from ..utils import try_get
from .common import InfoExtractor
from .dplay import DPlayIE
class DiscoveryPlusIndiaIE(DPlayIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.in/videos?' + DPlayIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.discoveryplus.in/videos/how-do-they-do-it/fugu-and-more?seasonId=8&type=EPISODE',
'info_dict': {
'id': '27104',
'ext': 'mp4',
'display_id': 'how-do-they-do-it/fugu-and-more',
'title': 'Fugu and More',
'description': 'The Japanese catch, prepare and eat the deadliest fish on the planet.',
'duration': 1319,
'timestamp': 1582309800,
'upload_date': '20200221',
'series': 'How Do They Do It?',
'season_number': 8,
'episode_number': 2,
'creator': 'Discovery Channel',
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
'skip': 'Cookies (not necessarily logged in) are needed'
}]
def _update_disco_api_headers(self, headers, disco_base, display_id, realm):
headers['x-disco-params'] = 'realm=%s' % realm
headers['x-disco-client'] = 'WEB:UNKNOWN:dplus-india:17.0.0'
def _download_video_playback_info(self, disco_base, video_id, headers):
return self._download_json(
disco_base + 'playback/v3/videoPlaybackInfo',
video_id, headers=headers, data=json.dumps({
'deviceInfo': {
'adBlocker': False,
},
'videoId': video_id,
}).encode('utf-8'))['data']['attributes']['streaming']
def _real_extract(self, url):
display_id = self._match_id(url)
return self._get_disco_api_info(
url, display_id, 'ap2-prod-direct.discoveryplus.in', 'dplusindia', 'in')
class DiscoveryPlusIndiaShowIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.in/show/(?P<show_name>[^/]+)/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://www.discoveryplus.in/show/how-do-they-do-it',
'playlist_mincount': 140,
'info_dict': {
'id': 'how-do-they-do-it',
},
}]
def _entries(self, show_name):
headers = {
'x-disco-client': 'WEB:UNKNOWN:dplus-india:prod',
'x-disco-params': 'realm=dplusindia',
'referer': 'https://www.discoveryplus.in/',
}
show_url = 'https://ap2-prod-direct.discoveryplus.in/cms/routes/show/{}?include=default'.format(show_name)
show_json = self._download_json(show_url,
video_id=show_name,
headers=headers)['included'][4]['attributes']['component']
show_id = show_json['mandatoryParams'].split('=')[-1]
season_url = 'https://ap2-prod-direct.discoveryplus.in/content/videos?sort=episodeNumber&filter[seasonNumber]={}&filter[show.id]={}&page[size]=100&page[number]={}'
for season in show_json['filters'][0]['options']:
season_id = season['id']
total_pages, page_num = 1, 0
while page_num < total_pages:
season_json = self._download_json(season_url.format(season_id, show_id, compat_str(page_num + 1)),
video_id=show_id, headers=headers,
note='Downloading JSON metadata%s' % (' page %d' % page_num if page_num else ''))
if page_num == 0:
total_pages = try_get(season_json, lambda x: x['meta']['totalPages'], int) or 1
episodes_json = season_json['data']
for episode in episodes_json:
video_id = episode['attributes']['path']
yield self.url_result(
'https://discoveryplus.in/videos/%s' % video_id,
ie=DiscoveryPlusIndiaIE.ie_key(), video_id=video_id)
page_num += 1
def _real_extract(self, url):
show_name = self._match_valid_url(url).group('show_name')
return self.playlist_result(self._entries(show_name), playlist_id=show_name)

View File

@@ -7,8 +7,8 @@
from ..utils import (
int_or_none,
unified_strdate,
compat_str,
determine_ext,
join_nonempty,
update_url_query,
)
@@ -119,18 +119,13 @@ def _real_extract(self, url):
continue
formats.append(f)
continue
format_id = []
if flavor_format:
format_id.append(flavor_format)
if tbr:
format_id.append(compat_str(tbr))
ext = determine_ext(flavor_url)
if flavor_format == 'applehttp' or ext == 'm3u8':
ext = 'mp4'
width = int_or_none(flavor.get('width'))
height = int_or_none(flavor.get('height'))
formats.append({
'format_id': '-'.join(format_id),
'format_id': join_nonempty(flavor_format, tbr),
'url': flavor_url,
'width': width,
'height': height,

View File

@@ -2,6 +2,7 @@
from __future__ import unicode_literals
import json
import uuid
from .common import InfoExtractor
from ..compat import compat_HTTPError
@@ -11,12 +12,172 @@
float_or_none,
int_or_none,
strip_or_none,
try_get,
unified_timestamp,
)
class DPlayIE(InfoExtractor):
class DPlayBaseIE(InfoExtractor):
_PATH_REGEX = r'/(?P<id>[^/]+/[^/?#]+)'
_auth_token_cache = {}
def _get_auth(self, disco_base, display_id, realm, needs_device_id=True):
key = (disco_base, realm)
st = self._get_cookies(disco_base).get('st')
token = (st and st.value) or self._auth_token_cache.get(key)
if not token:
query = {'realm': realm}
if needs_device_id:
query['deviceId'] = uuid.uuid4().hex
token = self._download_json(
disco_base + 'token', display_id, 'Downloading token',
query=query)['data']['attributes']['token']
# Save cache only if cookies are not being set
if not self._get_cookies(disco_base).get('st'):
self._auth_token_cache[key] = token
return f'Bearer {token}'
def _process_errors(self, e, geo_countries):
info = self._parse_json(e.cause.read().decode('utf-8'), None)
error = info['errors'][0]
error_code = error.get('code')
if error_code == 'access.denied.geoblocked':
self.raise_geo_restricted(countries=geo_countries)
elif error_code in ('access.denied.missingpackage', 'invalid.token'):
raise ExtractorError(
'This video is only available for registered users. You may want to use --cookies.', expected=True)
raise ExtractorError(info['errors'][0]['detail'], expected=True)
def _update_disco_api_headers(self, headers, disco_base, display_id, realm):
headers['Authorization'] = self._get_auth(disco_base, display_id, realm, False)
def _download_video_playback_info(self, disco_base, video_id, headers):
streaming = self._download_json(
disco_base + 'playback/videoPlaybackInfo/' + video_id,
video_id, headers=headers)['data']['attributes']['streaming']
streaming_list = []
for format_id, format_dict in streaming.items():
streaming_list.append({
'type': format_id,
'url': format_dict.get('url'),
})
return streaming_list
def _get_disco_api_info(self, url, display_id, disco_host, realm, country, domain=''):
geo_countries = [country.upper()]
self._initialize_geo_bypass({
'countries': geo_countries,
})
disco_base = 'https://%s/' % disco_host
headers = {
'Referer': url,
}
self._update_disco_api_headers(headers, disco_base, display_id, realm)
try:
video = self._download_json(
disco_base + 'content/videos/' + display_id, display_id,
headers=headers, query={
'fields[channel]': 'name',
'fields[image]': 'height,src,width',
'fields[show]': 'name',
'fields[tag]': 'name',
'fields[video]': 'description,episodeNumber,name,publishStart,seasonNumber,videoDuration',
'include': 'images,primaryChannel,show,tags'
})
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 400:
self._process_errors(e, geo_countries)
raise
video_id = video['data']['id']
info = video['data']['attributes']
title = info['name'].strip()
formats = []
subtitles = {}
try:
streaming = self._download_video_playback_info(
disco_base, video_id, headers)
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 403:
self._process_errors(e, geo_countries)
raise
for format_dict in streaming:
if not isinstance(format_dict, dict):
continue
format_url = format_dict.get('url')
if not format_url:
continue
format_id = format_dict.get('type')
ext = determine_ext(format_url)
if format_id == 'dash' or ext == 'mpd':
dash_fmts, dash_subs = self._extract_mpd_formats_and_subtitles(
format_url, display_id, mpd_id='dash', fatal=False)
formats.extend(dash_fmts)
subtitles = self._merge_subtitles(subtitles, dash_subs)
elif format_id == 'hls' or ext == 'm3u8':
m3u8_fmts, m3u8_subs = self._extract_m3u8_formats_and_subtitles(
format_url, display_id, 'mp4',
entry_protocol='m3u8_native', m3u8_id='hls',
fatal=False)
formats.extend(m3u8_fmts)
subtitles = self._merge_subtitles(subtitles, m3u8_subs)
else:
formats.append({
'url': format_url,
'format_id': format_id,
})
self._sort_formats(formats)
creator = series = None
tags = []
thumbnails = []
included = video.get('included') or []
if isinstance(included, list):
for e in included:
attributes = e.get('attributes')
if not attributes:
continue
e_type = e.get('type')
if e_type == 'channel':
creator = attributes.get('name')
elif e_type == 'image':
src = attributes.get('src')
if src:
thumbnails.append({
'url': src,
'width': int_or_none(attributes.get('width')),
'height': int_or_none(attributes.get('height')),
})
if e_type == 'show':
series = attributes.get('name')
elif e_type == 'tag':
name = attributes.get('name')
if name:
tags.append(name)
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': strip_or_none(info.get('description')),
'duration': float_or_none(info.get('videoDuration'), 1000),
'timestamp': unified_timestamp(info.get('publishStart')),
'series': series,
'season_number': int_or_none(info.get('seasonNumber')),
'episode_number': int_or_none(info.get('episodeNumber')),
'creator': creator,
'tags': tags,
'thumbnails': thumbnails,
'formats': formats,
'subtitles': subtitles,
'http_headers': {
'referer': domain,
},
}
class DPlayIE(DPlayBaseIE):
_VALID_URL = r'''(?x)https?://
(?P<domain>
(?:www\.)?(?P<host>d
@@ -26,7 +187,7 @@ class DPlayIE(InfoExtractor):
)
)|
(?P<subdomain_country>es|it)\.dplay\.com
)/[^/]+''' + _PATH_REGEX
)/[^/]+''' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
# non geo restricted, via secure api, unsigned download hls URL
@@ -46,7 +207,6 @@ class DPlayIE(InfoExtractor):
'episode_number': 1,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}, {
@@ -67,7 +227,6 @@ class DPlayIE(InfoExtractor):
'episode_number': 1,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
}, {
@@ -87,7 +246,6 @@ class DPlayIE(InfoExtractor):
'episode_number': 7,
},
'params': {
'format': 'bestvideo',
'skip_download': True,
},
'skip': 'Available for Premium users',
@@ -153,138 +311,6 @@ class DPlayIE(InfoExtractor):
'only_matching': True,
}]
def _process_errors(self, e, geo_countries):
info = self._parse_json(e.cause.read().decode('utf-8'), None)
error = info['errors'][0]
error_code = error.get('code')
if error_code == 'access.denied.geoblocked':
self.raise_geo_restricted(countries=geo_countries)
elif error_code in ('access.denied.missingpackage', 'invalid.token'):
raise ExtractorError(
'This video is only available for registered users. You may want to use --cookies.', expected=True)
raise ExtractorError(info['errors'][0]['detail'], expected=True)
def _update_disco_api_headers(self, headers, disco_base, display_id, realm):
headers['Authorization'] = 'Bearer ' + self._download_json(
disco_base + 'token', display_id, 'Downloading token',
query={
'realm': realm,
})['data']['attributes']['token']
def _download_video_playback_info(self, disco_base, video_id, headers):
streaming = self._download_json(
disco_base + 'playback/videoPlaybackInfo/' + video_id,
video_id, headers=headers)['data']['attributes']['streaming']
streaming_list = []
for format_id, format_dict in streaming.items():
streaming_list.append({
'type': format_id,
'url': format_dict.get('url'),
})
return streaming_list
def _get_disco_api_info(self, url, display_id, disco_host, realm, country):
geo_countries = [country.upper()]
self._initialize_geo_bypass({
'countries': geo_countries,
})
disco_base = 'https://%s/' % disco_host
headers = {
'Referer': url,
}
self._update_disco_api_headers(headers, disco_base, display_id, realm)
try:
video = self._download_json(
disco_base + 'content/videos/' + display_id, display_id,
headers=headers, query={
'fields[channel]': 'name',
'fields[image]': 'height,src,width',
'fields[show]': 'name',
'fields[tag]': 'name',
'fields[video]': 'description,episodeNumber,name,publishStart,seasonNumber,videoDuration',
'include': 'images,primaryChannel,show,tags'
})
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 400:
self._process_errors(e, geo_countries)
raise
video_id = video['data']['id']
info = video['data']['attributes']
title = info['name'].strip()
formats = []
try:
streaming = self._download_video_playback_info(
disco_base, video_id, headers)
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 403:
self._process_errors(e, geo_countries)
raise
for format_dict in streaming:
if not isinstance(format_dict, dict):
continue
format_url = format_dict.get('url')
if not format_url:
continue
format_id = format_dict.get('type')
ext = determine_ext(format_url)
if format_id == 'dash' or ext == 'mpd':
formats.extend(self._extract_mpd_formats(
format_url, display_id, mpd_id='dash', fatal=False))
elif format_id == 'hls' or ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
format_url, display_id, 'mp4',
entry_protocol='m3u8_native', m3u8_id='hls',
fatal=False))
else:
formats.append({
'url': format_url,
'format_id': format_id,
})
self._sort_formats(formats)
creator = series = None
tags = []
thumbnails = []
included = video.get('included') or []
if isinstance(included, list):
for e in included:
attributes = e.get('attributes')
if not attributes:
continue
e_type = e.get('type')
if e_type == 'channel':
creator = attributes.get('name')
elif e_type == 'image':
src = attributes.get('src')
if src:
thumbnails.append({
'url': src,
'width': int_or_none(attributes.get('width')),
'height': int_or_none(attributes.get('height')),
})
if e_type == 'show':
series = attributes.get('name')
elif e_type == 'tag':
name = attributes.get('name')
if name:
tags.append(name)
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': strip_or_none(info.get('description')),
'duration': float_or_none(info.get('videoDuration'), 1000),
'timestamp': unified_timestamp(info.get('publishStart')),
'series': series,
'season_number': int_or_none(info.get('seasonNumber')),
'episode_number': int_or_none(info.get('episodeNumber')),
'creator': creator,
'tags': tags,
'thumbnails': thumbnails,
'formats': formats,
}
def _real_extract(self, url):
mobj = self._match_valid_url(url)
display_id = mobj.group('id')
@@ -292,11 +318,11 @@ def _real_extract(self, url):
country = mobj.group('country') or mobj.group('subdomain_country') or mobj.group('plus_country')
host = 'disco-api.' + domain if domain[0] == 'd' else 'eu2-prod.disco-api.com'
return self._get_disco_api_info(
url, display_id, host, 'dplay' + country, country)
url, display_id, host, 'dplay' + country, country, domain)
class HGTVDeIE(DPlayIE):
_VALID_URL = r'https?://de\.hgtv\.com/sendungen' + DPlayIE._PATH_REGEX
class HGTVDeIE(DPlayBaseIE):
_VALID_URL = r'https?://de\.hgtv\.com/sendungen' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://de.hgtv.com/sendungen/tiny-house-klein-aber-oho/wer-braucht-schon-eine-toilette/',
'info_dict': {
@@ -313,9 +339,6 @@ class HGTVDeIE(DPlayIE):
'season_number': 3,
'episode_number': 3,
},
'params': {
'format': 'bestvideo',
},
}]
def _real_extract(self, url):
@@ -324,8 +347,8 @@ def _real_extract(self, url):
url, display_id, 'eu1-prod.disco-api.com', 'hgtv', 'de')
class DiscoveryPlusIE(DPlayIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.com/video' + DPlayIE._PATH_REGEX
class DiscoveryPlusIE(DPlayBaseIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.com/(?:\w{2}/)?video' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.discoveryplus.com/video/property-brothers-forever-home/food-and-family',
'info_dict': {
@@ -343,6 +366,9 @@ class DiscoveryPlusIE(DPlayIE):
'episode_number': 1,
},
'skip': 'Available for Premium users',
}, {
'url': 'https://discoveryplus.com/ca/video/bering-sea-gold-discovery-ca/goldslingers',
'only_matching': True,
}]
_PRODUCT = 'dplus_us'
@@ -372,7 +398,7 @@ def _real_extract(self, url):
class ScienceChannelIE(DiscoveryPlusIE):
_VALID_URL = r'https?://(?:www\.)?sciencechannel\.com/video' + DPlayIE._PATH_REGEX
_VALID_URL = r'https?://(?:www\.)?sciencechannel\.com/video' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.sciencechannel.com/video/strangest-things-science-atve-us/nazi-mystery-machine',
'info_dict': {
@@ -392,7 +418,7 @@ class ScienceChannelIE(DiscoveryPlusIE):
class DIYNetworkIE(DiscoveryPlusIE):
_VALID_URL = r'https?://(?:watch\.)?diynetwork\.com/video' + DPlayIE._PATH_REGEX
_VALID_URL = r'https?://(?:watch\.)?diynetwork\.com/video' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://watch.diynetwork.com/video/pool-kings-diy-network/bringing-beach-life-to-texas',
'info_dict': {
@@ -412,7 +438,7 @@ class DIYNetworkIE(DiscoveryPlusIE):
class AnimalPlanetIE(DiscoveryPlusIE):
_VALID_URL = r'https?://(?:www\.)?animalplanet\.com/video' + DPlayIE._PATH_REGEX
_VALID_URL = r'https?://(?:www\.)?animalplanet\.com/video' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.animalplanet.com/video/north-woods-law-animal-planet/squirrel-showdown',
'info_dict': {
@@ -429,3 +455,159 @@ class AnimalPlanetIE(DiscoveryPlusIE):
_PRODUCT = 'apl'
_API_URL = 'us1-prod-direct.animalplanet.com'
class DiscoveryPlusIndiaIE(DPlayBaseIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.in/videos?' + DPlayBaseIE._PATH_REGEX
_TESTS = [{
'url': 'https://www.discoveryplus.in/videos/how-do-they-do-it/fugu-and-more?seasonId=8&type=EPISODE',
'info_dict': {
'id': '27104',
'ext': 'mp4',
'display_id': 'how-do-they-do-it/fugu-and-more',
'title': 'Fugu and More',
'description': 'The Japanese catch, prepare and eat the deadliest fish on the planet.',
'duration': 1319,
'timestamp': 1582309800,
'upload_date': '20200221',
'series': 'How Do They Do It?',
'season_number': 8,
'episode_number': 2,
'creator': 'Discovery Channel',
},
'params': {
'skip_download': True,
}
}]
def _update_disco_api_headers(self, headers, disco_base, display_id, realm):
headers.update({
'x-disco-params': 'realm=%s' % realm,
'x-disco-client': 'WEB:UNKNOWN:dplus-india:17.0.0',
'Authorization': self._get_auth(disco_base, display_id, realm),
})
def _download_video_playback_info(self, disco_base, video_id, headers):
return self._download_json(
disco_base + 'playback/v3/videoPlaybackInfo',
video_id, headers=headers, data=json.dumps({
'deviceInfo': {
'adBlocker': False,
},
'videoId': video_id,
}).encode('utf-8'))['data']['attributes']['streaming']
def _real_extract(self, url):
display_id = self._match_id(url)
return self._get_disco_api_info(
url, display_id, 'ap2-prod-direct.discoveryplus.in', 'dplusindia', 'in', 'https://www.discoveryplus.in/')
class DiscoveryNetworksDeIE(DPlayBaseIE):
_VALID_URL = r'https?://(?:www\.)?(?P<domain>(?:tlc|dmax)\.de|dplay\.co\.uk)/(?:programme|show|sendungen)/(?P<programme>[^/]+)/(?:video/)?(?P<alternate_id>[^/]+)'
_TESTS = [{
'url': 'https://www.tlc.de/programme/breaking-amish/video/die-welt-da-drauen/DCB331270001100',
'info_dict': {
'id': '78867',
'ext': 'mp4',
'title': 'Die Welt da draußen',
'description': 'md5:61033c12b73286e409d99a41742ef608',
'timestamp': 1554069600,
'upload_date': '20190331',
},
'params': {
'skip_download': True,
},
}, {
'url': 'https://www.dmax.de/programme/dmax-highlights/video/tuning-star-sidney-hoffmann-exklusiv-bei-dmax/191023082312316',
'only_matching': True,
}, {
'url': 'https://www.dplay.co.uk/show/ghost-adventures/video/hotel-leger-103620/EHD_280313B',
'only_matching': True,
}, {
'url': 'https://tlc.de/sendungen/breaking-amish/die-welt-da-drauen/',
'only_matching': True,
}]
def _real_extract(self, url):
domain, programme, alternate_id = self._match_valid_url(url).groups()
country = 'GB' if domain == 'dplay.co.uk' else 'DE'
realm = 'questuk' if country == 'GB' else domain.replace('.', '')
return self._get_disco_api_info(
url, '%s/%s' % (programme, alternate_id),
'sonic-eu1-prod.disco-api.com', realm, country)
class DiscoveryPlusShowBaseIE(DPlayBaseIE):
def _entries(self, show_name):
headers = {
'x-disco-client': self._X_CLIENT,
'x-disco-params': f'realm={self._REALM}',
'referer': self._DOMAIN,
'Authentication': self._get_auth(self._BASE_API, None, self._REALM),
}
show_json = self._download_json(
f'{self._BASE_API}cms/routes/{self._SHOW_STR}/{show_name}?include=default',
video_id=show_name, headers=headers)['included'][self._INDEX]['attributes']['component']
show_id = show_json['mandatoryParams'].split('=')[-1]
season_url = self._BASE_API + 'content/videos?sort=episodeNumber&filter[seasonNumber]={}&filter[show.id]={}&page[size]=100&page[number]={}'
for season in show_json['filters'][0]['options']:
season_id = season['id']
total_pages, page_num = 1, 0
while page_num < total_pages:
season_json = self._download_json(
season_url.format(season_id, show_id, str(page_num + 1)), show_name, headers=headers,
note='Downloading season %s JSON metadata%s' % (season_id, ' page %d' % page_num if page_num else ''))
if page_num == 0:
total_pages = try_get(season_json, lambda x: x['meta']['totalPages'], int) or 1
episodes_json = season_json['data']
for episode in episodes_json:
video_id = episode['attributes']['path']
yield self.url_result(
'%svideos/%s' % (self._DOMAIN, video_id),
ie=self._VIDEO_IE.ie_key(), video_id=video_id)
page_num += 1
def _real_extract(self, url):
show_name = self._match_valid_url(url).group('show_name')
return self.playlist_result(self._entries(show_name), playlist_id=show_name)
class DiscoveryPlusItalyShowIE(DiscoveryPlusShowBaseIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.it/programmi/(?P<show_name>[^/]+)/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://www.discoveryplus.it/programmi/deal-with-it-stai-al-gioco',
'playlist_mincount': 168,
'info_dict': {
'id': 'deal-with-it-stai-al-gioco',
},
}]
_BASE_API = 'https://disco-api.discoveryplus.it/'
_DOMAIN = 'https://www.discoveryplus.it/'
_X_CLIENT = 'WEB:UNKNOWN:dplay-client:2.6.0'
_REALM = 'dplayit'
_SHOW_STR = 'programmi'
_INDEX = 1
_VIDEO_IE = DPlayIE
class DiscoveryPlusIndiaShowIE(DiscoveryPlusShowBaseIE):
_VALID_URL = r'https?://(?:www\.)?discoveryplus\.in/show/(?P<show_name>[^/]+)/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://www.discoveryplus.in/show/how-do-they-do-it',
'playlist_mincount': 140,
'info_dict': {
'id': 'how-do-they-do-it',
},
}]
_BASE_API = 'https://ap2-prod-direct.discoveryplus.in/'
_DOMAIN = 'https://www.discoveryplus.in/'
_X_CLIENT = 'WEB:UNKNOWN:dplus-india:prod'
_REALM = 'dplusindia'
_SHOW_STR = 'show'
_INDEX = 4
_VIDEO_IE = DiscoveryPlusIndiaIE


@@ -8,6 +8,7 @@
determine_ext,
ExtractorError,
int_or_none,
join_nonempty,
js_to_json,
mimetype2ext,
try_get,
@@ -139,13 +140,9 @@ def _parse_video_metadata(self, js, video_id, timestamp):
label = video.get('label')
height = self._search_regex(
r'^(\d+)[pP]', label or '', 'height', default=None)
format_id = ['http']
for f in (ext, label):
if f:
format_id.append(f)
formats.append({
'url': video_url,
'format_id': '-'.join(format_id),
'format_id': join_nonempty('http', ext, label),
'height': int_or_none(height),
})
self._sort_formats(formats)


@@ -86,7 +86,6 @@ class EggheadLessonIE(EggheadBaseIE):
},
'params': {
'skip_download': True,
'format': 'bestvideo',
},
}, {
'url': 'https://egghead.io/api/v1/lessons/react-add-redux-to-a-react-application',


@@ -8,7 +8,7 @@
class EpiconIE(InfoExtractor):
_VALID_URL = r'(?:https?://)(?:www\.)?epicon\.in/(?:documentaries|movies|tv-shows/[^/?#]+/[^/?#]+)/(?P<id>[^/?#]+)'
_VALID_URL = r'https?://(?:www\.)?epicon\.in/(?:documentaries|movies|tv-shows/[^/?#]+/[^/?#]+)/(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.epicon.in/documentaries/air-battle-of-srinagar',
'info_dict': {
@@ -84,7 +84,7 @@ def _real_extract(self, url):
class EpiconSeriesIE(InfoExtractor):
_VALID_URL = r'(?!.*season)(?:https?://)(?:www\.)?epicon\.in/tv-shows/(?P<id>[^/?#]+)'
_VALID_URL = r'(?!.*season)https?://(?:www\.)?epicon\.in/tv-shows/(?P<id>[^/?#]+)'
_TESTS = [{
'url': 'https://www.epicon.in/tv-shows/1-of-something',
'playlist_mincount': 5,


@@ -7,7 +7,9 @@
from ..compat import compat_str
from ..utils import (
determine_ext,
dict_get,
int_or_none,
unified_strdate,
unified_timestamp,
)
@@ -236,3 +238,44 @@ def _real_extract(self, url):
webpage, 'embed url')
return self.url_result(embed_url, 'AbcNewsVideo')
class ESPNCricInfoIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?espncricinfo\.com/video/[^#$&?/]+-(?P<id>\d+)'
_TESTS = [{
'url': 'https://www.espncricinfo.com/video/finch-chasing-comes-with-risks-despite-world-cup-trend-1289135',
'info_dict': {
'id': '1289135',
'ext': 'mp4',
'title': 'Finch: Chasing comes with \'risks\' despite World Cup trend',
'description': 'md5:ea32373303e25efbb146efdfc8a37829',
'upload_date': '20211113',
'duration': 96,
},
'params': {'skip_download': True}
}]
def _real_extract(self, url):
id = self._match_id(url)
data_json = self._download_json(f'https://hs-consumer-api.espncricinfo.com/v1/pages/video/video-details?videoId={id}', id)['video']
formats, subtitles = [], {}
for item in data_json.get('playbacks') or []:
if item.get('type') == 'HLS' and item.get('url'):
m3u8_frmts, m3u8_subs = self._extract_m3u8_formats_and_subtitles(item['url'], id)
formats.extend(m3u8_frmts)
subtitles = self._merge_subtitles(subtitles, m3u8_subs)
elif item.get('type') == 'AUDIO' and item.get('url'):
formats.append({
'url': item['url'],
'vcodec': 'none',
})
self._sort_formats(formats)
return {
'id': id,
'title': data_json.get('title'),
'description': data_json.get('summary'),
'upload_date': unified_strdate(dict_get(data_json, ('publishedAt', 'recordedAt'))),
'duration': data_json.get('duration'),
'formats': formats,
'subtitles': subtitles,
}


@@ -0,0 +1,64 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
parse_duration,
js_to_json,
)
class EUScreenIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?euscreen\.eu/item.html\?id=(?P<id>[^&?$/]+)'
_TESTS = [{
'url': 'https://euscreen.eu/item.html?id=EUS_0EBCBF356BFC4E12A014023BA41BD98C',
'info_dict': {
'id': 'EUS_0EBCBF356BFC4E12A014023BA41BD98C',
'ext': 'mp4',
'title': "L'effondrement du stade du Heysel",
'alt_title': 'Collapse of the Heysel Stadium',
'duration': 318.0,
'description': 'md5:f0ffffdfce6821139357a1b8359d6152',
'series': 'JA2 DERNIERE',
'episode': '-',
'uploader': 'INA / France',
'thumbnail': 'http://images3.noterik.com/domain/euscreenxl/user/eu_ina/video/EUS_0EBCBF356BFC4E12A014023BA41BD98C/image.jpg'
},
'params': {'skip_download': True}
}]
_payload = b'<fsxml><screen><properties><screenId>-1</screenId></properties><capabilities id="1"><properties><platform>Win32</platform><appcodename>Mozilla</appcodename><appname>Netscape</appname><appversion>5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36</appversion><useragent>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36</useragent><cookiesenabled>true</cookiesenabled><screenwidth>784</screenwidth><screenheight>758</screenheight><orientation>undefined</orientation><smt_browserid>Sat, 07 Oct 2021 08:56:50 GMT</smt_browserid><smt_sessionid>1633769810758</smt_sessionid></properties></capabilities></screen></fsxml>'
def _real_extract(self, url):
id = self._match_id(url)
args_for_js_request = self._download_webpage(
'https://euscreen.eu/lou/LouServlet/domain/euscreenxl/html5application/euscreenxlitem',
id, data=self._payload, query={'actionlist': 'itempage', 'id': id})
info_js = self._download_webpage(
'https://euscreen.eu/lou/LouServlet/domain/euscreenxl/html5application/euscreenxlitem',
id, data=args_for_js_request.replace('screenid', 'screenId').encode())
video_json = self._parse_json(
self._search_regex(r'setVideo\(({.+})\)\(\$end\$\)put', info_js, 'Video JSON'),
id, transform_source=js_to_json)
meta_json = self._parse_json(
self._search_regex(r'setData\(({.+})\)\(\$end\$\)', info_js, 'Metadata JSON'),
id, transform_source=js_to_json)
formats = [{
'url': source['src'],
} for source in video_json.get('sources', [])]
self._sort_formats(formats)
return {
'id': id,
'title': meta_json.get('originalTitle'),
'alt_title': meta_json.get('title'),
'duration': parse_duration(meta_json.get('duration')),
'description': '%s\n%s' % (meta_json.get('summaryOriginal', ''), meta_json.get('summaryEnglish', '')),
'series': meta_json.get('series') or meta_json.get('seriesEnglish'),
'episode': meta_json.get('episodeNumber'),
'uploader': meta_json.get('provider'),
'thumbnail': meta_json.get('screenshot') or video_json.get('screenshot'),
'formats': formats,
}

Some files were not shown because too many files have changed in this diff.