Instead of using the MD5 hash of the enclosure download URL, we now
create a filename that follows the pattern `feedname/entry_title.hash.ext`,
where `feedname` is a uniquified feed title (stored in the DB),
`entry_title` is a truncated version of the entry title, `hash` is a
shortened hash based on the download URL, and `ext` is the original file
extension extracted from the download URL.
BUG: 457848
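Below is a minimal sketch of this naming scheme; the function name, the choice of SHA-1, the 8-character hash length, and the 60-character title limit are illustrative assumptions rather than the actual Kasts code.

    #include <QCryptographicHash>
    #include <QFileInfo>
    #include <QString>
    #include <QUrl>

    QString enclosurePath(const QString &feedname, const QString &entryTitle, const QUrl &downloadUrl)
    {
        // Shortened hash based on the download URL (SHA-1 truncated to 8 hex characters).
        const QString hash = QString::fromLatin1(
            QCryptographicHash::hash(downloadUrl.toString().toUtf8(), QCryptographicHash::Sha1).toHex().left(8));

        // Truncated version of the entry title, to keep filenames within filesystem limits.
        const QString title = entryTitle.left(60);

        // Original file extension extracted from the download URL.
        const QString ext = QFileInfo(downloadUrl.fileName()).suffix();

        return feedname + QLatin1Char('/') + title + QLatin1Char('.') + hash + QLatin1Char('.') + ext;
    }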
libVLC has a hardcoded maximum number of redirects, and several podcasts
require more redirects than that limit allows. Therefore we resolve the
final URL through QNetworkReply and pass that final URL to the audio
player.
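A rough sketch of this resolution step, using QNetworkAccessManager's built-in redirect handling; the function name and callback are assumptions, not the actual Kasts code.

    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QObject>
    #include <QUrl>
    #include <functional>

    void resolveFinalUrl(QNetworkAccessManager *nam, const QUrl &enclosureUrl,
                         std::function<void(const QUrl &)> playUrl)
    {
        QNetworkRequest request(enclosureUrl);
        // Let QNetworkAccessManager follow the redirect chain itself.
        request.setAttribute(QNetworkRequest::RedirectPolicyAttribute,
                             QNetworkRequest::NoLessSafeRedirectPolicy);
        request.setMaximumRedirectsAllowed(50);

        // A HEAD request is enough to discover the final location.
        QNetworkReply *reply = nam->head(request);
        QObject::connect(reply, &QNetworkReply::finished, [reply, playUrl]() {
            // After the redirects have been followed, url() holds the final URL,
            // which is then handed to the audio player.
            playUrl(reply->url());
            reply->deleteLater();
        });
    }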
This commit adds a number of API extensions (public and private) to the
entry, enclosure and related classes to allow runtime updates of their
internal data.
Additionally, the feed update routine has been adapted to detect changes
in entries, enclosures, etc. and pass them on to the relevant objects.
All of this functionality is put behind a new toggle exposed in the
settings (default is on). This is useful because a full update takes
quite a bit longer on underpowered hardware, so users should be able to
switch off this potentially non-essential overhead.
BUG: 446158
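The following is a small, self-contained illustration of the compare-and-update pattern and the settings toggle described above; the class, member, and parameter names are invented for the example and do not mirror the actual Kasts classes.

    #include <QObject>
    #include <QString>

    class Entry : public QObject
    {
        Q_OBJECT
    public:
        QString title() const { return m_title; }

        // Setter that allows the feed update to change data at runtime.
        void setTitle(const QString &title)
        {
            if (m_title != title) {
                m_title = title;
                Q_EMIT titleChanged(); // the UI picks up the change immediately
            }
        }

    Q_SIGNALS:
        void titleChanged();

    private:
        QString m_title;
    };

    // Called from the feed update routine for entries that already exist.
    void updateExistingEntry(Entry *entry, const QString &fetchedTitle, bool refreshExistingEntries)
    {
        if (!refreshExistingEntries) {
            return; // the new settings toggle: skip the extra work on slow hardware
        }
        entry->setTitle(fetchedTitle); // only emits a change when the title actually differs
    }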
This implements the gpodder API from scratch. It turned out that
libmygpo-qt has several critical bugs, and there has been no response to
pull requests upstream, so using that library was not an option.
The implementation in Kasts consists of the following:
- Can sync with gpodder.net or with a nextcloud server that has the
nextcloud-gpodder app installed. (This app is mostly API compatible
with gpodder.)
- Passwords are stored using QtKeychain. If no keychain is available, it
  will fall back to storing them in a file (see the sketch after this list).
- It syncs podcast subscriptions and episode play positions, including
marking episodes as played. Episodes that have a non-zero play
position will be added to the queue automatically.
- It will check for a metered connection before syncing. This is
coupled to the allowMeteredFeedUpdates setting.
- Full synchronization can be performed either manually (from the
settings page) or through automatic triggers: on startup and/or on
feed refresh.
- There is an additional possibility to trigger quick upload-only syncs
  to make sure that local changes are immediately uploaded to the
  server (if the connection allows). This is triggered when
  subscriptions are added or removed, when the pause/play button is
  toggled, or when an episode is marked as played.
- This implements a few safeguards to avoid having multiple feed URLs
  pointing to the same underlying feed (e.g. http vs https). This
  solves part of #17.
Solves #13
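As an illustration of the password handling mentioned in the list above, here is a sketch of storing a sync password with QtKeychain; the service name, key, and fallback handling are assumptions, not the actual Kasts code.

    #include <qt5keychain/keychain.h> // or <qt6keychain/keychain.h>, depending on the Qt version
    #include <QDebug>
    #include <QObject>
    #include <QString>

    void storeSyncPassword(const QString &username, const QString &password)
    {
        auto *job = new QKeychain::WritePasswordJob(QStringLiteral("org.kde.kasts"));
        job->setAutoDelete(true);
        job->setKey(username);
        job->setTextData(password);
        QObject::connect(job, &QKeychain::Job::finished, [job]() {
            if (job->error() != QKeychain::NoError) {
                // No usable keychain backend: this is where the file-based
                // fallback described above would kick in (not shown here).
                qWarning() << "Keychain unavailable:" << job->errorString();
            }
        });
        job->start();
    }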
The feed update routine, which was spread over several methods in
Fetcher, has been put into a self-contained KJob. This will allow the
job to be re-used later on, e.g. in gpodder sync, where feeds need to
be updated before syncing episode statuses.
This also makes the feed update abortable.
Lastly, but most importantly, the feed update procedure has been
optimized to minimize database transactions, resulting in a dramatic
speed-up. This is especially true for importing new feeds, which
will now be at least 5x faster on slow hardware.
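A rough sketch of what such a self-contained, abortable update job can look like with KJob (from KCoreAddons); the class name, members, and the batching comment are illustrative assumptions, not the actual Kasts job.

    #include <KJob>
    #include <QString>
    #include <QStringList>
    #include <QTimer>

    class UpdateFeedJob : public KJob
    {
        Q_OBJECT
    public:
        explicit UpdateFeedJob(const QStringList &feedUrls, QObject *parent = nullptr)
            : KJob(parent)
            , m_feedUrls(feedUrls)
        {
        }

        void start() override
        {
            // start() must return quickly; do the real work from the event loop.
            QTimer::singleShot(0, this, &UpdateFeedJob::processNext);
        }

    protected:
        bool doKill() override
        {
            // Returning true tells KJob the job could be stopped; the flag
            // makes sure no further feeds are processed.
            m_abort = true;
            return true;
        }

    private:
        void processNext()
        {
            if (m_abort) {
                return; // KJob::kill() has already finished the job
            }
            if (m_feedUrls.isEmpty()) {
                emitResult(); // success: all feeds updated
                return;
            }
            const QString url = m_feedUrls.takeFirst();
            // ... fetch and parse `url`, writing all entries of this feed to
            // the database inside a single transaction to keep the number of
            // transactions low ...
            QTimer::singleShot(0, this, &UpdateFeedJob::processNext);
        }

        QStringList m_feedUrls;
        bool m_abort = false;
    };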
This adds a new storage location setting to the Settings page.
Existing enclosures and images will be moved to the new location
(first copied, then deleted from the original location). If any of
the copy actions fail, the operation is aborted and the original
path is restored.
The StorageMoveJob is set up in such a way that it's easy to add other
files or subfolders in the future.
Solves #15
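A simplified sketch of the copy-then-delete strategy with rollback described above; the file handling is reduced to a flat list of filenames and the function name is an assumption.

    #include <QFile>
    #include <QString>
    #include <QStringList>

    bool moveFiles(const QString &fromDir, const QString &toDir, const QStringList &fileNames)
    {
        QStringList copied;

        // First copy everything; originals are only deleted once all copies succeeded.
        for (const QString &name : fileNames) {
            if (!QFile::copy(fromDir + QLatin1Char('/') + name, toDir + QLatin1Char('/') + name)) {
                // A copy failed: remove the partial copies so the original path is restored.
                for (const QString &done : copied) {
                    QFile::remove(toDir + QLatin1Char('/') + done);
                }
                return false;
            }
            copied << name;
        }

        // All copies succeeded: delete the files from the original location.
        for (const QString &name : fileNames) {
            QFile::remove(fromDir + QLatin1Char('/') + name);
        }
        return true;
    }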
For now, this only works with NetworkManager. The related settings are
greyed out on systems that do not use NetworkManager.
Some details of the implementation:
- Implement settings in the settings menu to enable/disable feed
  updates, episode downloads and/or image downloads on metered
  connections. If one of these options is disabled, an overlay dialog is
  shown with options to "not allow", "allow once", or "allow always"
  (see the sketch after this list).
- If the network is down, no attempt is made to download images, and the
  fallback image is used until the network is up again.
  This also solves an issue where the application would hang when the
  network was down and feed images had not been cached yet.
- In addition, part of the cachedImage implementation in Entry and Feed
  has been refactored so that code is re-used as part of the image()
  method in Fetcher.
- In case something unexpected happens, an error will be logged.
- This refactoring also includes a cleanup of many header includes to
  avoid circular dependencies.
- The error message will now be shown below the info message.
- Add database migration (for Errors)
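For illustration, here is a generic Qt sketch of the metered/offline check referenced in the list above. It uses QNetworkInformation (Qt 6.3 or newer) instead of the NetworkManager-specific code that Kasts actually relies on, and the settings flag is an invented parameter.

    #include <QNetworkInformation>

    bool feedUpdatesAllowedNow(bool allowMeteredFeedUpdates)
    {
        if (!QNetworkInformation::loadDefaultBackend()) {
            return true; // no backend available; don't block the user
        }
        const auto *info = QNetworkInformation::instance();
        if (info->reachability() != QNetworkInformation::Reachability::Online) {
            return false; // network is down: skip downloads and use the fallback image
        }
        if (info->isMetered() && !allowMeteredFeedUpdates) {
            return false; // metered connection and the user has not allowed updates
        }
        return true;
    }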
This enables the download method in Fetcher to resume a download when a
partial file is already saved to disk.
A full implementation of download resuming requires more changes,
because at startup the application currently cleans up files that
don't match the expected size.
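A sketch of the resume mechanism using an HTTP Range header; the function name and the simplified file handling are assumptions, and a real implementation would also have to verify that the server answered with 206 Partial Content before appending.

    #include <QFile>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QObject>
    #include <QUrl>

    QNetworkReply *resumeDownload(QNetworkAccessManager *nam, const QUrl &url, const QString &filePath)
    {
        auto *file = new QFile(filePath);
        file->open(QIODevice::WriteOnly | QIODevice::Append);

        QNetworkRequest request(url);
        if (file->size() > 0) {
            // Ask the server to send only the bytes we do not have yet.
            request.setRawHeader("Range", "bytes=" + QByteArray::number(file->size()) + "-");
        }

        QNetworkReply *reply = nam->get(request);
        QObject::connect(reply, &QNetworkReply::readyRead, [reply, file]() {
            file->write(reply->readAll()); // append new data to the partial file
        });
        QObject::connect(reply, &QNetworkReply::finished, [reply, file]() {
            file->close();
            delete file;
            reply->deleteLater();
        });
        return reply;
    }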
When adding a feed and simultaneously starting a feed update, a race
condition could occur in which the feed update would catch up with the
feed being added, and would start adding and marking old episodes as new
before the original addFeed method could reach them.
Images are now stored in the cache directory in a dedicated
subdirectory called "images".
Enclosures are now stored in the data directory in a dedicated
subdirectory called "enclosures".