

Invidious link didn’t work… Do you have the youtube link?
Heads up for future reference: the video ID is the same between YouTube and Invidious, so you can just replace the Invidious domain (inv.nadeko.net in this case) with youtube.com.
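For example (the video ID here is just a placeholder):

```
https://inv.nadeko.net/watch?v=VIDEO_ID  →  https://www.youtube.com/watch?v=VIDEO_ID
```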
Giphy has a documented API that you could use. There have been bulk downloaders, but I didn’t see any that had recent activity. However, you might still be able to use one as a model for your own script, like https://github.com/jcpsimmons/giphy-stacks
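If you go the script route, Giphy’s search endpoint is simple to hit; here’s a rough sketch (the API key and query are placeholders, and you should double-check the parameters against Giphy’s current docs):

```
# hypothetical example: search for GIFs 25 at a time, paging with `offset`
curl "https://api.giphy.com/v1/gifs/search?api_key=YOUR_API_KEY&q=cats&limit=25&offset=0"
```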
There were downloaders for Gfycat - gallery-dl supported it at one point - but the site is down now. However, you might be able to find collections that other people downloaded and are now hosting. You could also use the Internet Archive - they have documented tools and APIs.
There’s a Tenor mass downloader that uses the Tenor API and an API key that you provide.
Imgur hosts GIFs and is supported by gallery-dl, so that’s an option.
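gallery-dl usage is basically just the URL; for example (the URL below is a placeholder):

```
# hypothetical example: grab everything from an Imgur user/album URL
gallery-dl "https://imgur.com/user/SOME_USER"
```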
Also, read over https://github.com/simon987/awesome-datahoarding - there may be something useful for you there.
In terms of hosting, it would depend on my user base and if I want users to be able to upload GIFs, too. If it was just my close friends, then Immich would probably be fine, but if we had people I didn’t know directly using it, I’d want a more refined solution.
There’s Gifable, which is pretty focused but looks like it has a fairly small following. I haven’t used it myself to see how suitable it is. If you self-host it (or something else that uses S3), note that you can use MinIO or LocalStack for the S3 container rather than using AWS directly. I’m using MinIO as part of my stack now, though for a completely different app.
MediaCMS is another option. Less focused on GIFs but more actively developed, and intended to be used for this sort of purpose.
“But tante, then we will never have Open Source AI”. Exactly. That’s how reality works. If you can’t fulfil the criteria of a category you are not in that category. The fix is not to change the criteria. That’s playing pigeon chess.
This is a bad take. If your criteria aren’t grounded in reality, they aren’t useful, so of course you should change the criteria.
It’s also a missed opportunity to point to an AI model that did things right and that would qualify as “open source AI” even if that definition were not watered down. For example, OLMo (which I just learned about) says that they provide full insight into the training data as well as “full model weights, training code, training logs, training metrics in the form of Weights & Biases logs, and inference code.” Their most complex models are 7B models, which is enough to be relevant.
Saying “Meta and Alphabet will never release Open Source AI that meets the proposed definition” is fine. Saying “we’ll never have Open Source AI, period, that meets the proposed definition” means your proposed definition needs to be rewritten.
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix it by tuning your Nextcloud instance - upping the PHP memory limit, disabling debug mode and dropping the log level back to warn if you ever changed it, enabling memory caching, etc.
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
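As a rough sketch of the kind of settings those docs cover (the values below are illustrative placeholders, not tuned recommendations):

```
// config/config.php (excerpt)
'debug' => false,
'loglevel' => 2,                          // 2 = warn
'memcache.local' => '\OC\Memcache\APCu',  // requires the APCu PHP extension
```

```
; php.ini
memory_limit = 512M
```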
You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link.
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
Open WebUI publishes a Docker image that has a bundled Ollama that you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
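A minimal sketch of running that image, assuming an NVIDIA GPU with the container toolkit installed (check the linked docs for the exact tag and flags for your setup):

```
# illustrative only - ports, volume names, and tag may differ for you
docker run -d \
  -p 3000:8080 \
  --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:cuda
```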
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use. Likewise if you use the same outbound port from your router. But IME those issues will mostly stop the new services from starting - you’d have to stop the services or restart your machine for the new service to have a chance to grab the ports while they were unused. Otherwise I can’t think of any issues.
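If you want to check ahead of time, you can see whether anything is already listening on a given port before you start the proxy (port 80 here is just an example):

```
# show anything already listening on port 80 and which process owns it
sudo ss -tlnp | grep ':80 '
```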
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and causes me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
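For reference, here’s a minimal sketch of what that opt-in looks like with Traefik and Docker Compose (the service name, hostname, and network name are all made up):

```yaml
# docker-compose.yml fragment - nothing gets routed unless you add these yourself
services:
  myapp:
    image: myapp:latest
    networks:
      - traefik            # must share a network with the Traefik container
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`myapp.example.lan`)"

networks:
  traefik:
    external: true
```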
Are you running this on your local network? If so, then unless you forward a port to your server on the port your reverse proxy is serving from, it’ll only be accessible from the local network. This means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirm that it’s working as expected before forwarding the port.
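One way to do that direct test from another machine on the LAN, before touching your router (the IP, port, and hostname below are placeholders):

```
# hit the proxy directly and supply the Host header it routes on
curl -I -H "Host: myapp.example.lan" http://192.168.1.10:80/
```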
Paired with allowing people who own the original to upgrade for $10 (and I’m assuming something similar in the UK) when they’re charging $50 for the remaster if you don’t have the original, that makes sense. They’re just closing a loophole.
I’d much rather they double the existing game’s price than charge $25-$30 for the upgrade, or not offer an upgrade path at all.
It sucks for anyone who’d been planning to play the original and who just hadn’t bought it yet, but used prices for discs should still be low, so only the subset of those people who have disc-less machines are really impacted.
I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.
After replacing all of the drives, there is something you’ll need to do to tell it to use their full capacity. From reading an answer to this post, it looks like you’ll need to select “Change RAID Mode,” keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacities.
upper capacity
There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.
procedure
You could also clone them yourself, but you’d want to put the NAS into read only mode or take it offline first.
I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.
Basically, what I’d do is:
In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at about 60 MB/s, meaning it’d take about 9 hours per clone. StarTech makes a similar device ($96 on Amazon) that allegedly clones at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.
Also, if you bought two offline cloning devices, you could do steps 1-3 and 4-6 simultaneously, and do the same again with steps 7-8.
I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.
Which system(s) are you playing on?
Good to know! I saw that mentioned in some (apparently outdated) Comodo marketing copy as a benefit over LE.
EV certs give you an extra green bar or something along those lines. If your customers care about that, then you have to get one. If they don’t - and they probably don’t - it’s a waste.
What exactly are you trusting a cert provider with and what are the security implications?
End users trust the cert provider. The cert provider has a process that they use to determine if they can trust you.
What attack vectors do you open yourself up to when trusting a certificate authority with your websites’ certificates?
You’re not really trusting them with your certificates. You don’t give them your private key or anything like that, and the certs are visible to anyone navigating to your website.
Your new vulnerabilities are basically limited to what you do for them - any changes you make to your domain’s DNS config, or anything you host, etc. - and depend on that introducing a vulnerability of its own. You also open a new phishing attack vector, where someone might contact you, posing as the certificate authority, and ask you to make a change that would introduce a vulnerability.
In what way could it benefit security and/or privacy to utilize a paid service?
For most use cases, as far as I know, it doesn’t.
LetsEncrypt doesn’t offer EV or OV certificates, which you may need for your use case. However, these are mostly relevant at the enterprise level. Maybe you have a storefront and want an EV cert?
LetsEncrypt also only offers community support, and if you set something up wrong you could be less secure.
Other CAs may offer services that enhance privacy and security, as well, like scanning your site to confirm your config is sound… but the core offering isn’t really going to be different (aside from LE having intentionally short renewal periods), and theoretically you could get those same services from a different vendor.
You can get wildcard certs with LetsEncrypt (since 2018): https://community.letsencrypt.org/t/acme-v2-production-environment-wildcards/55578
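For example, with certbot and a DNS-01 challenge (the domain is a placeholder; in practice you’d usually use a DNS plugin instead of --manual so renewals can be automated):

```
# illustrative only - wildcard certs require a DNS challenge
certbot certonly --manual --preferred-challenges dns \
  -d "*.example.com" -d example.com
```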
I use --format-sort +res:1080, which, if my understanding of the documentation is correct, will make it prefer 1080p, the smallest video larger than 1080p if 1080p isn’t available, or the largest video if nothing 1080p or larger is available.
res is the smallest dimension of the video (so for a 1080x1920 portrait video, it would be 1080).
The default sort is descending order; the + makes it sort in ascending order instead.
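Put together, a full invocation looks something like this (the URL is a placeholder; -S is just the short form of --format-sort):

```
# prefer 1080p, falling back per the sort behavior described above
yt-dlp -S "+res:1080" "https://www.youtube.com/watch?v=VIDEO_ID"
```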
Ah, you’re right - Trilium doesn’t use file-backed notes at all - it saves them in a database (I think SQLite, but I’m not positive).
Trilium supports writing notes in multiple formats, including Markdown.
Why? Do you not have a phone number? Is it blocked in your country? Are you legally prohibited from using software with end-to-end encryption?