• 2 Posts
  • 71 Comments
Joined 4 years ago
Cake day: January 21st, 2021

  • I switched to Immich recently and am very happy.

    1. Immich’s face detection is much better and very rarely fails, especially for non-white faces. But even for white faces PhotoPrism regularly needed me to review the unmatched faces. I also had to really turn up the “what is a face” threshold, because otherwise it would miss a ton of clear faces. (Then it only missed some, but also had tons of false positives.) Immich, on the other hand, just works.
    2. Immich’s UI is much nicer overall, with lots of small affordances. For example, the “view in timeline” menu item is worth switching for on its own. Also, good riddance to PhotoPrism’s persistent and buggy selection; someone must have worked really hard on implementing it, but it was just a bad idea.
    3. Immich has an app with uploading, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn’t find a good Android app for uploading to PhotoPrism. You could set up import delays and such, but you would still regularly get partially uploaded files imported and have to clean them up manually.
    4. Immich’s search by content is much better. For example searching for “cat with red and yellow ball” was useless on PhotoPrism, but I found tons of the results I was looking for on Immich.

    The bad:

    1. There is currently terrible jank in the Immich app that makes videos unusable and everything else painful. Apparently this is due to an album sync process running on the main thread; they are working on it. I can’t fathom how a few hundred albums causes this much lag, but 🤷. There is also even worse lag on the location view page, but at least that is just one page.
    2. The Immich app has a lot fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
    3. I liked PhotoPrism’s advanced filters. They were very limited but at least they were there.
    4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken and being able to order by date would be hugely helpful.
    5. You have to eagerly transcode all videos; there is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story wasn’t great either, because you had to wait for the full video to be transcoded before playback started, leading to a huge delay for anything longer than a few seconds, but at least I could save a few hundred gigs of disk space.

    Honestly, a lot of the features in PhotoPrism feel like one developer has a weird workflow and optimized the tool for it. Most of them run counter to what I actually want to do (like automatic title and description generation, the review stuff, or the auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)


  • Most Intel GPUs are great at transcoding: reliable, widely supported, and they deliver quite a bit of transcoding power for very little electrical power.

    I think the main thing I would check is what formats are supported. If the other GPU can support newer formats like AV1 it may be worth it (if you want to store your videos in these more efficient formats or you have clients who can consume these formats and will appreciate the reduced bandwidth).

    But overall I would say that if you aren’t having any problems there is no need to bother. The onboard graphics are simple and efficient.
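
    If you want to see what your current GPU can actually handle before deciding, a rough sketch like the following works on Linux with VA-API. It assumes vainfo is installed, and the codec names it checks for are just the common ones:

    ```python
    import subprocess

    # Ask the VA-API driver which codec profiles and entrypoints it exposes.
    # VLD entrypoints are decode; Enc* entrypoints are encode.
    out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout

    profiles = [line.strip() for line in out.splitlines() if "VAProfile" in line]

    for codec in ("AV1", "HEVC", "H264"):
        decode = any(codec in p and "VLD" in p for p in profiles)
        encode = any(codec in p and "Enc" in p for p in profiles)
        print(f"{codec}: decode={decode} encode={encode}")
    ```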


  • The concern is that it would be nice if the UNIX users and LDAP were automatically in sync and managed from a version-controlled source. I guess the answer is to just build a static LDAP database from my existing configs. It would be nice to have one authoritative system on the server, but as long as they are both built from one source of truth it shouldn’t be an issue.
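
    As a rough sketch of what building that static database could look like, something like this generates LDIF entries from the host’s existing UNIX accounts. The base DN and the UID cutoff are made-up placeholders for whatever your directory actually uses:

    ```python
    import pwd

    BASE_DN = "ou=people,dc=example,dc=com"  # placeholder; use your real base DN

    # Emit one LDIF entry per regular (non-system) UNIX account so the
    # directory is rebuilt from the same source of truth as the host.
    for user in pwd.getpwall():
        if user.pw_uid < 1000:  # skip system accounts; cutoff is distro-dependent
            continue
        print(f"dn: uid={user.pw_name},{BASE_DN}")
        print("objectClass: inetOrgPerson")
        print("objectClass: posixAccount")
        print(f"uid: {user.pw_name}")
        print(f"cn: {user.pw_gecos or user.pw_name}")
        print(f"sn: {user.pw_name}")
        print(f"uidNumber: {user.pw_uid}")
        print(f"gidNumber: {user.pw_gid}")
        print(f"homeDirectory: {user.pw_dir}")
        print(f"loginShell: {user.pw_shell}")
        print()
    ```

    The output could then be loaded with ldapadd (or baked into the server’s config) as part of the same deployment that manages the UNIX users.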


  • Yes, LDAP is a general tool, but many of the applications I am interested in can use it for user information, and that is what I want to use it for. I’m not really interested in storing other data.

    I think you are sort of missing the goal of the question. I have a bunch of self-hosted services like Jellyfin, qBittorrent, PhotoPrism, Metabase … I want to avoid having to configure users in each one individually. I am considering LDAP because it is supported by many of these services. I’m not concerned about synchronizing UNIX users, I already have that solved. (If I need to move those to LDAP as well that can be considered, but isn’t a goal).


  • I do use a reverse proxy, but for various reasons some apps can’t just be locked behind it. For example, Jellyfin if you want to cast to a Chromecast or similar, or PhotoPrism if you want to use sharing links. Unfortunately these systems are designed around their built-in auth and you can’t just slap a proxy in front.

    I do use nginx with basic auth in front of services where I can. I trust nginx much more than 10 different services of varying quality. But unfortunately not all services play well with it.
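
    For reference, this is roughly the shape of that setup. It is only a minimal sketch: the hostname, port, and file paths are made up, and TLS is left out:

    ```nginx
    server {
        listen 80;
        server_name torrent.example.com;  # hypothetical hostname

        location / {
            # nginx checks credentials before anything reaches the backend.
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/htpasswd;  # created with htpasswd -c
            proxy_pass http://127.0.0.1:8080;          # the service's local port
        }
    }
    ```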


  • How are you configuring this? I checked for Jellyfin and there are third-party plugins which don’t look too mature, and none of them seem to work with the apps. qBittorrent doesn’t support much (actually, I may be able to put reverse-proxy auth in front… I’ll look into that) and Metabase locks SSO behind a premium subscription.

    IDK why, but it does seem that LDAP is much more widely supported. Or am I missing some way to make these work?


  • This is my dream. I think my target market is smaller and less willing to pay (personal rather than business), but maintenance is low effort and I want the product for myself. So even if it doesn’t make much, or anything, I think I will be happy to run it forever.

    The ultimate dream would be to make enough to be able to employ someone else part time, so that there could be business continuity if I wasn’t able to run it anymore.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker · 3 months ago

    There is definitely isolation. In theory (if containers worked perfectly as intended) a container can’t see any processes from the host, sees different filesystems, possibly a different network interface, and so on for basically everything else. Some things are shared, like CPU, memory, and disk space, but those can also be limited by the host.

    But yes, in practice the Linux kernel is wildly complex and these interfaces don’t work quite as well as intended. You get bugs in permission checks and even memory corruption and code execution vulnerabilities. This results in unintended ways for code to break out of containers.

    So in theory the isolation is quite strong, but in practice you shouldn’t rely on it for security critical isolation.
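
    As a small illustration of the “shared but limited” part: a process inside a container can read the limits the host has imposed on it straight from the cgroup filesystem. This rough sketch assumes cgroup v2 mounted at the usual path:

    ```python
    from pathlib import Path

    # cgroup v2 exposes the container's resource limits as plain files.
    CGROUP = Path("/sys/fs/cgroup")

    def read_limit(name: str) -> str:
        path = CGROUP / name
        return path.read_text().strip() if path.exists() else "unavailable"

    print("memory limit:", read_limit("memory.max"))  # "max" means unlimited
    print("cpu quota:   ", read_limit("cpu.max"))     # "<quota> <period>" in microseconds
    print("pids limit:  ", read_limit("pids.max"))
    ```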


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker · 3 months ago

    “where you have decent trust in the software you’re running.”

    I generally say that containers and traditional UNIX users are good enough isolation for “mostly trusted” software: basically, software that I know isn’t going to actively try to escalate its privileges, but which may contain bugs that would cause problems without any isolation.

    Of course it always depends on your risk. If you are handling sensitive user data and run lots of different services on the same host, you may start to worry about remote code execution vulnerabilities and want stronger isolation, so that an RCE in any one service doesn’t give access to all of the data being processed by other services on the host.



  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker · 4 months ago

    The Linux kernel is less secure for running untrusted software than a VM because most hypervisors have a far smaller attack surface.

    “how many serious organization destroying vulnerabilities have there been? It is pretty solid.”

    The CVEs beg to differ. The reason most organizations don’t get destroyed is that they don’t run untrusted software on the same kernels that process their sensitive information.

    “whatever proprietary software thing you think is best”

    This is a ridiculous attack. I never suggested anything about proprietary software. Linux’s KVM is pretty great.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker · 4 months ago

    I think assuming that you are safe because you aren’t aware of any vulnerabilities is bad security practice.

    Minimizing your attack surface is critical. Defense in depth is just one way to minimize your attack surface (but a very effective one). Putting your container inside a VM is excellent defense in depth. Running your container as a non-root user barely is, because you still have a Linux-kernel-sized hole in your Swiss-cheese defense model.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world · Security and docker · 4 months ago

    I never said it was trivial to escape; I just said it isn’t a strong security boundary. Nothing is black and white. Docker isn’t going to stop a resourceful attacker, but you may not need to worry about attackers who are willing to spend >$100k on a 0-day vulnerability.

    “The Linux kernel isn’t easy to exploit as if it was it wouldn’t be used so heavily in security sensitive environments”

    If any “security sensitive” environment is relying on Linux kernel isolation I don’t think they are taking their sensitivity very seriously. The most security sensitive environments I am aware of doing this are shared hosting providers. Personally I wouldn’t rely on them to host anything particularly sensitive. But everyone’s risk tolerance is different.

    “use podman with a dedicated user for sandboxing”

    This is only ever so slightly better. Users have existed in the kernel for a very long time, so bugs there may be harder to find, but at the end of the day the Linux kernel is just too complex to provide strong isolation.

    “There isn’t any way to break out of a properly configured docker container right now but if there were it would mean that an attacker has root”

    I would bet $1k that within 5 years we find out that this is false. Obviously all of the publicly known vulnerabilities have been patched. But more are found all of the time. For hobbyist use this is probably fine, but you should acknowledge the risk. There are almost certainly full kernel-privilege code execution vulnerabilities in the current Linux kernel, and it is very likely that at least one of these is privately known.