There was a post recently about someone getting overwhelmed by 15 containers, and people not wanting to turn it into a container measuring contest.

But now I am curious: what are your counts? I would guess those of you running k8s would win out through pod scaling.

docker ps -q | wc -l

For those wanting a quick count (the -q lists only container IDs, so the header line doesn't inflate the number).
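
If you count with podman or Kubernetes instead, rough equivalents (assuming kubectl is already pointed at your cluster):

podman ps -q | wc -l
kubectl get pods --all-namespaces --no-headers | wc -l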

  • ℍ𝕂-𝟞𝟝@sopuli.xyz · ↑10 · 7 days ago

    I know using work as an example is cheating, but it's anywhere from 1400-1500 to 5000-6000 depending on load throughout the day.

    At home it’s 12.

    • slazer2au@lemmy.world (OP) · ↑7 · 7 days ago

      I was watching a video yesterday where an org was churning through 30K containers a day because they didn’t profile their application correctly and scaled their containers based on a misunderstanding of how Linux handles CPU scheduling.
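
      (For the curious: the usual form of that mistake is treating a CPU limit like a dedicated core. A quick way to see what a limit actually sets, assuming a cgroup v2 host; the 0.5 is just an example value:)

      # Cap a container at half a CPU and print the cgroup limit it receives.
      # "50000 100000" means 50ms of CPU time per 100ms period (CFS quota/period),
      # i.e. the container gets throttled, not pinned to half a core.
      docker run --rm --cpus="0.5" alpine cat /sys/fs/cgroup/cpu.max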

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · ↑5 · 7 days ago

        Yeah that shit is more common than people think.

        A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.

        There was also a big pile of useless bullshit I’ve had to cut down since being hired at my current spot, but the number of containers is actually warranted. We really do have that traffic, which is both happy and sad: business is booming, but I’m the one who has to deal with this.

  • mogethin0@discuss.online · ↑4 · 6 days ago

    I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.
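
    (If anyone else is doing the same audit, the standard Docker housekeeping commands help:)

    docker ps -a --filter status=exited   # list the stopped containers
    docker system prune                   # remove stopped containers, dangling images, unused networks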

  • kaedon@slrpnk.net · ↑3 · 7 days ago

    12 LXCs and 2 VMs on Proxmox. Big fan of managing all the backups with the web UI (it’s very easy to back up to my NAS), and the helper scripts are pretty nice too. Nothing on Docker right now, although I used to have a couple in a Portainer LXC.
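
    (Those backups can also be scripted with Proxmox’s vzdump if you ever want them outside the UI; the container ID and storage name below are placeholders:)

    # Snapshot-mode backup of container 101 to a storage named "nas"
    vzdump 101 --storage nas --mode snapshot --compress zstd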

  • mlody@lemmy.world · ↑3 · 7 days ago

    I don’t use them. I’m running OpenBSD on my server, which doesn’t support containers.

  • Nico198X@europe.pub · ↑3 · 7 days ago

    13 with podman on openSUSE MicroOS.

    I used to have a few more, but I wasn’t using them enough so I cut them.

  • Itdidnttrickledown@lemmy.world · ↑6 ↓4 · 7 days ago

    None. I run my services the way they are meant to be run. There is no point in containers for a small setup. It’s kinda lazy, and you miss out on learning how to install them.

    • SpatchyIsOnline@lemmy.world · ↑1 · 5 days ago

      Small setups can very easily turn into large setups without you noticing.

      The only bare-metal setup I’d trust to be scalable is Nix flakes (which I’m actually very interested in migrating to at some point).
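
      (For anyone unfamiliar, the flakes workflow amounts to describing the whole machine in a flake.nix and rebuilding from it; "myhost" below is a hypothetical host name:)

      nix flake init                          # scaffold a flake.nix in the current directory
      nixos-rebuild switch --flake .#myhost   # build and activate the configuration for "myhost"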

      • Itdidnttrickledown@lemmy.world · ↑1 ↓1 · 5 days ago

        I’ve never even heard of Nix flakes before today. It looks like another solution in search of a problem. I trust Debian, and I trust bare metal more than any container setup. I run multiple services on one machine; I currently have two machines running all my services. No problems and no downtime other than a weekly update and reload, all crontabbed, all automatic.
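
        (For illustration, a weekly root cron entry along those lines; the timing and exact commands are assumptions:)

        # Every Sunday at 04:00: refresh package lists, upgrade, then reboot
        0 4 * * 0  apt-get update && apt-get -y dist-upgrade && systemctl reboot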

        At work I have multiple services all running in KVM, including some Windows domain controllers. Also no problems, and the weekly full backups are worry-free, only requiring me to check them for consistency.

        In short, as much as people try to push containers, they are only useful if you are dealing with more than a few services. No home setup should be that large unless someone is hosting for others.

        • SpatchyIsOnline@lemmy.world · ↑1 · 5 days ago

          I disagree that Nix is a solution in search of a problem; in fact, it solves arguably the two biggest problems in software deployment: dependency hell and reproducibility (i.e. the “it works on my machine” problem).

          Every package gets access to the exact version of every dependency it needs (without the needless duplication Flatpaks would have), and sharing a flake with another machine means you can replicate that exact setup and guarantee it will be exactly the same.
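
          (Concretely, the pinning lives in flake.lock; assuming flakes are enabled, you can inspect and refresh it with:)

          nix flake metadata   # show the flake's inputs and the exact revisions locked in flake.lock
          nix flake update     # re-pin all inputs to their latest revisions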

          Containers try to solve the same problems, and succeed to a somewhat decent extent, although with some overhead of course.

          I’m not trying to criticize you or your setup at all; if Debian alone works for you, that’s fine. The beauty of open source and self hosting is that we can use whatever tools we want, however we want. I do think, though, that it’s good practice to be aware of what alternatives are out there, should our needs change or should our tools no longer align with them.

          • Itdidnttrickledown@lemmy.world · ↑1 ↓1 · 5 days ago

            All containers do that. It’s nothing new, just another implementation of the idea with its own opinions about what’s best. It only saves resources, in the form of time, at large scale, and it’s just the latest in a long line of similar solutions.

  • dieTasse@feddit.org · ↑2 · 7 days ago

    I have about 15 TrueNAS apps, only 2 of them custom (Endurain and MollySocket). They are containers, but very low-effort, handled mostly by the system. I also have 3 LXCs and 2 VMs (Home Assistant and OpenWrt). I spend only a few minutes a week on maintenance, and then I tinker for several hours a week, testing new apps or enhancing the configs of current ones.

  • tomjuggler@lemmy.world · ↑2 · 7 days ago

    3 that I’m actually using, on my “Home Server” (a Raspberry Pi).

    One day I will migrate the work stuff on the VPS over to Docker, and then we’ll see who has the most!

  • K-Money@lemmy.kmoneyserver.com · ↑24 · 8 days ago

    140 running containers and 33 stopped (which I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:

    • 118 Auto-updates (low chance of a breaking update, or a non-critical service that only I would notice if it breaks)
    • 55 Manual-updates (either it’s family-facing, e.g. Jellyfin; or it has a high chance of breaking updates; or it updates so infrequently that I want to know when that happens; or it’s something I want particular control over, like what time it updates, e.g. Jellyfin when nobody’s in the middle of watching something)

    I subscribe to all their GitHub release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself, and I skim those release notes just to stay aware of any surprises. Manual usually has 1-5 releases each day, so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.
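
    (This works because GitHub publishes an Atom feed for every repo’s releases page, e.g. for Jellyfin:)

    https://github.com/jellyfin/jellyfin/releases.atom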

    Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.