• 0 Posts
  • 50 Comments
Joined 1 year ago
Cake day: June 15th, 2023



  • It might be worth taking a step back and looking at your objective with all of this and why you are doing it in the first place.

    If it’s for privacy, then unfortunately that ship has sailed when it comes to email. It’s the digital equivalent of a postcard: inherently not private, and nothing you do will make it private. Even services like Proton Mail aren’t private unless you only email other people on Proton.

    I appreciate wanting to control your own destiny with it, but there are much more productive things you could spend your time on to improve your privacy.



  • cybersandwich@lemmy.world to Selfhosted@lemmy.world · *Permanently Deleted* · edited · 2 months ago

    A GPU with a ton of VRAM is what you need, BUT

    An alternate solution is something like a Mac mini with an M-series chip and 16GB of unified memory. The neural cores on Apple silicon are actually pretty impressive, and since it’s unified memory, the models have access to whatever the system has.

    I only mention it because a Mac mini might be cheaper than a GPU with tons of VRAM by a couple hundred bucks.

    And it will sip power comparatively.

    4090 with 24GB of VRAM: $1900
    M2 Mac mini with 24GB of unified memory: $1000
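
    For a rough sense of what actually fits in that memory, here’s a back-of-the-envelope sketch (plain Python; the numbers are approximations, not vendor specs) of how much memory a model needs at a given parameter count and quantization:

    ```python
    # Rough LLM memory estimate: parameters * bytes-per-weight,
    # plus ~20% overhead for KV cache and activations. Approximate only.

    def model_memory_gb(params_billions: float, bits_per_weight: int = 4,
                        overhead: float = 1.2) -> float:
        total_bytes = params_billions * 1e9 * (bits_per_weight / 8)
        return total_bytes * overhead / 1024**3

    for params, bits in [(7, 16), (7, 4), (13, 4), (70, 4)]:
        print(f"{params}B @ {bits}-bit: ~{model_memory_gb(params, bits):.1f} GB")

    # 7B  @ 16-bit: ~15.6 GB -> tight even on a 24GB card
    # 7B  @  4-bit: ~3.9 GB  -> fine on 16GB unified memory
    # 13B @  4-bit: ~7.3 GB
    # 70B @  4-bit: ~39.1 GB -> beyond both options above
    ```

    The takeaway: a 4-bit-quantized 7B-13B model fits comfortably in 16-24GB of unified memory, which is what makes the Mac mini a viable budget option.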


  • Like he was saying, it’s more than just power loss. A UPS also “sanitizes” the power as it comes in. This is usually not a problem, but dirty power is arguably worse than power outages: if the voltage fluctuates or sags for whatever reason, that puts a big strain on your power supplies.

    This could happen because you run a vacuum on the same circuit and your house is old, the guy down the street electrocutes himself, or the power coming in from the electric company is “dirty” because they have an issue with transformers or something else upstream. It can be imperceptible to you, but your tech notices.
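
    If you run a UPS, most of them will actually show you the incoming voltage. A minimal sketch that polls it, assuming a UPS managed by Network UPS Tools (the “myups” name and the 120V nominal range are placeholders for your own setup):

    ```python
    # Sketch: poll UPS input voltage via NUT's `upsc` CLI and flag
    # out-of-range readings. Assumes NUT is installed and a UPS is
    # configured under the (hypothetical) name "myups".
    import subprocess
    import time

    UPS_NAME = "myups"  # placeholder NUT ups name

    def read_input_voltage(ups: str) -> float:
        out = subprocess.run(["upsc", ups, "input.voltage"],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    while True:
        volts = read_input_voltage(UPS_NAME)
        if not 108.0 <= volts <= 132.0:  # +/-10% of 120V nominal (US)
            print(f"dirty power: input at {volts:.1f} V")
        time.sleep(10)
    ```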




  • cybersandwich@lemmy.world to Selfhosted@lemmy.world · Post your Servernames! · 5 months ago

    “rocinante” for my Proxmox host.

    “awkward, past his prime, and engaged in a task beyond his capacities.” From Don Quixote’s wiki page.

    It seemed fitting considering it is a server built from old PC parts…engaged in tasks beyond its abilities.

    The rest of my servers (VMs mostly) are named for what they actually do / which VLAN they are on (e.g. vm15), and aren’t fun or exciting names. But at least I know that if I am on that VM, it has access to that VLAN (or that it’s segregated from my other networks).



  • cybersandwich@lemmy.world to Hardware@lemmy.ml · *Permanently Deleted* · 5 months ago

    I totally believe you can hit the RAM limit on these. I was just saying I’ve surprisingly managed to be fine with 8GB.

    Android emulators are notoriously memory hungry and there are certain tasks that just flat out require more ram regardless of how well it’s managed.

    The advice I heard about these a while back is: if you know 8GB isn’t enough for you, then you aren’t the market segment they are targeting with the basic models.

    That said, no “pro” model should come with just 8GB. It just waters down what “pro” means.


  • cybersandwich@lemmy.world to Hardware@lemmy.ml · *Permanently Deleted* · edited · 5 months ago

    I’ve been using a MacBook Air with 8GB of RAM since they came out. It was on sale at Costco and I had a gift card. I think I paid $500 out of pocket.

    I was worried that 8GB would limit me but it was the one on sale so I rolled with it. I can say that after several years, the only time it’s limited me was when I tried to run an AI model that was 8GB. Obviously, that becomes an issue at that point.

    But for all I do with my Air, including creating a 1GB ramdisk for a script/automation ML job I run, I have never felt limited by the RAM.
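
    The ramdisk itself is just stock macOS tooling. A rough sketch of the equivalent in Python (the volume name is arbitrary; 1 GiB shown, since hdiutil counts 512-byte sectors):

    ```python
    # Sketch: create and mount a 1 GiB ramdisk on macOS using the
    # built-in hdiutil/diskutil commands.
    import subprocess

    SECTORS = (1 * 1024**3) // 512  # 1 GiB expressed in 512-byte sectors

    # Allocate the ram-backed device (prints something like /dev/disk4)
    dev = subprocess.run(
        ["hdiutil", "attach", "-nomount", f"ram://{SECTORS}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # Format it and mount at /Volumes/RAMDisk
    subprocess.run(["diskutil", "erasevolume", "HFS+", "RAMDisk", dev],
                   check=True)
    print(f"ramdisk mounted at /Volumes/RAMDisk ({dev})")
    ```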

    I open a bajillion Firefox tabs and never close windows, etc. It’s an Air, after all, not a workstation substitute, so my use cases aren’t overly taxing in the grand scheme of things. I’m not editing 4K video or doing rendering with it. But RAM hasn’t been an issue outside of the AI workload on the 8GB model, and tbh that’s only an issue because of the ML cores: they absolutely scream vs the 1080 Ti that’s in my server. My M1 with 8GB of RAM runs circles around my 24-core, 128GB-RAM server with a 1080 Ti.

    I did just get a MacBook Pro for work, for which I requested 128GB of RAM. But that’s because I wanted it for bigger AI models (and work is paying, not me).




  • There are some dumb responses in this thread. Lots of misplaced vitriol at Texas and farmers.

    You want to have people report this stuff? Don’t act like dickheads when they do.

    Stuff like this happens from time to time in agriculture. The UK has issues with TB in dairy cows, which requires them to cull herds. It’s really shitty and unfortunate, but this type of thing has happened for millennia.

    It’s better that they report it so we can address it and find ways to prevent it happening in the future.

    And unless everyone is willing to go 100% vegan tomorrow, we need farmers, livestock, and the like to keep our meat and dairy supply flowing.

    Edit:

    I also want to point out that it doesn’t seem like they definitively determined it came from the cows, only that he was “linked” and “exposed” to infected cows.

    “Genetic tests don’t suggest that the virus suddenly is spreading more easily or that it is causing more severe illness, Shah said. And current antiviral medications still seem to work, he added.”

    So this guy could have gotten it from the same bird the cows got it from as well. A dozen other people were tested and none came back positive.

    All other cases we’ve seen have come from bird contact. So there is a reasonable chance this guy got it from an infected bird without realizing it.

    Also, none of the cows have died (dunno if that’s a good thing or bad thing).



  • I don’t have nearly that much worth backing up (5TB, and realistically only 2TB is probably critical), but I have a Synology NAS (12TB RAID 1) and a TrueNAS box (ZFS striped/mirrored) that I back my stuff up to (and they back up to each other).

    Then I have a Raspberry Pi with a USB drive (8TB) at my parents’ house four hours away that my Synology backs up to (over Tailscale).

    Oh, and I have a USB HDD (8TB) that I plug in, back up my Synology NAS to, and throw in my fireproof safe. But that’s a manual backup I do once every quarter, or every 6 months if I remember. That’s a very, very last-resort backup.

    My offsite backup is at my parents’.

    And no, I have not tested it because I don’t know how I’m actually supposed to do that.
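
    (For anyone with the same question: one low-effort way to test is to restore a sample of files somewhere temporary and checksum them against the originals. A rough sketch, with placeholder paths:)

    ```python
    # Sketch: spot-check a backup by comparing checksums of a random
    # sample of restored files against the live originals.
    # Both paths below are hypothetical placeholders.
    import hashlib
    import random
    from pathlib import Path

    SOURCE = Path("/volume1/data")       # placeholder: live data
    RESTORE = Path("/tmp/restore-test")  # placeholder: files restored from backup

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    files = [p for p in SOURCE.rglob("*") if p.is_file()]
    for original in random.sample(files, min(25, len(files))):
        restored = RESTORE / original.relative_to(SOURCE)
        if not restored.exists():
            print(f"MISSING from restore: {original}")
        elif sha256(original) != sha256(restored):
            print(f"CHECKSUM MISMATCH: {original}")
    print("spot check complete")
    ```

    If the sample checks out, you at least know the backup contains readable, intact copies of what you think it does.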




  • Breaking things is the best way to learn. Accidentally deleting your container data is one of the best ways to learn how to not do that AND learn about proper backups.

    Breaking things and then trying to restore from a backup that… doesn’t work is a great way to learn about testing backups and/or configuring them properly.

    The corollary to this is: just do stuff. Analysis paralysis is real. You can look up a dozen “right ways” to do things and end up never starting.

    My advice: just start. If you end up backing yourself into a corner where you can’t scale or easily migrate to another solution, oh well. You either learn that lesson or figure out a way to migrate. Learning all along the way.

    Each failure or screw-up is worth a hundred “best practice” / “how-to” articles.