I’d like to avoid building a completely new server just to run a GPU for AI workloads. Currently, everything is running on my laptop, except its dated CPU only really works well with smaller models.

Now I have an Nvidia M40. Could I possibly get it working over Thunderbolt with an eGPU enclosure or something? Note: it’s on Linux.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 7 days ago

If you’ve got a Thunderbolt port on your laptop and a Thunderbolt eGPU dock, then there’s no reason why it shouldn’t work.

I’m not familiar with Thunderbolt on Linux, but on Windows you plug it in and it just works™️ and shows up as if it were inside your machine. Your DE on Linux might handle it automatically, but if you’re command-line only you’ll probably have to authorize the device first.
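    On most Linux distros the `bolt` daemon manages Thunderbolt authorization. A minimal sketch of the command-line step (the UUID here is a placeholder; get the real one from `boltctl list`):

    ```shell
    #!/bin/sh
    # Sketch: authorize a Thunderbolt eGPU enclosure via the bolt daemon.
    # "example-uuid" is a placeholder for the device UUID shown by `boltctl list`.
    UUID="example-uuid"

    if command -v boltctl >/dev/null 2>&1; then
        boltctl list              # show connected and stored Thunderbolt devices
        boltctl enroll "$UUID"    # authorize now and on every future plug-in
    else
        echo "bolt not installed; look for a 'bolt' package in your distro"
    fi
    ```

    Once the device is enrolled, the GPU should appear on the PCIe bus like an internal card.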

    • Boomkop3@reddthat.com (OP) · 6 days ago

      I did some more searching, and found that NVMe (M.2) to PCIe adapters are affordable. That’s going to look a bit janky, but fortunately I don’t care.

      Thank you again for the suggestion!
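      Once the adapter is wired up, it’s easy to sanity-check that the card enumerates. A sketch (the `lspci` line below is a canned sample for illustration; on the real machine you’d filter live `lspci` output the same way):

      ```shell
      #!/bin/sh
      # Sample lspci line for illustration only; real output comes from `lspci`.
      sample="02:00.0 3D controller: NVIDIA Corporation GM200GL [Tesla M40]"

      # Filter for the NVIDIA device, as you would on live output:
      echo "$sample" | grep -i "nvidia"

      # On the real machine:
      #   lspci | grep -i nvidia   # card is visible on the PCIe bus
      #   nvidia-smi               # proprietary driver sees the card
      ```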

      • hendrik@palaver.p3x.de · edited · 7 days ago

        I did some quick googling. Are those Thunderbolt docks really $350? That’s like half the price of a cheap computer?!

          • hendrik@palaver.p3x.de · edited · 7 days ago

            Maybe you should do the maths on other options. You could get a refurbished PC for $350. Or buy the dock anyways. Or spend the money on cloud compute if you’re just occasionally using AI. Idk.

            • Boomkop3@reddthat.com (OP) · 6 days ago

              I did not say occasionally. We use AI a lot. Currently it’s mostly for image indexing, recognition, object detection, and audio transcription. But we’re planning to expand to more, and we’d like to use more accurate models.