

They haven't released one for the razor I have, but honestly I might try modeling them myself. Doesn't seem impossible, and I've been wanting a deeper comb than they sell.
Yup. Even for technical writing, markdown with embedded LaTeX is great in most cases, thanks largely to Pandoc and its ability to convert the markdown into pure LaTeX. There are even manuscript-focused Markdown editors, like Zettlr.
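If anyone wants to see how simple the conversion step is, here's a minimal sketch in Python that just shells out to Pandoc (assuming pandoc is on your PATH; the filenames are placeholders):

```python
import subprocess

# Convert a Markdown manuscript (embedded LaTeX math passes through)
# into a standalone LaTeX document. Filenames are placeholders.
subprocess.run(
    ["pandoc", "manuscript.md", "--standalone", "-o", "manuscript.tex"],
    check=True,  # raise if pandoc exits with an error
)
```

In practice you'd usually just run the pandoc command directly in a terminal; the wrapper is only there to make the invocation concrete.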
Maybe the graph mode of logseq?
I'm not somebody who knows a lot about this stuff, as I'm a bit of an AI Luddite, but I know just enough to answer this!
“Tokens” are essentially just a unit of work – instead of interacting directly with the user’s input, the model first “tokenizes” the user’s input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.
I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.
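To make that concrete, here's a tiny sketch using OpenAI's tiktoken library (one tokenizer among many; other models use their own, but the round trip looks the same):

```python
import tiktoken  # pip install tiktoken

# Tokenize text the way a GPT-style model sees it.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Tokens are just units of work.")

print(tokens)              # a list of integer token IDs
print(len(tokens))         # how many "units of work" this sentence costs
print(enc.decode(tokens))  # expands back into the original text
```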
Agreed! I'm just not sure TOPS is the right metric for a CPU, given how different the CPU data pipeline is from a GPU's. Bubbly vs. clean instruction streams are one thing, but the dominant instruction type in a calculation also significantly affects how many instructions can run per clock cycle, whereas on matrix-optimized silicon it's a lot fairer to generalize over a bulk workload.
More broadly, I think it's fundamentally challenging to produce a single number that fairly represents CPU performance across different workloads.
I mean, sure, but GPU-style TOPS isn't that good a comparison for a CPU+GPU mixture. Most tasks can't be parallelized that well, so comparing TOPS between an APU and a TPU/GPU is not apples to apples (heh).
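The usual back-of-the-envelope way to see the parallelization point is Amdahl's law; here's a quick sketch (the 80% parallel fraction is just an illustrative number):

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Upper bound on speedup when only part of a task parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# Even with a million parallel units, an 80%-parallel task tops out
# around 5x - which is why raw TOPS numbers can mislead.
print(amdahl_speedup(0.8, 1_000_000))  # ~5.0
```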
I just came across the lines in the openSUSE 42 .bashrc for connecting to Palm Pilots today… what a flashback.
AntennaPod is better than it has any right to be – on a modern device, it’s super smooth.
Isn’t that going to be ruinously expensive to host an instance for? Video is expensive in terms of storage and bandwidth.
The 6800XT has sold above its MSRP for its entire lifecycle, and has been really hard to find for the last year or so. When I've seen it recently, it's been $700-900. Unfortunately, it really is just that good.
Yeah, Kobo does too. I assumed it was a proprietary flavor that was pretty locked down; is that not the case?
I vaguely remember there being a FOSS OS you can put on Kobos, can you also do that on Boox?
I mean, they do have a point: the API the game targets is DX11, so if it looks bad, it's (broadly) because of an issue in translating DX11 calls to Vulkan…
Level1 has looked at the B580 on Linux specifically: https://www.youtube.com/watch?v=Tv0o6505JAc
I think most of the issues with games not working should be the same between the Windows and Linux driver versions, and HardwareUnboxed has done some pretty exhaustive testing recently on the "maturity" of the drivers, checking a couple hundred games for obvious driver problems.
You’re allowed to like gimmicks!
What makes them a gimmick IMO is that they’re sold as “this will change your life and the way you work”, but really it’s just that a subset of the audience for the gimmickless product thinks they’re kinda neat.
Sure the threat model is different, I’m just saying it’s still a single point of failure.
I mean, yes, but currently they're all dependent on Windows, so it's less centralizing OSes and more changing what it's centralized on.
Yikes, that's almost as bad.
Well, I doubt they’ll release one for my clippers since they’re discontinued, so that inspired me to go ahead and model a variable-depth one for myself. Based on some of the comments here, I thickened the comb blades to make them print more easily.