I think it’s more the cloud being the issue here. Such an obvious and large and valuable target. Of course Microsoft also isn’t that secure historically.
Probably just so you don’t accidentally waste time unknowingly rereading a book.
It’s also the anti-commodity stuff IP has been enabling. If Hershey makes crap chocolate, there is little stopping you from buying Lindt, say. But if Microsoft makes a bad OS, there’s a lot stopping you from switching to Linux or whatever.
What’s worse is stuff like DRM and embedded computers getting into equipment where you could otherwise use any of a bevy of products. Think ink cartridges.
Then there are the secret formulas, like transmission fluid now, where say Honda’s manual tells you that you have to use Honda fluid to keep it working. Idk if it’s actually true, but I’m loath to run the 8k USD experiment with my transmission.
You’d think the government could mandate standards, but we don’t have anything like that.
Testing the Jellyfin photos thing out now. I don’t know if it’s working right, but when I first looked at it the issue was I thought it seemed very video focused. I guess otherwise I’m learning docker after all.
Fair enough, last time I tried docker, which was a long time ago, I had all sorts of issues with permissions and persistence. I guess it’s probably better now.
I don’t want a research project. I just was hoping there was an easy to use program to make the viewing better than samba shares. Maybe I just need a set of programs that will display thumbnails over samba.
Yes definitely. Many of my fellow NLP researchers would disagree with those researchers and philosophers (not sure why we should care about the latter’s opinions on LLMs).
I’m not sure what you’re saying here - do you mean you do or don’t think LLMs are “stochastic parrot”s?
In any case, the reason I would care about philosophers’ opinions on LLMs is mostly that LLMs are already making “the masses” think they’re potentially sentient and/or deserve personhood. What’s more concerning is that the academics who sort of define what thinking even is seem confused by LLMs, if you take the “stochastic parrot” POV. This eventually has real-world effects - it might take a decade or two, but these things spread.
I think this is a crazy idea right now, but I also think that going into the future eventually we’ll need to have something like a TNG “Measure of a Man” trial about some AI, and I’d want to get that sort of thing right.
Yea, that was a bad way to phrase it - I just meant that from what I’ve heard, tokens are very much not word-by-word. Sometimes a token is a couple of words, though maybe that was misinformation. And I was trying (and failing) to make an analogy to a human - a concept is a compression of what would otherwise be a bunch of words, though I really meant something more like a reference, I guess.
I think it’s very clear that this “stochastic parrot” idea is less and less accepted by researchers and philosophers - though maybe that’s only true in the podcasts I listen to…
It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt.
I think we need to be careful assuming we understand what human knowledge is, and careful about the connotations of the word “sense” there. If you mean GPT-4 doesn’t have knowledge the way humans do, like a car doesn’t have motion the way a human does, then I think we agree. But if you mean that GPT-4 cannot reason and access and present information - that’s just false on the face of it, IMO, from simply using the tool.
It’s also not quite true that it’s predicting words; it’s predicting tokens, which are more like concepts than words, so I’d argue that’s already closer to humans. And to the extent it is “just predicting stuff,” that really calls into question the value of most of the school essays it now writes so well…
Well, LLMs can and do provide feedback about confidence in colloquial terms. One thing we could do is have some idea of how good the training data is in a given situation - LLMs already seem to know they aren’t up to date and only know things up to a certain date. I don’t see why this couldn’t be expanded so they’d say something much like many humans would - i.e., “I think bla bla, but I only know very little about this topic,” or “I haven’t actually heard about this topic; my hunch would be bla bla.”
Presumably, as was said, other models with different data might have a stronger sense of certainty if their data covers the topic better, and the multi-cycle approach would be useful there.
Well, what you could do is run a DNS server so you don’t need to deal with IPs. You could likely also adjust whatever server’s port to 80 or 443, depending on whether you’re internal-only or need SSL. And something like ZeroTier won’t route your whole connection through your home internet if you set it up correctly - consider split tunneling. With something like ZeroTier, it’ll only route the ZeroTier network you create for your devices.
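As a sketch of the DNS idea (the hostname, IP, and interface name here are made-up assumptions, not anything from your actual setup), a couple of dnsmasq entries on a box inside the ZeroTier network might look like:

```
# /etc/dnsmasq.conf (hypothetical values)
address=/media.home.arpa/10.147.17.10   # map a friendly name to the server's ZeroTier IP
interface=zt0                           # only answer DNS queries on the ZeroTier interface
```

Then you’d point your clients’ DNS at that box (I believe ZeroTier can also push DNS settings per network) and browse to the friendly name instead of a raw IP.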
I’m still sad about the day the real Opera with the Presto rendering engine died. And while Vivaldi is getting many of the features and functionality back, it’s still a Chromium rebuild. I guess it just takes too much money to build your own rendering engine these days.
My only interaction with Substack is that one podcast moved there for premium content. I thought it was mostly for written newsletters, and I always wondered how big the market actually is for paying for a single newsletter - but then again, it’s basically the written version of podcasts, so I guess there is one. Though promoting Nazi content gives me a lot of pause.
Honestly, I never had a problem with Micro-USB and haven’t really seen a benefit to USB-C for basic charging of devices. I guess some might charge faster, but USB-C is so screwed up that you need a magic mix of cable, charger, and device to get more than baseline anyway; in practice it works the same as Micro-USB for me.
syncthing will work with pretty large amounts of data, unless you mean having the storage space on each device is the “won’t work” issue.
Noise doesn’t matter in a data center, which is where these switches live. The power use might be more than a 1 Gbit switch, but it’s in line with any dual-power enterprise switch, really.
I will have to see that. I would be concerned about pushing Cat5e that fast. I’m not sure about Cat6, but again, that speed isn’t fast enough to justify buying new cards for the computers - and if we were buying cards, 10G fiber cards are likely cost-competitive now that servers are dumping them as obsolete.
Yea, I think 2.5G is really searching for a market that may not exist. For home use, 1 Gbit is in general plenty fast, and maxes out most US customers’ Internet too. For enterprise use, 10G is common and cheap; the cost of a card to get an SFP+ port into any tower or server is really small. Enterprise is considering how to do 100G core cheaply enough, and looking for at least 25G on performance servers, if not 100G in some cases. If you’ve got the budget, you can roll 400G core right now at “not insane” pricing.
2.5G to the generic office (which might well be remote anyway) likely means re-wiring and is unnecessary. And that’s assuming you don’t find ac WiFi, i.e. sub-1G, sufficient.
I’m a big fan of CyberPower. If you want full remote management, buy one with the web control card; I’m pretty sure you can do anything through that. You should be able to get one in your price range.
For home use (and small uses at work) I’ve found CyberPower to be cheaper than APC and yet work just as well. You’d likely need to get a model with a network card option, and that’ll cost more, I think. I’m not in the EU though, so IDK what model would meet your needs and price point (which seems pretty low to me for a network-enabled UPS).