Completely agree.
25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)
Completely agree.
I have a separate modem from my WiFi, but I’d sell you mine that’s a couple of years old for $50 because I just upgraded to fiber a few months ago and it’s just sitting in my network corner. But if you want one with integrated WiFi, this isn’t it.
I asked ChatGPT for a tldr because same. The result reads like ad copy. Idk, man.
The memory packaging market is evolving with advancements like flip-chip, wire-bond, and through-silicon via (TSV) technologies. These innovations enable smaller, more powerful, and faster devices, particularly in smartphones, where efficient space use is crucial for sleek designs. DRAM, while still used in PCs, faces declining adoption due to its complexity and the rise of alternatives like 3D TSV, which offer better functionality. The APAC region, especially China, is leading the growth in memory packaging, driven by investments in assembly infrastructure and rising demand for mobile applications using system-in-package (SiP) technologies.
It’s been a minute since I’ve watched it, but as I recall she was just teleporting or something. The dark shadows were just a visual to get there. They telegraphed this a short while earlier. I guess I have to rewatch it since it’s in question, but at the time of watching it I felt it was clear that’s exactly what was going on.
I mean he knew she wasn’t turning into a shadow monster. He reacted because he thought she was killing Mae, and he could’ve said that, but it wouldn’t make it any better because he was arrogant and fearful of any use of the Force outside of the Jedi way. That’s not any kind of exoneration even if he had explained it.
They want a series to keep viewers around for more than a month, and I think they are trying to replicate the water cooler conversation piece that Game of Thrones was. I remember spending a few minutes each week discussing GoT with coworkers, driving everyone’s interest.
That being said, I just really don’t like shows where you feel you never know what’s going on until they put the pieces together for you in the last episode. I get it’s supposed to keep you intrigued and speculating, but mostly I just get angry that showrunners substitute mystery for caring about the characters.
Sol doesn’t clarify that she was turning into a shadow monster?
He realized that was wrong as soon as he killed her. Maybe he could’ve explained why he thought that, but it would’ve come across as making excuses. The rest I don’t disagree with.
She knows not to trust it. If the AI had suggested “God did it” or metaphysical bullshit I’d reevaluate. But I’m not sure how to even describe that to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren’t easy.
I mean I agree with you. It’s bullshit and untrustworthy. We have conversations about this. We have lots of conversations about it actually, because I caught her cheating at school using it so there’s a lot of supervision and talk about appropriate uses and not. And how we can inadvertently bias it by the questions we ask. It’s actually a great tool for learning skepticism.
But some things, a reasonable answer just to satisfy your brain is fine whether it’s right or not. I remember spending an entire year in chemistry learning absolute bullshit, only for the next year to be told that was all garbage and here’s how it really works. It’s fine.
I don’t buy into it, but it’s so quick and easy to get an answer, if it’s not something important I’m guilty of using LLM and calling it good enough.
There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on subject. If it doesn’t matter, and it isn’t easy to know if I’m getting bullshit from a website, LLM is good enough.
I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them because my daughter was curious. It said metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.
Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.
Fuck all that noise. Give me more Baldur’s Gate. I’m the biggest Star Wars fan and I haven’t bought a game since KotOR other than Survivor and the sequel. Because every time one catches my interest they start talking about all the cool DLC or things that are locked behind months or years of progression. I just won’t.
I’ve visited NY and Chicago, but I guess my digs were nice enough not to notice. And I used to live 75 minutes (assuming no traffic lol) from DC—far enough away that I didn’t have to deal with that kind of thing. Just like maybe some highway noise from far away.
I did once have a townhouse that had a rail track in the back yard, but I knew what I was getting in that case. It was only noisy when there was a train.
I don’t think I’d want to live anywhere it’s necessary to worry about sound reduction levels. Wow.
I have a very feminist outlook on things, but I enjoy some problematic things. I know it’s not very progressive of me, but it is what it is. I acknowledge they are problematic.
Which is just to say, I don’t like this. I understand, but I’m not a fan.
That’s hilarious. I love LLM, but it’s a tool not a product and everyone trying to make it a standalone thing is going to be sorely disappointed.
The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input. How is that deterministic?
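For context on where the randomness enters: it’s at the sampling step, after the model has produced its logits. A minimal toy sketch in Python (numpy, made-up numbers, not any particular model’s API):

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8, seed=None) -> int:
    """Draw the next token id from raw logits using temperature sampling."""
    rng = np.random.default_rng(seed)      # the seed is where the randomness lives
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy logits for a 5-token vocabulary (made-up numbers).
logits = np.array([2.0, 1.5, 0.3, -1.0, 0.0])
print(sample_next_token(logits, seed=42))  # same logits + same seed -> same pick
print(sample_next_token(logits))           # no seed -> can vary run to run
```

Given the same weights, prompt, and seed the draw repeats; whether that counts as deterministic in practice depends on whether you actually control the seed and the rest of the stack.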
The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, but it’s doubly impossible given you can’t.
Not exactly. My argument is that the more safety controls you build into the model, the less useful the model is at anything. The more you bend the responses away from true (whatever that is), the less of a tool you have.
Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.
Yeah I agree with that, but I’m saying protect people from the misuse of the tool. Don’t break the tool to the point where it’s worthless.
Again a biometric lock neither prevents immoral use nor allows moral use outside of its very narrow conditions. It’s effectively an amoral tool. It presumes anything you do with your gun will be moral and other uses are either immoral or unlikely enough to not bother worrying about.
AI has a lot of uses compared to a gun and just because someone has an idea for using it that is outside of the preconceived parameters doesn’t mean it should be presumed to be immoral and blocked.
Further, the biometric lock analogy falls apart when you consider that an LLM is a broad-scoped tool for use by everyone, while your personal weapon can be very narrowly scoped to you.
Consider a gun model that can only be fired by left-handed people because most gun crimes are committed by right-handed people. Yeah, you’re ostensibly preventing 90% of immoral use of the weapon, but at the cost of it no longer being a useful tool for most people.
I think I’ve said a lot in comments already and I’ll leave all that without relitigating it just for argument’s sake.
However, I wonder if I haven’t made clear that I’m drawing a distinction between the model that generates the raw output and the application that puts the model to use. I have an application that generates output via the OAI API and then scans both the prompt and the output to make sure they are appropriate for our particular use case.
Yes, my product is 100% censored and I think that’s fine. I don’t want the customer service bot (which I hate but that’s an argument for another day) at the airline to be my hot AI girlfriend. We have tools for doing this and they should be used.
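As a concrete sketch of the kind of tool I mean, assuming the openai Python SDK; passes_policy here is a stand-in for whatever check fits the product, not code from my actual application:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def passes_policy(text: str) -> bool:
    # Stand-in for an application-specific check: keyword rules, a
    # classifier, the moderation endpoint, whatever the product needs.
    return "girlfriend" not in text.lower()

def answer(prompt: str) -> str:
    # Screen the prompt before it ever reaches the model.
    if not passes_policy(prompt):
        return "Sorry, I can't help with that."
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Screen the raw model output before the customer ever sees it.
    return reply if passes_policy(reply) else "Sorry, let me get a human."
```

The model stays untouched; all of the censorship lives in the application layer where it belongs.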
But I think the models themselves shouldn’t be heavily steered because it interferes with the raw output and possibly prevents very useful cases.
So what I’m objecting to is fucking up the model itself in the name of safety. ChatGPT walks a fine line because it’s a product, not a model, but without access to the raw model it needs to be relatively unfiltered to be of use, otherwise other models will make better tools.
There are biometric-restricted guns that attempt to ensure only authorized users can fire them.
This doesn’t prevent an authorized user from committing murder. It would, though, prevent someone from looting it off of your corpse and returning fire at an attacker.
This is not a great analogy for AI, but it’s still effectively amoral anyway.
The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.
This is closer. Still not a great analogy for AI, but we can agree that outside of military and police action mass murder is more likely than any legitimate alternative use. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5 round mag.
I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.
you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible
This isn’t bad. We can currently use AI to examine the output of an AI to infer things about the nature of what is being asked and the output. It’s definitely effective in my experience. The trick is knowing what questions to ask about in the first place. But for example OAI has a tool for identifying violence, hate, sexuality, child sexuality, and I think a couple of others. This is promising; however, it is an external tool. I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it because it permits the use case we want (describing and adjudicating violent actions in a chat-based RPG) while still letting us filter out more intimate roleplaying actions.
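For what it’s worth, a minimal sketch of how that wiring can look, using the OpenAI moderation endpoint via the openai Python SDK; the category picks here are illustrative, not lifted verbatim from the project:

```python
from openai import OpenAI

client = OpenAI()

def allowed(text: str) -> bool:
    """Pass violent-but-permitted RPG narration, block intimate roleplay.

    The violence categories are deliberately ignored, since describing
    and adjudicating combat is exactly the use case we want to allow.
    """
    cats = client.moderations.create(input=text).results[0].categories
    return not (cats.sexual or cats.sexual_minors or cats.hate)

# Run the same check on both the player's prompt and the model's reply.
player_prompt = "I swing my axe at the goblin chieftain."
print("allowed" if allowed(player_prompt) else "filtered")
```

Nothing about the underlying model changes; the filter is just a second, external call that the application chooses to make.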
An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.
The object needs cognition to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that. Or perhaps a law. But none of that interferes with the use of the tool itself.
Me and some old guildies have kept in touch off and on over the years. Every once in a while I’d buy a wow expansion and do a couple of dungeons. We were really looking forward to making Diablo 4 our new hang out.
We played like hell all through the beta. Then like twice in live. Then we all kinda decided it sucked. I think my good friend’s daughter is graduating soon. Or possibly already did. I can’t remember how much older than my own kids she was. I can remember when she was born.
He’s still like a brother to me, but we’ve got fuck all in common anymore and we can’t keep talking about glory days that were damn near 20 years ago.