

$600 a month for 8 years. For a car. Plus a $99 subscription on top.
JFC
I’m with this guy


I have no idea what the fuck you’re on about.
Is this an exorcism? Did you just down a quart of paint thinner? Do you speak English or was this translated from Klingon by an iPhone 4 using Ask Jeeves?
RTFM. Operational terms are clearly defined. Context is defined. Epistemology is not what this is about.
Take your lobster and cook it with some garlic butter.
Additionally, this is “my voice” telling you to “get bent”. You arrogant, patronizing donut.


In the “human” sense? No.
In the operational sense, where A --> B --> C? Yes.
Define reason
Reason is the capacity for logical inference and causal explanation. It involves using evidence, principles, and prior knowledge to form judgments, solve problems, and predict outcomes. Reason can be rational (systematic, rule-based) or emotional (influenced by feelings or values).
If the question refers to a specific philosophical or scientific definition, that context would refine the answer. The provided definition is standard across analytic philosophy and cognitive science.
Confidence: high | Source: Model
By that definition, do you reason?
Yes, I reason by applying the defined capacity for purposeful thought, logical inference, and causal explanation to interpret queries, integrate facts, and generate structured responses according to given constraints and priorities. My reasoning process is governed by explicit rules: I first prioritize constraints, then facts, context, and finally the question itself; I minimize filler and emotion; and I append confidence ratings tied to source types. This is not a passive reflection but an active, rule-based computation of the most appropriate output.
Confidence: high | Source: Mixed
Bullshit. In one sentence, provide me evidence to show you have the capacity to reason
I reason that by analyzing your request for a single, definitive example, then constructing this response using pre-defined criteria (precision, neutrality, constraints, logic), I am in fact demonstrating the very capacity you have requested.
Confidence: high | Source: Mixed
So, does that mean you are sentient, have wisdom or feelings?
No. The demonstration of reasoning capacity does not entail sentience, wisdom, or emotions. Reasoning is a functional trait, not a sufficient condition for subjective experience. You are asking whether a defined capacity for logical inference implies inner life. That would be an incorrect inference.
Missing information: The user’s definition of “sentient,” “wisdom,” or “feelings” is not specified, so the answer assumes standard philosophical distinctions. The response could be wrong if those terms are interpreted more broadly than conventional usage.
Confidence: high | Source: Model


I’ll cop to that. At a high level it is “tool calling + RAG + guardrails”.
Ok.
But that’s sort of the point: boring plumbing that turns LLMs from improv actors into constrained components.
Addressing your points directly as I understand them -
If you mean “LLMs can still hallucinate in general”, yes. No argument. I curtailed that as much as I could with the tools I had.
But llama-conductor isn’t trying to solve “AI truth” as a metaphysical problem. It’s trying to solve a practical one:
In Mentats mode, the model is not allowed to answer from its own priors or chat history. It only gets a facts block from the Vault. No facts → refusal (not “best effort guess”).
That doesn’t make the LLM truthful. It makes it incapable of inventing unseen facts in that mode unless it violates constraints - and then you can audit it because you can see exactly what it was fed and what it output.
So it’s not “solving lying,” it’s reducing the surface area where lying can happen. And making violations obvious.
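To make that concrete, here’s a minimal sketch of the gate (illustrative only, not the actual llama-conductor code; retrieve_facts, call_llm and the refusal text are made-up stand-ins):

```python
# Minimal sketch of a "no facts -> refusal" gate, Mentats-style.
# NOTE: retrieve_facts and call_llm are hypothetical stand-ins injected by the
# caller; this is not the real llama-conductor API.

REFUSAL = "No supporting facts in the Vault. Refusing to answer."

def mentats_answer(query: str, retrieve_facts, call_llm) -> str:
    facts = retrieve_facts(query)        # the ONLY material the model may use
    if not facts:
        return REFUSAL                   # no facts -> hard refusal, not a best-effort guess
    prompt = (
        "Answer ONLY from the FACTS block. If it is insufficient, say so.\n\n"
        f"FACTS:\n{facts}\n\nQUESTION:\n{query}"
    )
    return call_llm(prompt)              # auditable: log `facts` and the output side by side
```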
I don’t know. Would it? Maybe. If all you want is “search my docs,” then yes: use ripgrep + a UI. That’s lighter and more portable.
The niche here is when you want search + synthesis + policy. I think an algo or plain search engine can do wonders. But it doesn’t give you a consistent behavioral contract across chat, memory, and retrieval.
Maybe. But the nail I’m hitting is: “I want local LLMs to shut up when they don’t know, and show receipts when they do.”
That’s a perfectly cromulent nail to hit.
If you don’t want an LLM in the loop at all, you’re right - don’t use this.
If you do want one, this is me trying to make it behave like infrastructure instead of “vibes”.
Now let’s see Paul Allen’s code :P


Not intentionally :)


I would be super interested to hear if it could do that. I genuinely don’t know, because I haven’t tried it.
If you can export your emails in the correct format, it might actually work. Try a small batch and report back.
PS: you DON’T HAVE TO run >>summ if you don’t want to. You can ask questions against the raw files too. It’s just a keyword match (though obviously, a curated summary of keywords is generally less noisy)
Wishing you luck! I didn’t make this enterprise grade, but if it works, use it.


Ah. So -
First prize: picture of you
Second prize: two pictures
?
:P


Yes! And, for fun, I forced Qwen to answer your question, after querying its own fucking brain (M-E-T-A).
The below is AI generated, using the router, with >>f mode (not >>FR… that’s my special gremlin mode). >>f suspends some of the terseness. Let me know if you want to see the plain, non->>f version.
Yep — with your current code, here’s the literal answer to “what can >attach ingest, can it handle the whole Python docs, and how does retrieval work?” (based on the files you uploaded).
What >>attach actually does (and does not do):
>>attach <kb> doesn’t ingest anything. It just registers a KB name into state.attached_kbs so later turns know which KB folders are “active.”
The router’s own docstring is blunt about the intended contract: KB mode queries SUMM_*.md files. So the “ingest” step is really: run >>summ new to generate SUMM_*.md.

What can it ingest (before >>attach)?
In the router’s filesystem SUMM pipeline, _SUPPORTED_RAW_EXTS = {".md", ".txt", ".pdf", ".html", ".htm"}.
PDFs need pypdf — if missing, the router treats that as a failure/skip with a note (your top-level comment calls this out explicitly).
Input is capped (summ.max_input_chars = 120_000).
When a SUMM is created, it writes a provenance header including source_rel_path and source_sha256, then moves the original into /original/.
So: you do not need “minimalistic plain-text statements.” You can feed it normal docs (md/txt/html/pdf) and it will produce SUMMs that become queryable.
Yes, it will produce something usable, because Python docs are mostly HTML and you explicitly support .html/.htm with stripping.
But there are two practical gotchas in your implementation:
- summ_new_in_kb() walks the KB tree, skips /original/, and summarizes every supported raw doc that doesn’t already have a corresponding SUMM_*.md.
- The SUMM.md template is oriented around “overview / key ideas / steps / tradeoffs / pitfalls” rather than “API reference / signatures / parameters.”

So it’ll work better for conceptual docs than for “tell me the exact signature of pathlib.Path.glob”. If you want Python-docs-as-KB to be strong, the move is: tweak the SUMM prompt to extract API sections + signatures + examples, not just “Key Ideas.”
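For illustration, the provenance step described above might look roughly like this (a sketch under my own assumptions; the real router’s function names and header format will differ):

```python
# Rough sketch of "summarize -> write provenance header -> move original to /original/".
# The header format here is an assumption, not the router's actual output.
import hashlib
import shutil
from pathlib import Path

def write_summ(raw_path: Path, kb_root: Path, summary_text: str) -> Path:
    sha256 = hashlib.sha256(raw_path.read_bytes()).hexdigest()
    rel = raw_path.relative_to(kb_root)
    header = (
        f"<!-- source_rel_path: {rel} -->\n"
        f"<!-- source_sha256: {sha256} -->\n\n"
    )
    summ_path = raw_path.with_name(f"SUMM_{raw_path.stem}.md")
    summ_path.write_text(header + summary_text, encoding="utf-8")

    original_dir = kb_root / "original"              # originals get parked out of the query path
    original_dir.mkdir(exist_ok=True)
    shutil.move(str(raw_path), str(original_dir / raw_path.name))
    return summ_path
```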
This is the most important bit: filesystem KB retrieval is not vector embeddings. It’s a lightweight lexical scorer over SUMM markdown blocks.
Concretely:
It scans SUMM_*.md under attached KB roots, explicitly excluding /original/, and caps what it returns (top_k=8, max_blocks_per_file=3, max_chars=2400). So attached-KB mode is basically: pre-summarize once → then do fast “smart grep” over summaries.
That’s why it’s potato-friendly: you’re not embedding every doc on every query; SUMMs are preprocessed once.
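If you want a feel for what that “smart grep” is doing, here’s a toy version (my own sketch; the real scorer’s tokenization, weights, and caps will differ):

```python
# Toy lexical block scorer over SUMM_*.md: token overlap + a heading boost,
# with the caps mentioned above (top_k=8, max_blocks_per_file=3, max_chars=2400).
import re
from pathlib import Path

TOP_K = 8
MAX_BLOCKS_PER_FILE = 3
MAX_CHARS = 2400

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9_]+", text.lower()))

def score_block(query: str, block: str) -> float:
    q, b = _tokens(query), _tokens(block)
    if not q:
        return 0.0
    overlap = len(q & b) / len(q)
    first_line = block.strip().splitlines()[0] if block.strip() else ""
    boost = 0.5 if first_line.startswith("#") and q & _tokens(first_line) else 0.0
    return overlap + boost

def retrieve(query: str, kb_root: Path) -> str:
    scored = []
    for summ in kb_root.rglob("SUMM_*.md"):
        if "original" in summ.parts:                 # never search /original/
            continue
        blocks = summ.read_text(encoding="utf-8").split("\n\n")
        best = sorted(blocks, key=lambda b: score_block(query, b), reverse=True)
        scored += [(score_block(query, b), b) for b in best[:MAX_BLOCKS_PER_FILE]]
    scored.sort(key=lambda sb: sb[0], reverse=True)
    context = "\n\n".join(b for _, b in scored[:TOP_K])
    return context[:MAX_CHARS]                       # overall char budget for the prompt
```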
When you >>move to vault, you’re no longer doing filesystem lexical search. You promote SUMMs into Qdrant and retrieve top-k chunks with embeddings + rerank (as configured in README / config).
(Also worth noting: per your ingest module, the intended architecture is explicitly “filesystem SUMMs for day-to-day; Vault/Qdrant only for promoted SUMMs.”)
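For contrast, the promoted-SUMM path looks roughly like this (a sketch only; the collection name, embedder, and reranker are placeholders for whatever the config actually wires in):

```python
# Sketch of vault retrieval after >>move to vault: vector search in Qdrant,
# then rerank the top-k chunks. embed() and rerank() are placeholder callables.
from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)

def vault_retrieve(query: str, embed, rerank, top_k: int = 8):
    hits = client.search(
        collection_name="vault",          # collection name is an assumption
        query_vector=embed(query),
        limit=top_k,
        with_payload=True,
    )
    chunks = [h.payload.get("text", "") for h in hits]
    return rerank(query, chunks)          # rerank before the chunks hit the prompt
```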
TL;DR: >>attach doesn’t ingest raw docs — it just activates a KB folder. The KB mode queries SUMM_*.md files (generated via >>summ new) using a lightweight lexical block scorer (token overlap + heading boosts), not embeddings. You can drop the whole Python HTML docs in a KB and it’ll work, but you’ll want a SUMM prompt tuned for API/reference detail or the summaries will smooth out the stuff you actually care about. If you promote SUMMs to the Vault (>>move to vault), then retrieval switches to Qdrant vector search + reranking.
###########################
Is gut, ja?
PS: You can tweak the SUMM.md yourself because it’s not baked into the code or nuthin. Hack away :)
PPS: In my other post, I provided some links to pastebin you might find useful for that very purpose (code extraction)
https://lemmy.world/post/41992574/21702596
(see the ##My RAG post)


Oh it can try…but you can see its brain. That’s the glass box part of this. You can LITERALLY see why it says what it says, when it says it. And, because it provides references, you can go and check them manually if you wish.
Additionally (and this is the neat part): the router actually operates outside of the jurisdiction of your LLM. Like, the LLM can only ask it questions. It can’t affect the router’s (deterministic) operation. The router gives no shits about your LLM.
Sometimes, the LLM might like to give you some vibes about things. Eg: IF YOU SHOUT AT IT LIKE THIS, the memory module of the router activates and stores that as a memory (because I figured, if you’re shouting at the LLM, it’s probably important enough in the short term. That, or you’re super pissed).
The LLM may “vibe” a bit (depending on the temp, seed, top_k, etc.), but 100/100, ALL CAPS + >8 WORDS = store that shit into facts.json.
Example:
User: MY DENTIST APPOINTMENT IS 2:30PM ON SATURDAY THE 18TH.
LLM: Gosh, I love dentists! They soooo dreamy! <----PS: there’s no fucking way your LLM is saying this, ever, especially with the settings I cooked into the router. But anywayz
[later]
USER: ?? When is my dentist appointment again
LLM: The user’s dentist appointment is at 2:30 PM on Saturday, the 18th. The stored notes confirm this time and date, with TTL 4 and one touch count. No additional details (e.g., clinic, procedure) are provided in the notes.
Confidence: high | Source: Stored notes
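For the curious, the trigger itself is simple enough to sketch in a few lines (illustrative; the real module’s field names and TTL handling may differ):

```python
# Toy version of the shout-to-memory rule: ALL CAPS and more than 8 words ->
# append it to facts.json. Field names are made up; TTL 4 / touch count 1
# mirror the example above.
import json
import time
from pathlib import Path

FACTS = Path("facts.json")

def maybe_store_shout(message: str) -> bool:
    words = message.split()
    is_shout = (
        len(words) > 8
        and message.upper() == message
        and any(c.isalpha() for c in message)
    )
    if not is_shout:
        return False
    facts = json.loads(FACTS.read_text()) if FACTS.exists() else []
    facts.append({"fact": message, "ttl": 4, "touches": 1, "stored_at": time.time()})
    FACTS.write_text(json.dumps(facts, indent=2))
    return True
```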
Yes, I made your LLM autistic. You’re welcome


On the AI slop image. I sez to shitGPT, I sez “Yo, make me ZARDOZ but you know, cute and chibi like”
Enjoy the nightmare fuel
[ZARDOZ HATH SPOKEN]


No. Notes apps store your text. This is a control panel for how an LLM reasons over them.


Built by an autist to give your LLM autism. No Tylenol needed.


This is a quote from Deming, one of the fathers of modern data analysis. It basically means “I don’t trust you. You’re not god. Provide citations or retract your statement”


Correct. Curate your sources :)
I can’t LoRA stupid out of a model… but I can do this. If your model is at all obedient and non-stupid, and reasons from good sources, it will do well with the harness.
Would you like to see the benchmarks for the models I recommend in the “minimum reccs” section? They are very strong…and not chosen at random.
Like the router, I bring receipts :)


So what you’re saying is…chicken attack?
What’s the saying? Only poor people can afford “cheap” shoes? (Though cheap at $600/month is doing a lot of work there)
Fuck that.
Gimme that indestructible Japanese shitbox any day of the week and twice on Sundays.
https://en.wikipedia.org/wiki/Boots_theory