How Council works
Why not just use one AI?
A single AI can sound confident even when it’s wrong — and it has no way to flag that for you. Council runs three AI systems independently on your question. Where they agree, where they disagree, and where evidence is thin all become visible. That’s not a guarantee of correctness, but it’s more than you get from a single answer.
What if they all agree?
Agreement doesn’t mean they’re right. These systems were trained on much of the same data, so they can share the same blind spots. When they all agree, it might be because the evidence is strong — or because they’re drawing from the same flawed sources. Council treats agreement as evidence, not proof.
The score reflects this. High agreement with a serious objection from the Critic produces a lower score than high agreement without one. The Critic’s findings carry real weight.
What each one does
Each system has a fixed job. They work separately and don’t see each other’s output until the end — so they can’t copy or defer to each other.
Strategist — reads the question and drafts the first answer.
Researcher — independently gathers evidence and rates how confident each claim should be.
Critic — reviews everything and looks for the strongest reason the answer could be wrong.
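The separation described above can be sketched as three independent calls whose outputs only meet at the end. This is an illustrative sketch, not Council's actual code: the function names and return shapes are assumptions.

```python
# Minimal sketch of the three-role pipeline. Each role function stands in
# for an independent model call; none of them sees another's output
# until combine() runs at the end.

def strategist(question):
    # Drafts the first answer from the question alone.
    return {"answer": f"draft answer to: {question}"}

def researcher(question):
    # Gathers evidence independently and rates confidence per claim.
    return {"evidence": [{"claim": "example claim", "confidence": 0.6}]}

def critic(question):
    # Looks for the strongest reason the eventual answer could be wrong.
    return {"objection": "strongest counter-argument found"}

def combine(results):
    # The outputs meet only here, after all three roles have finished,
    # so no role can copy or defer to another.
    merged = {}
    for r in results:
        merged.update(r)
    return merged

def run_council(question):
    # Each role receives only the question -- no shared notes.
    return combine([strategist(question), researcher(question), critic(question)])
```

The point of the structure is that `combine` is the first place any role's output is visible to anything else.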
For health-related questions, Council automatically pulls in real citations from public databases before the final answer is assembled — published research, clinical trials, and regulatory safety reports. Non-health questions skip this step quietly. These are signals, not conclusions: a report count from FDA’s adverse event system tells you how many times something was reported, not how many times it was caused.
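That gating step can be sketched roughly as follows. The keyword check and the registry lookup here are placeholder assumptions for illustration; Council's real detection and data sources are not shown.

```python
# Illustrative sketch of the health-gating step. HEALTH_TERMS and
# gather_citations() are placeholders, not Council's implementation.

HEALTH_TERMS = {"drug", "dose", "trial", "symptom", "vaccine", "diagnosis"}

def is_health_question(question):
    # Crude keyword check for illustration; a real classifier
    # would be far more robust than substring matching.
    return any(term in question.lower() for term in HEALTH_TERMS)

def gather_citations(question):
    # Placeholder for queries against public databases:
    # published research, clinical trials, safety reports.
    return [{"source": "example public registry", "reports": 3}]

def maybe_add_citations(question):
    # Health questions get real citations pulled in;
    # non-health questions skip this step quietly.
    if is_health_question(question):
        return gather_citations(question)
    return []
```

Note that the returned report counts are signals to surface, not conclusions to draw.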
What you always see
The Critic’s main finding — the biggest reason the answer could be wrong — is always visible. No setting, mode, or future update will ever hide it. That’s a design rule, not a feature.
You also see where the systems agreed, where they disagreed, and which claims were flagged for you to verify yourself.
How the score works
The score is calculated by a fixed formula — no AI is involved in scoring. It looks at how much the systems agreed, how strong the evidence is, how current the sources are, and how serious the Critic’s objections were. Given the same inputs, the same score comes out every time.
The score is not a probability that the answer is correct. It tells you how well-supported the answer appears to be based on what the systems found. A high score means strong agreement and good evidence. It does not mean “definitely true.”
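A deterministic formula of the kind described might look like the sketch below. The weights, input names, and penalty shape are illustrative assumptions, not Council's actual formula; what matters is that the same inputs always produce the same score, and that a serious Critic objection lowers it.

```python
def council_score(agreement, evidence_strength, recency, critic_severity):
    # All inputs assumed in [0, 1]. No model call is involved:
    # identical inputs always yield an identical score.
    base = (0.40 * agreement
            + 0.35 * evidence_strength
            + 0.25 * recency)
    # A serious objection from the Critic pulls the score down, so high
    # agreement with a strong objection scores lower than high
    # agreement without one.
    penalty = 0.5 * critic_severity
    return round(100 * base * (1 - penalty))
```

The multiplicative penalty is one way to give the Critic's findings real weight: it caps how high a score can climb on agreement alone.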
What the score doesn’t tell you
A score of 70 does not mean the answer is correct 70% of the time. That kind of precision would require large-scale testing against verified facts, which hasn’t been done yet. The scores are useful but not calibrated in that sense.
The three systems work separately, but they were trained on overlapping data. Their independence is structural — separate calls, no shared notes — but not guaranteed at the level of what they believe. They can still make the same mistake for the same reason.
Where Council is weaker
Council is strongest on evidence-based questions, where the systems can check specific claims against sources. It is weaker on politically or ideologically charged questions, where AI systems tend to share similar framing, draw from the same institutional sources, and develop the same blind spots.
In those cases, the Critic may challenge details inside a shared frame rather than challenge the frame itself. If your question carries strong political or ideological assumptions, treat the output with extra caution — the catch may not go deep enough.
Reviewing a document
You can upload a Word document or PDF, or paste a section of text, and Council will pressure-test it the same way it checks a question. It looks for weak claims, missing evidence, and the strongest reason the section could be wrong.
When you upload a file, Council detects the sections automatically. You review them one at a time — or all at once for short documents — and Council checks each one independently.
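The review flow can be sketched as a split-then-check loop. Splitting on blank-line-separated blocks here is an assumption for illustration; Council's actual section detection is not specified.

```python
def detect_sections(text):
    # Illustrative: treat blank-line-separated blocks as sections.
    blocks = [b.strip() for b in text.split("\n\n")]
    return [b for b in blocks if b]

def review_document(text, check_section):
    # Each section is checked independently, one at a time, with
    # check_section standing in for a full Council run on that section.
    return [check_section(section) for section in detect_sections(text)]
```

Checking sections independently means a weak claim in one section cannot hide behind strong evidence in another.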
What this is not
Council is not truth. It is not a substitute for professional advice, domain expertise, or peer review. It does not verify whether claims are true — though for health queries, it checks whether specific cited studies and trials exist in public registries. It cannot guarantee that the AI systems are truly independent. Council is a structured attempt to make AI uncertainty visible rather than hidden — so you can decide for yourself what to trust.
Three AI systems, working independently.
Try Council →