Help & Guide
How to Ask Good Questions
To get the best results from ConvergePanel, frame your questions clearly and specifically:
- Be specific: Instead of "Tell me about climate change," try "What are the key factors affecting climate change, and what is the scientific consensus on each?"
- Ask for analysis: Questions that require comparison, evaluation, or synthesis work best with multiple models.
- Request numbers when relevant: Models may provide different statistics, which helps identify areas of uncertainty.
- Include context: Provide background information if your question is about a specific domain or recent event.
Understanding Consensus and Disagreement
✓ Strong Consensus
When multiple models agree on a claim or fact, it appears in the "Areas of Agreement" section. This indicates higher confidence in the answer.
⚠️ Contested Areas
When models provide different perspectives or conflicting information, these appear in the "Model Split" section. This highlights areas where the answer is uncertain or where different viewpoints exist.
🔢 Numeric Conflicts
When models provide different numbers, percentages, or statistics for the same claim, ConvergePanel flags this as a numeric conflict. Review both values and consider the source or context.
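To make the idea concrete, here is a minimal Python sketch (purely illustrative, not ConvergePanel's actual implementation) of how two answers to the same question could be checked for a numeric conflict by extracting their leading numbers and comparing them against a tolerance:

```python
import re

def extract_numbers(text):
    """Pull numeric values (including percentages) out of a response string."""
    return [float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", text)]

def numeric_conflict(answer_a, answer_b, rel_tol=0.05):
    """Flag a conflict when the leading numbers in two answers differ
    by more than rel_tol (5% by default). Hypothetical heuristic only."""
    nums_a, nums_b = extract_numbers(answer_a), extract_numbers(answer_b)
    if not nums_a or not nums_b:
        return False  # nothing to compare
    a, b = nums_a[0], nums_b[0]
    return abs(a - b) > rel_tol * max(abs(a), abs(b))

print(numeric_conflict("Global share is about 30%", "Roughly 45% of the total"))  # True
print(numeric_conflict("About 30%", "Approximately 31%"))  # False (within tolerance)
```

A real implementation would also need to match numbers to the same claim before comparing them; this sketch only shows the core comparison step.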
Why Are at Least Two Models Required?

Convergence analysis requires at least two models to compare responses. With only one model, there's no basis for:
- Identifying consensus or disagreement
- Detecting numeric conflicts
- Validating claims across different perspectives
- Generating a meaningful unified synthesis
If only one model responds successfully, ConvergePanel will show the raw response but will not generate a synthesis report. You'll be prompted to re-run with additional models.
Verification Gate
After every panel synthesis, ConvergePanel displays a Verification Gate at the top of the report. It gives you an at-a-glance decision-readiness signal based on how much the models agreed, where they diverged, and what evidence may be missing.
Broadly consistent
Models show broad agreement with supporting evidence and no major disagreements. Suitable for exploratory use — but always cross-check key claims with primary sources before acting on them.
Needs human review
The analysis detected model disagreements, a significant number of contested claims, or a combination of bias signals and uncertainty. Review the flagged areas and verify disputed premises independently before relying on the conclusions.
Low confidence — review required
Key findings lack source citations and models disagree or show low confidence. Treat the output as a starting hypothesis only. Request sources, narrow your question, and do not use the output for automated action until claims are independently verified.
What signals does it use?
The Verification Gate is computed from signals already present in your synthesis — no additional model calls are made. It checks for:
- Model disagreements on core conclusions
- Number of contested claims
- Missing sources or citations on key findings
- Bias and blind spot flags
- High uncertainty signals (low-confidence findings + open questions)
Along with the status badge, the gate shows why it reached its assessment (listing only the signals that were triggered) and provides recommended next steps tailored to the specific issues detected.
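As a rough sketch, the mapping from these signals to a gate status might look like the following Python heuristic. The field names and thresholds here are invented for illustration; ConvergePanel's actual rules may differ:

```python
def verification_gate(signals):
    """Map synthesis signals to a gate status.
    Hypothetical heuristic; the keys and cutoffs are illustrative only."""
    missing_sources = signals.get("missing_sources", False)
    disagreements = signals.get("disagreements", 0)
    contested = signals.get("contested_claims", 0)
    bias_flags = signals.get("bias_flags", 0)
    low_confidence = signals.get("low_confidence", False)

    # Worst case: unsourced findings combined with disagreement or uncertainty.
    if missing_sources and (disagreements > 0 or low_confidence):
        return "Low confidence — review required"
    # Middle case: disagreement, many contested claims, or bias plus uncertainty.
    if disagreements > 0 or contested >= 3 or (bias_flags > 0 and low_confidence):
        return "Needs human review"
    return "Broadly consistent"

print(verification_gate({"disagreements": 2}))  # Needs human review
```

Note that, as the section above says, everything here is computed from signals already present in the synthesis; no extra model calls are involved.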
The Verification Gate is an advisory signal derived from model comparison. It does not constitute factual certification and is not a substitute for independent professional review.
Claim Severity Tags
Not every claim in a synthesis carries the same weight. Claim Severity Tags label each finding, disagreement, and bias flag with one of three impact levels so you can focus your review where it matters most.
Low stakes
Supporting context, secondary framing, or low-impact observations. Useful background but unlikely to change a decision.
Important
Claims that materially shape interpretation, prioritization, or follow-up. Worth verifying before relying on them.
Decision-critical
Claims that affect action, compliance, legal exposure, financial exposure, safety, or strategic recommendations. Treat these with the highest scrutiny.
Severity is estimated from the text content using lightweight heuristics. It reflects potential impact on downstream decisions, not model confidence alone.
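A keyword-based heuristic of this kind can be sketched in a few lines of Python. The keyword lists below are invented examples for illustration, not the tags' real rules:

```python
# Hypothetical keyword lists; the production heuristics are more involved.
DECISION_CRITICAL = ("safety", "legal", "compliance", "financial", "regulat")
IMPORTANT = ("recommend", "priorit", "risk", "should")

def claim_severity(claim):
    """Classify a claim's potential decision impact from its text."""
    text = claim.lower()
    if any(keyword in text for keyword in DECISION_CRITICAL):
        return "Decision-critical"
    if any(keyword in text for keyword in IMPORTANT):
        return "Important"
    return "Low stakes"

print(claim_severity("This change raises legal exposure"))   # Decision-critical
print(claim_severity("We recommend migrating first"))        # Important
print(claim_severity("The city has mild winters"))           # Low stakes
```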
Source-Grounding Flags
Source-Grounding Flags indicate whether a claim appears to be backed by cited evidence, based on model inference, or a mix of both. They help you gauge how much independent verification a conclusion might need.
Source-backed
The claim references explicit citations, studies, institutions, or external evidence. Still verify the cited sources independently.
Model-reasoned
The claim appears based primarily on model inference and reasoning with little or no cited evidence. Exercise extra caution.
Mixed / unclear
The grounding is ambiguous — the claim blends sourced and inferred reasoning, or there is not enough signal to classify it clearly.
Grounding flags are informational signals estimated from text patterns. They do not guarantee that cited sources are accurate or that model-reasoned claims are incorrect.
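In spirit, a text-pattern classifier of this kind might look like the following Python sketch. The patterns are illustrative assumptions, not the production rules:

```python
import re

# Hypothetical surface patterns suggesting citation vs. model inference.
CITATION_PATTERNS = [
    r"\baccording to\b", r"\bstud(y|ies)\b", r"\breport(ed)? by\b",
    r"\([A-Z][a-z]+,? \d{4}\)", r"https?://",
]
HEDGE_PATTERNS = [r"\blikely\b", r"\bprobably\b", r"\bit seems\b"]

def grounding_flag(claim):
    """Classify a claim's grounding from surface text patterns.
    A sketch of the idea, not ConvergePanel's actual classifier."""
    cited = any(re.search(p, claim, re.IGNORECASE) for p in CITATION_PATTERNS)
    hedged = any(re.search(p, claim, re.IGNORECASE) for p in HEDGE_PATTERNS)
    if cited and not hedged:
        return "Source-backed"
    if hedged and not cited:
        return "Model-reasoned"
    return "Mixed / unclear"

print(grounding_flag("According to a 2021 WHO report, rates fell."))  # Source-backed
print(grounding_flag("It seems likely that demand will rise."))       # Model-reasoned
print(grounding_flag("Demand rose 4% last year."))                    # Mixed / unclear
```

Claims matching neither pattern set fall into "Mixed / unclear", mirroring the "not enough signal" case described above.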
Panel Verdict Card
At the bottom of every synthesis report, ConvergePanel generates a compact Panel Verdict — a shareable decision artifact that captures the essentials of the analysis in one card.
The card includes:
- The original question
- The top consensus point
- The top disagreement (if any)
- Verification Gate result
- Source-grounding signal
- One key caveat or blind spot
Use the Copy summary button to grab a plain-text version for pasting into Slack, email, or documents. The Copy for X (short) button produces a condensed version optimized for social sharing.
The Panel Verdict is auto-generated from multi-model synthesis and is provided for informational purposes only.
Panel Presets
- Quick Panel (2 models): Fastest results, good for simple questions.
- Balanced Panel (3 models): Good balance of speed and coverage.
- Deep Panel (5 models): Most comprehensive analysis, best for complex or important questions.
