Troubleshooting: Getting Started & Setup
Issues with creating your first Execution Scope, connecting meetings, adding team members, and defining initial goals.
The plan feels noisy and its priorities are hard to rank
Cause: The scope is too broad — it covers multiple teams, unrelated cadences, or everything a department does rather than one accountable area.
Fix: Split the scope by ownership or cadence. A well-sized scope has one accountable person, one primary meeting cadence, and priorities that can be ranked without debate. If you can't rank priorities because "everything matters," that's a reliable sign the scope needs splitting.
The Living Execution Plan is filling up with task detail
Cause: The plan is being used as a task tracker rather than an execution reality surface — subtasks, ticket breakdowns, and other delivery mechanics are being added directly.
Fix: Keep task detail in your delivery tools (Jira, Asana, or similar). The Living Execution Plan should hold accountable actions, priorities, risks, commitments, and decisions — not every subtask. If something belongs in a sprint backlog, it belongs in the tool that manages sprints.
Meetings keep sliding back into status updates
Cause: The pre-read isn't being reviewed before the meeting, so participants arrive without shared context and spend the meeting reconstructing what happened.
Fix: Share the pre-read with participants before each meeting and open with "what changed since last time?" When participants arrive oriented, the meeting can move directly to decisions.
The plan reads like a strategy document, not an execution plan
Cause: Strategy narratives, vision statements, or background context have been added directly to the Living Execution Plan, mixing aspiration with reality.
Fix: Keep the plan grounded — priorities, commitments, risks, ownership, and decisions. Strategy documents belong in docs tools (Notion, Google Docs). If something doesn't help answer "what are we doing now, who owns it, what's at risk?" — it doesn't belong in the plan.
AI suggestions feel unreliable or inaccurate
Cause: AI proposals from the post-meeting report were accepted without being checked against what actually happened. When reviews become rubber-stamps, errors accumulate and the plan loses fidelity to reality.
Fix: Treat the post-meeting report as a well-informed draft, not a final record. At the review step, check the high-leverage items: priority changes, ownership assignments, key decisions, and new risks. Correct wording that doesn't match intent, reject actions that were never agreed, and confirm only what you'd stand behind. If suggestions are consistently low-quality, check scope breadth — broad scopes produce noisier AI output than focused ones.
Post-meeting, people are reading the raw transcript instead of the report
Cause: The post-meeting report hasn't been set as the shared review surface — participants are going to the raw transcript to find decisions and actions.
Fix: Direct participants to the post-meeting report, not the transcript. The report surfaces what the Transcriber detected — decisions, actions, risks, ownership changes — in a reviewable format. Raw transcripts are the source material, not the output. Confirm decisions and actions in the report review step so the Living Execution Plan updates correctly.
Priorities feel disconnected from what the team is actually working on
Cause: The scope was created without goals, so In Parallel has no anchor for what matters — it captures signals from meetings but can't prioritize them accurately.
Fix: Add at least one goal to the scope before the next meeting. Even a rough milestone is enough to orient In Parallel's prioritization. Go to Goals in the left nav under your scope and click + to add one.
Still stuck?
Contact the In Parallel support team.