CLI Commands

This is the quick reference for the NEXUS CLI.

Use it when you want to remember:

  • what commands exist
  • how to ask the CLI for command-specific help
  • which command fits which workflow
  • where to go for a more detailed runbook

Built-In Help

Global help:

dotnet run --project NEXUS-Code/src/Nexus.Cli/Nexus.Cli.fsproj -- --help

Command-specific help:

dotnet run --project NEXUS-Code/src/Nexus.Cli/Nexus.Cli.fsproj -- help import-provider-export
dotnet run --project NEXUS-Code/src/Nexus.Cli/Nexus.Cli.fsproj -- import-provider-export --help

The CLI supports both:

  • help <command>
  • <command> --help
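
Typing the full dotnet run invocation gets tedious. A small shell function can wrap it (a sketch; the nexus function name is an illustration, not something the CLI ships):

```shell
# Hypothetical convenience wrapper around the long `dotnet run` invocation.
# The name `nexus` is illustrative; pick any name you like.
nexus() {
  dotnet run --project NEXUS-Code/src/Nexus.Cli/Nexus.Cli.fsproj -- "$@"
}

# Both help forms then shorten to:
#   nexus help import-provider-export
#   nexus import-provider-export --help
```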

Event-store default resolution now follows this order:

  1. NEXUS_EVENT_STORE_ROOT
  2. in-repo NEXUS-EventStore/ during transition
  3. sibling ../NEXUS-EventStore
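
The order above can be sketched as shell logic (an illustration of the documented fallback order, not the CLI's actual implementation):

```shell
# Sketch of the documented event-store default resolution order:
# 1. NEXUS_EVENT_STORE_ROOT, 2. in-repo NEXUS-EventStore/, 3. sibling ../NEXUS-EventStore
resolve_event_store_root() {
  if [ -n "${NEXUS_EVENT_STORE_ROOT:-}" ]; then
    printf '%s\n' "$NEXUS_EVENT_STORE_ROOT"
  elif [ -d "NEXUS-EventStore" ]; then
    printf '%s\n' "NEXUS-EventStore"
  else
    printf '%s\n' "../NEXUS-EventStore"
  fi
}
```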

Commands

write-sample-event-store

  • Writes a small sample canonical history bundle.
  • Use it to smoke-test event writing, layout, and projection rebuilds without touching real imports.
  • Details: docs/how-to/write-sample-event-store.md

compare-provider-exports

  • Compares two raw ChatGPT, Claude, or Grok export zips before canonical import.
  • Use it when you want a source-layer view of added, removed, and changed provider-native conversations or messages.
  • It also reports whether the two zip artifacts are byte-identical.
  • Details: docs/how-to/compare-provider-exports.md
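
When all you need is the yes/no byte-identity answer, standard tools can reproduce that one check by hand (a sketch using cmp, with throwaway files standing in for real export zips; the CLI's own report carries more detail):

```shell
# Quick manual byte-identity check between two artifacts via `cmp -s`.
# Two temp files stand in for real export zips here.
old=$(mktemp) && new=$(mktemp)
printf 'same bytes' > "$old"
printf 'same bytes' > "$new"

if cmp -s "$old" "$new"; then
  result="byte-identical"
else
  result="different"
fi
```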

compare-import-snapshots

  • Compares two normalized import snapshots after provider import.
  • Use it when you want full-export or rolling-window snapshot semantics inside the NEXUS pipeline, without confusing additive dedupe with snapshot membership.
  • It is keyed by provider-native conversation identity and is derived from the parsed provider payload before canonical dedupe.
  • If older imports are missing snapshot files, run rebuild-import-snapshots.
  • Details: docs/how-to/compare-import-snapshots.md

report-provider-import-history

  • Reports one provider's normalized import snapshots in chronological order.
  • Use it when you want a timeline view of export/import history plus adjacent snapshot deltas.
  • When the preserved raw artifacts are still available, it also reports raw SHA-256 and whether each artifact matches the previous snapshot's artifact.
  • This is a snapshot-history report, not an additive working-batch report.
  • Details: docs/how-to/report-provider-import-history.md

report-current-ingestion

  • Reports the latest known import state across providers.
  • Use it when you want one operational view of what is currently ingested without checking each provider separately.
  • It reads the newest import manifest per provider, adds normalized snapshot totals when available, and reports raw root-artifact SHA-256 when the preserved file still exists.
  • It also shows the current LOGOS source/channel/signal classification plus handling-policy and entry-pool metadata for known providers.
  • Details: docs/how-to/report-current-ingestion.md

report-logos-catalog

  • Reports the explicit allowlisted LOGOS source systems, source/access/acquisition/rights vocabulary, intake channels, signal kinds, and handling-policy dimensions.
  • Use it when you want to see the current concrete LOGOS intake vocabulary before classifying or seeding a source.
  • Details: docs/how-to/report-logos-catalog.md

report-logos-handling

  • Audits LOGOS intake and derived notes by access, rights, and handling policy.
  • Use it when you want to quickly see which notes are still raw, which are personal-private or customer-confidential, which derivatives are marked approved-for-sharing, which notes still need rights review, and which likely require attribution.
  • It scans the intake and derived trees recursively and reports entry-pool counts too.
  • This is an audit report, not a publication gate by itself.
  • Details: docs/how-to/report-logos-handling.md

export-logos-public-notes

  • Exports only public-safe LOGOS notes into a dedicated output folder.
  • Use it when you need an actual public-facing note set rather than just a handling audit.
  • It scans both intake and derived note trees recursively, but depends on the explicit PublicSafe pool boundary plus rights that allow public distribution, so merely sanitized, team-only, personal-training-only, or attribution-missing notes are skipped.
  • Details: docs/how-to/export-logos-public-notes.md

report-conversation-overlap-candidates

  • Reports conservative conversation-level overlap candidates between two providers' projection sets.
  • Use it when you want to spot possible cross-source overlap, such as local Codex capture vs later provider export, without collapsing anything automatically.
  • It is based on explainable signals like normalized title similarity, time overlap, and message-count closeness.
  • This is a candidate report only, not reconciliation.
  • Details: docs/how-to/report-conversation-overlap-candidates.md

rebuild-import-snapshots

  • Rebuilds normalized import snapshots for older provider-export imports from preserved raw artifacts.
  • This rewrites derived snapshot files only. It does not append canonical events.
  • Use --import-id <uuid> for one import or --all for an explicit full backfill pass.
  • Details: docs/how-to/rebuild-import-snapshots.md

import-provider-export

  • Archives a ChatGPT, Claude, or Grok export zip.
  • Parses provider records and appends canonical observed history into the sibling NEXUS-EventStore repo.
  • Also records restricted-by-default LOGOS source, signal, handling-policy, and entry-pool metadata for the import.
  • Also writes a normalized import snapshot under snapshots/imports/<import-id>/.
  • Also materializes a batch-local graph working batch under graph/working/imports/<import-id>/.
  • Details: docs/how-to/import-provider-export.md

import-codex-sessions

  • Imports Codex local-session exports produced by dotnet fsi NEXUS-Code/scripts/export_codex_sessions.fsx.
  • Use it to bring local Codex chat history into the canonical event store alongside provider exports.

capture-codex-commit-checkpoint

  • Exports the current Codex local-session state, imports it, and links it to the current Git HEAD commit.
  • Use it after a commit when you want a durable path from that commit back to the Codex chat that led to it.
  • Writes a durable checkpoint manifest under work-batches/commit-checkpoints/<repo>/<commit>.toml.
  • Details: docs/how-to/capture-codex-commit-checkpoint.md

report-codex-commit-checkpoint

  • Reports the durable checkpoint linked to a repo commit.
  • Use it after copying a commit SHA from GitHub when you want the linked Codex import and conversation hints.
  • Defaults to the current Git HEAD commit when --commit <sha> is omitted.
  • Details: docs/how-to/capture-codex-commit-checkpoint.md

install-codex-commit-checkpoint-hook

  • Installs or refreshes a managed Git post-commit hook that captures Codex commit checkpoints automatically.
  • Use it when you want each commit in a repo to write a durable commit-to-chat link by default.
  • Existing hook content is preserved when possible, and the managed block logs to .git/nexus-hooks/codex-commit-checkpoint.log.
  • Details: docs/how-to/install-codex-commit-checkpoint-hook.md
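
The preserve-existing-content behavior can be sketched as an idempotent managed block: append between markers only when the markers are absent, so a later install can refresh the block without clobbering the rest of the hook. The marker strings and hook body below are assumptions for illustration, not the CLI's actual output:

```shell
# Sketch: add a managed block to a post-commit hook without clobbering
# whatever the hook already does. Marker strings below are hypothetical.
hook=$(mktemp)
printf '#!/bin/sh\necho existing-hook-logic\n' > "$hook"

install_managed_block() {
  if ! grep -q '# BEGIN nexus-managed' "$1"; then
    cat >> "$1" <<'EOF'
# BEGIN nexus-managed
# (hypothetical) capture a Codex commit checkpoint after each commit.
# The markers let a later install locate and refresh this block in place.
# END nexus-managed
EOF
  fi
}

install_managed_block "$hook"
install_managed_block "$hook"   # second run is a no-op
```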

capture-artifact-payload

  • Manually hydrates an artifact payload that was referenced earlier.
  • Archives the file into NEXUS-Objects/ and appends ArtifactPayloadCaptured when the payload is new.
  • Details: docs/how-to/capture-artifact-payload.md

rebuild-conversation-projections

  • Rebuilds derived conversation projections from canonical history.
  • Run it after any import that appends new conversation events.

create-concept-note

  • Creates a curated concept-note seed from one or more canonical conversation projections.
  • Use it to promote recurring ideas from chat history into durable repo memory with provenance.
  • Details: docs/how-to/create-concept-note.md

create-logos-intake-note

  • Creates a durable LOGOS intake seed note from explicit source, source-instance, channel, signal, access, acquisition, rights, locator, and handling-policy metadata.
  • Use it for forum/Talkyard/Discord/email/bug-report/app-feedback items before a full ingestion path exists for that source type.
  • New notes default to a restricted handling policy unless you explicitly choose other allowlisted values.
  • The note now enters an explicit LOGOS pool path at creation time: raw, private, or public-safe.
  • Details: docs/how-to/create-logos-intake-note.md

import-logos-blog-repo

  • Imports an owner-controlled public Markdown blog repo into durable public-safe LOGOS notes.
  • Use it when you want public writing in Git to become explicit LOGOS material without pretending the Git repo itself is the only working surface.
  • The current path is intentionally narrow and expects Markdown files with front matter such as title, slug, datePublished, cuid, and tags.
  • Details: docs/how-to/import-logos-blog-repo.md

create-logos-sanitized-note

  • Creates a derived sanitized LOGOS note from an existing restricted intake note.
  • Use it when the source note should stay restricted but a redacted, anonymized, or explicitly shareable derivative is needed.
  • The derived note keeps source classification, access, rights, and policy provenance, lands in private or public-safe based on policy plus rights, and does not copy raw locators or raw source text forward.
  • Details: docs/how-to/create-logos-sanitized-note.md

rebuild-artifact-projections

  • Rebuilds derived artifact projections from canonical history.
  • Run it after capture-artifact-payload so newly hydrated payloads are reflected.

rebuild-graph-assertions

  • Rebuilds the first thin graph-assertion layer from canonical history.
  • Use it when you want a derived graph substrate over the canonical event store.
  • Large stores now require explicit --yes approval because full rebuild is treated as a heavyweight operation.
  • Writes a rebuild manifest under graph/rebuilds/ so timings and counts are durable.
  • Details: docs/how-to/rebuild-graph-assertions.md

export-graphviz-dot

  • Exports the derived graph assertions as a Graphviz DOT file.
  • Use it when you want an external visual lens over the graph to spot structure, clusters, and relationships.
  • It now supports provider, provider-conversation, and import scopes so you do not have to render the whole graph every time.
  • It also supports --working-node-id for one node's immediate neighborhood inside a fresh working import batch.
  • It can also traceably verify a --working-import-id batch back to canonical events and raw object refs before writing the DOT file.
  • Details: docs/how-to/export-graphviz-dot.md

render-graphviz-dot

  • Renders an existing DOT file into SVG or PNG using an explicitly allowlisted Graphviz engine.
  • Use it after export-graphviz-dot when you want a directly viewable file.
  • Details: docs/how-to/render-graphviz-dot.md
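
An explicit allowlist usually means validating the engine name before Graphviz is ever invoked. A sketch of that check (the allowlist contents here are an assumption based on Graphviz's standard layout engines; the CLI defines its own):

```shell
# Sketch: validate an engine name against an explicit allowlist before
# ever invoking Graphviz. This allowlist is illustrative, not the CLI's.
engine_allowed() {
  case "$1" in
    dot|neato|fdp|sfdp|circo|twopi) return 0 ;;
    *) return 1 ;;
  esac
}

engine_allowed dot   && ok="yes"
engine_allowed magic || rejected="yes"
```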

report-unresolved-artifacts

  • Reports referenced artifacts whose payloads have not been captured yet.
  • Use it to identify targets for capture-artifact-payload.

report-working-graph-imports

  • Lists the current import-local graph working batches.
  • Use it for a quick view of the working layer before drilling into one batch.

report-working-import-conversations

  • Summarizes the conversation nodes present in one import-local graph working batch.
  • Use it when you want to understand a fresh import batch in conversation terms before drilling into graph details.
  • Details: docs/how-to/report-working-import-conversations.md

compare-working-import-conversations

  • Compares the conversation contributions present in two import-local graph working batches.
  • Use it when you want a fast batch-to-batch view of added, removed, and changed conversation contributions in the working layer.
  • This is intentionally about batch-local derived contributions, not full provider snapshot truth.
  • Details: docs/how-to/compare-working-import-conversations.md

find-working-graph-nodes

  • Finds candidate nodes from the SQLite working index by title/slug text plus explicit role and batch filters.
  • Use it when you need node IDs before inspecting a local neighborhood.
  • Details: docs/how-to/find-working-graph-nodes.md

report-working-graph-batch

  • Summarizes one graph working import batch from the SQLite working index.
  • Use it when you want a quick structural view of a specific fresh import batch.
  • Legacy alias: report-working-graph-slice
  • Details: docs/how-to/report-working-graph-batch.md

report-working-graph-neighborhood

  • Shows the local neighborhood of one node inside a single graph working import batch.
  • Use it after find-working-graph-nodes when you want the nearby literals and node-to-node connections.
  • Details: docs/how-to/report-working-graph-neighborhood.md

verify-working-graph-batch

  • Verifies one graph working import batch back to canonical events and preserved raw objects.
  • Use it when a batch matters enough to trade speed for stronger provenance validation.
  • Legacy alias: verify-working-graph-slice
  • Details: docs/how-to/verify-working-graph-batch.md

rebuild-working-graph-index

  • Rebuilds the SQLite graph working index from the existing graph working import batches.
  • Use it when the local working index is missing, stale, intentionally reset, or absent after a fresh clone.
  • Details: docs/how-to/rebuild-working-graph-index.md

Common Workflow Sequences

Provider import:

  1. Run compare-provider-exports if you want to understand raw export-window deltas before import.
  2. Run import-provider-export.
  3. Run report-provider-import-history --provider <chatgpt|claude|grok|codex> if you want a chronological snapshot timeline for one provider. Add --objects-root <path> when the preserved raw artifacts are not under the repository-default object store and you want raw SHA-256 evidence in the report.
  4. Run report-current-ingestion if you want one cross-provider status view of what the store currently contains.
  5. Run compare-import-snapshots --base-import-id <uuid> --current-import-id <uuid> if you want normalized snapshot semantics for one specific import pair after import.
  6. Run rebuild-conversation-projections.
  7. Run report-conversation-overlap-candidates --left-provider codex --right-provider chatgpt if you want a first explicit cross-source overlap candidate check.
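
The sequence above can be collected into one script. This sketch is a dry run: the nexus wrapper only prints each step, so nothing is imported; swap in the real dotnet run invocation from the Built-In Help section to execute it:

```shell
# Dry-run sketch of the provider-import sequence. The `nexus` wrapper
# just echoes here; replace its body with the real `dotnet run` call.
nexus() { echo "nexus $*"; }

plan=$(
  nexus compare-provider-exports          # optional raw delta preview
  nexus import-provider-export
  nexus report-current-ingestion          # optional cross-provider status
  nexus rebuild-conversation-projections
)
```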

Codex commit checkpoint:

  1. Make the Git commit in the target repo.
  2. Run capture-codex-commit-checkpoint.
  3. Later, run report-codex-commit-checkpoint --repo-root <path> --commit <sha> when you want the linked Codex import and conversation hints for that commit.

Automatic Codex commit checkpoint:

  1. Run install-codex-commit-checkpoint-hook --repo-root <path> once per repo.
  2. Commit normally in that repo.
  3. Later, run report-codex-commit-checkpoint --repo-root <path> --commit <sha> when you want the linked Codex import and conversation hints for a specific commit.

Graph and working-layer refresh:

  1. Run rebuild-artifact-projections.
  2. Run rebuild-graph-assertions if you want to refresh the thin graph layer.
  3. Run export-graphviz-dot if you want an external graph view.
  4. Run render-graphviz-dot if you want SVG or PNG output from the DOT file.
  5. Run report-unresolved-artifacts if you want to identify missing payloads.
  6. Run report-working-graph-imports if you want a quick view of the current graph working batches.
  7. Run report-working-import-conversations --import-id <uuid> if you want a conversation-centric view of one fresh import batch.
  8. Run compare-working-import-conversations --base-import-id <uuid> --current-import-id <uuid> if you want a batch-to-batch comparison of conversation contributions in the working layer.
  9. Run find-working-graph-nodes if you want to discover candidate node IDs from the SQLite working index.
  10. Run report-working-graph-batch --import-id <uuid> if you want the SQLite-backed summary for one import batch.
  11. Run report-working-graph-neighborhood --import-id <uuid> --node-id <node-id> if you want the local structure around one indexed node.
  12. Run rebuild-working-graph-index if the SQLite working index needs to be recreated from existing working batches.
  13. Run verify-working-graph-batch --import-id <uuid> if you want to validate that the batch still traces back cleanly to canonical and raw layers.
  14. Run export-graphviz-dot --working-import-id <uuid> --verification traceable if you want a graph export that refuses to render when that traceability chain is broken.

Codex session import:

  1. Run dotnet fsi NEXUS-Code/scripts/export_codex_sessions.fsx.
  2. Run import-codex-sessions.
  3. Run rebuild-conversation-projections.
  4. Run report-working-graph-batch --import-id <uuid> if you want the local working-index summary for that batch.

Concept harvest:

  1. Run rebuild-conversation-projections if needed.
  2. Run create-concept-note with one or more canonical conversation_id values.
  3. Edit the seed note under docs/concepts/.
  4. Use export-graphviz-dot --conversation-id <uuid> when you want the local graph neighborhood alongside the note.

LOGOS intake seeding:

  1. Run report-logos-catalog.
  2. Run create-logos-intake-note.
  3. Run create-logos-sanitized-note if the source note needs a safer derivative for broader sharing.
  4. Run report-logos-handling if you want to audit raw, restricted, and approved notes across the LOGOS note folders.
  5. Keep the restricted source note under docs/logos-intake/<pool>/.
  6. Refine the derived note under docs/logos-intake-derived/<pool>/ for the intended sharing scope.
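
The pool folders in steps 5 and 6 follow the explicit raw / private / public-safe pool vocabulary; a small helper can make that mapping concrete while rejecting values outside the allowlist (the pool names and path shape come from this page; the helper itself is illustrative):

```shell
# Sketch: map an allowlisted LOGOS pool name to its intake folder.
# Anything outside raw / private / public-safe is rejected.
intake_pool_dir() {
  case "$1" in
    raw|private|public-safe) printf 'docs/logos-intake/%s/\n' "$1" ;;
    *) return 1 ;;
  esac
}
```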

Public writing import:

  1. Keep the canonical public article repo in Git.
  2. Run import-logos-blog-repo.
  3. Run report-logos-handling if you want an audit view over the imported notes.
  4. Run export-logos-public-notes if you want an explicit exported public-safe subset and manifest.

Manual artifact hydration:

  1. Identify a target artifact with report-unresolved-artifacts.
  2. Run capture-artifact-payload.
  3. Run rebuild-artifact-projections.

Defaults

Unless overridden, the CLI uses repository-local defaults:

  • event store root: NEXUS-EventStore/
  • objects root: NEXUS-Objects/
  • Codex snapshot root: NEXUS-Objects/providers/codex/latest/