> Synced automatically from `seocho/docs/QUICKSTART.md`.
# SEOCHO Quick Start

This guide is optimized for one goal: raw data in -> graph build -> semantic/debate answer out.
## 0. Prerequisites

- Docker + Docker Compose
- OpenAI API key (`OPENAI_API_KEY`)
- `jq` (for API response checks)
- Git
## 1. Clone and configure

```bash
git clone https://github.com/tteon/seocho.git
cd seocho
cp .env.example .env
# edit .env
# required: OPENAI_API_KEY=sk-...
```

Optional custom ports (if defaults collide):

```bash
NEO4J_HTTP_PORT=7475
NEO4J_BOLT_PORT=7688
EXTRACTION_API_PORT=8002
EXTRACTION_NOTEBOOK_PORT=8890
CHAT_INTERFACE_PORT=8502
```
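Before starting the stack, it can save a failed first run to confirm the key is actually present in `.env`. A minimal sketch, assuming the standard `.env` layout; the `check_env` helper is hypothetical, not part of the repo:

```shell
# Hypothetical helper: verify .env contains a real-looking OpenAI key
# before starting the stack. Only checks the "sk-" prefix, nothing more.
check_env() {
  local env_file="${1:-.env}"
  if grep -qE '^OPENAI_API_KEY=sk-' "$env_file"; then
    echo "OPENAI_API_KEY looks set"
  else
    echo "OPENAI_API_KEY missing or still a placeholder in $env_file" >&2
    return 1
  fi
}
```

Usage: `check_env && make up`.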
## 2. Start services

```bash
make up
docker compose ps
```

Expected services:

- `neo4j`
- `extraction-service`
- `semantic-service`
- `evaluation-interface`
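`docker compose ps` showing a container does not guarantee the HTTP endpoint inside it is ready yet. One way to gate the next step is a small poll loop; `wait_for_url` below is a hypothetical helper, not a script shipped with the repo:

```shell
# Hypothetical helper: poll a URL until curl gets a successful response,
# or give up after N attempts (default 30), one second apart.
wait_for_url() {
  local url="$1" tries="${2:-30}" i
  for i in $(seq 1 "$tries"); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    sleep 1
  done
  echo "timeout waiting for $url" >&2
  return 1
}
```

For example, `wait_for_url http://localhost:8001/databases` before moving on to step 3 (swap in your own port if you changed `EXTRACTION_API_PORT`).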
## 3. Verify base endpoints

If you changed ports in `.env`, replace `8001`/`8501` below with your configured ports.

```bash
curl -sS http://localhost:8001/databases | jq .
curl -sS http://localhost:8501/api/config | jq .
```

Default URLs:

- Platform UI: http://localhost:8501
- API docs: http://localhost:8001/docs
- DozerDB browser: http://localhost:7474
## 4. Ingest your raw data

Use the runtime ingest API to load text directly into a target graph database.

```bash
curl -sS -X POST http://localhost:8001/platform/ingest/raw \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "default",
    "target_database": "kgruntime",
    "records": [
      {"id": "raw_1", "content": "ACME acquired Beta in 2024."},
      {"id": "raw_2", "content": "Beta provides risk analytics to ACME."}
    ]
  }' | jq .
```

Success criteria:

- `status` is one of `success`, `success_with_fallback`, `partial_success`
- `records_processed >= 1`
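The criteria above can be checked mechanically with `jq`. The field names (`status`, `records_processed`) come from the criteria themselves; nothing else about the response shape is assumed, and `check_ingest` is a hypothetical helper:

```shell
# Hypothetical helper: read an ingest response on stdin and exit 0 only
# if it meets the success criteria (accepted status + at least 1 record).
check_ingest() {
  jq -e '
    (.status == "success"
     or .status == "success_with_fallback"
     or .status == "partial_success")
    and (.records_processed >= 1)
  ' >/dev/null
}
```

Pipe the `curl` command from above into it: `curl ... | check_ingest && echo "ingest OK"`.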
## 5. Ensure fulltext index for semantic mode

```bash
curl -sS -X POST http://localhost:8001/indexes/fulltext/ensure \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "default",
    "databases": ["kgruntime"],
    "index_name": "entity_fulltext",
    "create_if_missing": true
  }' | jq .
```

## 6. Ask semantic and debate questions

### 6.1 Semantic mode (API)
```bash
curl -sS -X POST http://localhost:8501/api/chat/send \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "qs_semantic_1",
    "message": "Show entities in kgruntime",
    "mode": "semantic",
    "workspace_id": "default",
    "databases": ["kgruntime"]
  }' | jq '{assistant_message, route: .runtime_payload.route}'
```

Success criteria:

- `assistant_message` is non-empty
- `runtime_payload.route` is `lpg`, `rdf`, or `hybrid`
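Because the `jq` projection in the command above reduces the response to `{assistant_message, route}`, the success criteria can be asserted directly on that projected output. `check_semantic` is a hypothetical helper, not part of the repo:

```shell
# Hypothetical helper: assert the projected semantic response has a
# non-empty answer and a recognized route (lpg / rdf / hybrid).
check_semantic() {
  jq -e '
    ((.assistant_message // "") | length > 0)
    and (.route == "lpg" or .route == "rdf" or .route == "hybrid")
  ' >/dev/null
}
```

Usage: append `| check_semantic && echo "semantic OK"` to the projected `curl` output.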
### 6.2 Debate mode (API)

```bash
curl -sS -X POST http://localhost:8501/api/chat/send \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "qs_debate_1",
    "message": "Compare known entities across databases",
    "mode": "debate",
    "workspace_id": "default"
  }' | jq '{assistant_message, debate_results: .runtime_payload.debate_results}'
```

Success criteria:

- `assistant_message` is non-empty
- `runtime_payload.debate_results` exists
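The same pattern works for the debate projection (`{assistant_message, debate_results}`); `check_debate` is likewise hypothetical:

```shell
# Hypothetical helper: assert the projected debate response has a
# non-empty answer and a debate_results payload.
check_debate() {
  jq -e '
    ((.assistant_message // "") | length > 0)
    and (.debate_results != null)
  ' >/dev/null
}
```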
## 7. Run strict integration smoke test

```bash
make e2e-smoke
```

What it checks end-to-end:

- raw ingest (`/platform/ingest/raw`)
- fulltext index (`/indexes/fulltext/ensure`)
- semantic chat (`/api/chat/send`, mode `semantic`)
- debate chat (`/api/chat/send`, mode `debate`)

If `OPENAI_API_KEY` is real, debate is checked in strict pass mode.

If you are running custom ports, execute the script with explicit overrides:

```bash
EXTRACTION_API_PORT=8002 CHAT_INTERFACE_PORT=8502 \
  bash scripts/integration/e2e_runtime_smoke.sh
```
## 8. Validate through UI

- Open http://localhost:8501
- Set `Ingest DB` to `kgruntime`
- Paste raw lines and click `Ingest Raw`
- Ask a question in `Semantic` mode
- Switch to `Debate` mode and ask the same question
- Compare `Trace` and result payloads
## 9. Next practical steps

- Run SHACL-like readiness: `POST /rules/assess`
- Build ontology hints offline: `python scripts/ontology/build_ontology_hints.py ...`
- Read extension guide: `docs/OPEN_SOURCE_PLAYBOOK.md`
- Read first-run walkthrough: `docs/TUTORIAL_FIRST_RUN.md`
## Troubleshooting

Extraction service logs:

```bash
docker compose logs --tail=200 extraction-service
```

Chat interface logs:

```bash
docker compose logs --tail=200 evaluation-interface
```

If ports conflict, set the port env vars in `.env` and rerun `make up`.
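To find out which default is colliding before editing `.env`, you can probe the ports directly. `port_in_use` is a hypothetical sketch that relies on bash's `/dev/tcp` redirection, so it needs bash rather than plain `sh`:

```shell
# Hypothetical helper: return 0 if something is already listening on the
# given local TCP port, using bash's built-in /dev/tcp redirection.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
```

For example: `port_in_use 7474 && echo "7474 busy: set NEO4J_HTTP_PORT in .env"`.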