Works with any MCP-compatible agent.
Your agent asks Shaped for context. Here's what comes back.
```sql
SELECT content, title
FROM
  text_search('SSO enterprise configuration', mode=vector),
  text_search('SSO enterprise', mode=lexical),
  similarity(user_id=$user_id)
WHERE doc_type = 'guide'
ORDER BY relevance(user, item)
LIMIT 10
```

Based on the SSO setup guide and SAML config docs:
1. Navigate to Admin → Org Settings → Authentication
2. Select your identity provider (Okta, Azure AD, Google)
3. Upload your SAML metadata XML
See docs/enterprise/sso-setup.md for the full walkthrough.

```python
import cohere
from openai import OpenAI
from pinecone import Pinecone

query = "SSO enterprise configuration"
client = OpenAI()

# 1. Embed the user query
res = client.embeddings.create(
    input=query, model="text-embedding-3-small")
query_vec = res.data[0].embedding

# 2. Vector search (200 noisy docs)
idx = Pinecone(api_key="API_KEY").Index("my-index")
raw_docs = idx.query(
    vector=query_vec, top_k=200,
    include_metadata=True)

# 3. Rerank with a static model
co = cohere.Client("API_KEY")
texts = [d.metadata["text"] for d in raw_docs["matches"]]
reranked = co.rerank(
    model="rerank-english-v3.0",
    query=query, documents=texts, top_n=10)

# 4. Stuff a massive context and hope
context = "\n".join(texts[r.index] for r in reranked.results)
prompt = f"Context: {context}\n\nAnswer: {query}"
```

SSO (Single Sign-On) allows users to authenticate using a single set of credentials. Enterprise accounts can configure SSO through the admin panel...
// Generic overview with no specific steps

10 results · 2,100 tokens · 38ms. Your agent gets exactly what it needs, nothing more.
200 results. 190 are noise. Your agent re-retrieves, burning tokens and time.
Senior engineers and new hires get the same 200 chunks.
Day 100 is no smarter than day 1.
Shaped returns only relevant results. 100% of the context matters.
Shaped gets it right the first time. Your agent doesn't re-retrieve.
When a user rephrases or gives a thumbs-down, Shaped learns from it: day 100 is dramatically better than day 1.
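The feedback loop described above can be pictured as a stream of outcome events folded into a training signal. The event schema and weighting below are a hypothetical illustration, not Shaped's actual format:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class FeedbackEvent:
    # Hypothetical outcome event: which result a user acted on, and how.
    user_id: str
    item_id: str
    label: str  # e.g. "click", "thumbs_down", "rephrase"

def training_signal(events):
    """Fold raw events into per-item scores a ranker could retrain on."""
    counts = Counter()
    for e in events:
        # Negative signals (thumbs-down, rephrase) push an item down.
        weight = -1 if e.label in ("thumbs_down", "rephrase") else 1
        counts[e.item_id] += weight
    return dict(counts)

events = [
    FeedbackEvent("u1", "doc_sso", "click"),
    FeedbackEvent("u2", "doc_sso", "click"),
    FeedbackEvent("u1", "doc_misc", "thumbs_down"),
]
# training_signal(events) → {"doc_sso": 2, "doc_misc": -1}
```

A real system would weight and decay these signals, but the shape of the loop is the same: outcomes in, updated ranking out.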
| Capability | DIY Retrieval Stack | Shaped |
|---|---|---|
| Retrieval | Single embedding space | Multi-retriever (vector + lexical + behavioral) |
| Ranking | Static reranker model | ML models that learn from outcomes |
| Personalization | None | User-aware ranking via user_id |
| Context size | 50K+ tokens (top-k dump) | 2,500 tokens (ranked LIMIT 10) |
| Infrastructure | Vector DB + Search engine + Reranker + Feature store + glue | 1 API call |
| Improves over time | No - static from day 1 | Yes - retrains on agent feedback |
| Query language | Multiple SDKs + custom code | ShapedQL (SQL-like, declarative) |
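The "1 API call" row can be made concrete with a minimal sketch. The endpoint URL, header names, and payload shape below are illustrative assumptions, not Shaped's documented API:

```python
import json
import urllib.request

SHAPED_QL = """
SELECT content, title
FROM
  text_search('SSO enterprise configuration', mode=vector),
  text_search('SSO enterprise', mode=lexical),
  similarity(user_id=$user_id)
WHERE doc_type = 'guide'
ORDER BY relevance(user, item)
LIMIT 10
"""

def build_request(api_key: str, user_id: str) -> urllib.request.Request:
    """Assemble the single retrieval call that replaces the DIY pipeline.

    The URL and auth header are placeholders for illustration.
    """
    body = json.dumps({"query": SHAPED_QL, "params": {"user_id": user_id}})
    return urllib.request.Request(
        "https://api.example.com/v1/query",  # placeholder endpoint
        data=body.encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("SHAPED_API_KEY", "user_123")
```

One declarative query carries retrieval, ranking, and personalization; there is no embedding step, vector DB round-trip, or reranker to wire together client-side.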
Keep your existing stack running. Zero risk.
A/B test on real queries. Measure everything head to head.
Swap one API call. Roll back anytime.
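A head-to-head comparison can be as simple as timing both paths on the same query set. `retrieve_diy` and `retrieve_shaped` below are stand-ins for your two pipelines, not real clients:

```python
import time
from statistics import median

def ab_compare(queries, retrieve_diy, retrieve_shaped):
    """Run both retrieval paths on identical queries; record latency and result count."""
    stats = {"diy": [], "shaped": []}
    for q in queries:
        for name, fn in (("diy", retrieve_diy), ("shaped", retrieve_shaped)):
            start = time.perf_counter()
            results = fn(q)
            stats[name].append(
                {"latency_ms": (time.perf_counter() - start) * 1000,
                 "n_results": len(results)})
    # Summarize per pipeline: median latency and average result count.
    return {name: {"median_ms": median(r["latency_ms"] for r in rows),
                   "avg_results": sum(r["n_results"] for r in rows) / len(rows)}
            for name, rows in stats.items()}
```

Feed it a sample of production queries and the token and latency gap falls straight out of the summary.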
"After assessing the landscape, Shaped became the obvious choice."
1. Connect a data source in the Shaped console. Postgres, S3, BigQuery, or any of 20+ connectors.
2. Configure an engine. Write a ShapedQL query. Test it in the playground.
3. Your agent connects via MCP or API. <50ms. Retrains on outcomes automatically.
Deploy in a morning. Run alongside your existing stack. See the difference immediately.
$100 free credits. No credit card required. Sign up, connect a data source, and query in under 10 minutes.
See pricing →