Quick Answer: Rank the Rows First, Then Rank Within Them
80% of what people watch on Netflix comes from recommendations, not search. But here’s the thing most teams building homepages get wrong: it’s not which movies you recommend that matters most. It’s which rows you show.
Think about how you actually use Netflix. You don’t evaluate 200 titles sequentially. You scan row labels: “Sci-Fi Thrillers… nah. Documentaries… maybe. Dark Comedies… yes.” You pick a row, then you pick a title. If the first 3-4 rows aren’t relevant, you don’t scroll further — you leave, or you search, which means the recommendation system has already failed.
Netflix found that row ordering has significantly more impact on engagement than within-row ranking. Getting the right titles in the wrong row doesn’t matter if the user never scrolls to that row.
Your homepage might show: Mind-Bending Sci-Fi → Critically Acclaimed Documentaries → Dark Comedies → Emotional Anime
Your partner’s might show: Romantic Period Dramas → True Crime Docuseries → Stand-Up Specials → Feel-Good Reality TV
Same catalog of 15,000+ titles. Completely different page structure. The rows themselves are ranked per user.
This is attribute ranking — a fundamentally different problem from item recommendation. Most recommendation systems answer “which items should this user see?” Netflix’s homepage answers a harder question first: “which categories should this user see, and in what order?”
Get that wrong, and your within-row personalization is invisible. Get it right, and the user feels like the entire page was built for them.
Two layers. One homepage. Here’s how to build it.
Key Takeaways:
- rank_attributes returns the best genre values for a specific user — not movies, but the categories themselves
- Two-step architecture — rank genres first (attribute ranking), then fill each carousel with personalized items (item ranking)
- AI Views generate micro-genres from plot descriptions — “Mind-Bending Sci-Fi” and “Slow-Burn Psychological Thrillers” instead of just “Sci-Fi”
- Score ensembles combine watch probability, completion rate, and freshness within each genre row
- Diversity reordering ensures titles don’t repeat across rows
- One engine, two query types — attribute ranking and item ranking from the same config, no separate systems
Time to read: 22 minutes | Includes: 9 code examples, 2 architecture diagrams, 1 comparison table
This is Part 3 of the “How to Build” series. Part 1 covers Spotify’s Discover Weekly (hybrid filtering). Part 2 covers Pinterest’s Related Pins (multimodal discovery). This article focuses on structural personalization — ranking the page layout itself, not just the items on it.
Table of Contents
- Why Structural Personalization Is a Different Problem
- Why Pure Item Ranking Fails for Homepages
- Why Static Genre Ordering Fails
- Part 1: The Traditional Approach (and Why It Hurts)
- Part 2: The Shaped Way — rank_attributes + AI Views + Score Ensembles
- Building the System End-to-End
- Score Ensemble Strategies
- Comparison: Traditional vs. Shaped
- FAQ
Why Structural Personalization Is a Different Problem
Most recommendation systems have one job: rank items for a user. The output is a flat list. But nobody browses Netflix as a flat list of 15,000 titles. The homepage is a grid — rows of categories, each containing a horizontal carousel.
The page has two dimensions of personalization:
Dimension 1: Which rows appear (and in what order) — this is attribute ranking
Dimension 2: Which titles appear in each row (and in what order) — this is item ranking
Dimension 1 is the problem most teams skip. They hardcode row order, sort by global popularity, or use the same genre ordering for every user. The result: a “personalized” homepage that’s only personalized inside rows no one scrolls to.
The Netflix Homepage Mental Model
Same catalog. Different rows. Different row order. Different titles in each row. Two dimensions of personalization working together.
Why Pure Item Ranking Fails for Homepages
The naive approach: run one big recommendation query, get the top 200 titles for a user, display them in a list.
This fails for three reasons.
No structure. Users don’t browse a flat list of 200 movies. They need categories to navigate. “I’m in a sci-fi mood” → scan for the sci-fi row. “Something light” → scan for comedies. Without genre rows, users face decision paralysis — research shows users abandon browsing after evaluating 10-15 items in a flat list.
Popularity collapse. A flat top-200 list is dominated by broadly popular titles. A user who watches mostly Korean dramas and obscure documentaries will see those interests reflected at positions 80+ — below the scroll fold where nobody looks.
No serendipity scaffolding. Genre rows serve a discovery function. Row 4 might be a genre the user hasn’t explored yet but might like. A flat list can’t communicate “here’s a category you haven’t tried” — it’s just more movies.
Why Static Genre Ordering Fails
The next approach: create genre rows, but use the same ordering for everyone — sorted by global popularity or editorial curation.
Everyone gets “Action” first. If you sort by global watch count, the same 5-6 genres dominate every homepage. Users with niche tastes (art house, foreign cinema, classic anime) never see their preferred categories without scrolling past rows they don’t care about.
No taste signal. A user who has watched 30 documentaries and 2 action movies still sees the Action row before Documentaries — because Action is globally more popular.
Stale layout. Static ordering doesn’t adapt. A user who recently binged a Korean drama series should see “Korean Dramas” surface to the top of their homepage now, not after an editorial team manually updates the row order.
| Approach | What Happens | Impact |
|---|---|---|
| Flat item list | No structure, popularity collapse | Users abandon after 10-15 items |
| Static genre order | Same rows for everyone | Niche interests buried |
| Popularity-sorted genres | Top 5 genres dominate | 80% of users see the same layout |
| Attribute ranking | Rows adapt per user | Every user gets their genres first |
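The failure mode in the table above is easy to see on toy numbers (all data hypothetical): ordering rows by global watch counts buries a niche user's genres, while ordering by that user's own counts surfaces them first.

```python
# Toy illustration (not Shaped's implementation): the same genre list
# ordered by global popularity vs. by one user's own watch counts.
from collections import Counter

GLOBAL_WATCHES = Counter({
    "Action": 9_000, "Comedy": 7_500, "Drama": 6_000,
    "Documentary": 1_200, "Korean Dramas": 800,
})

# The niche user from above: 30 documentaries, 2 action movies.
user_watches = Counter({"Documentary": 30, "Korean Dramas": 12, "Action": 2})

static_order = [g for g, _ in GLOBAL_WATCHES.most_common()]
personal_order = sorted(
    GLOBAL_WATCHES,
    key=lambda g: user_watches.get(g, 0),
    reverse=True,
)

print(static_order[:3])    # ['Action', 'Comedy', 'Drama'] — user's genres buried
print(personal_order[:3])  # ['Documentary', 'Korean Dramas', 'Action']
```

Static ordering puts this user's favorite category fourth; per-user ordering puts it first.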
Part 1: The Traditional Approach (and Why It Hurts)
The traditional approach to building a personalized homepage requires orchestrating multiple separate systems.
Architecture:
Here’s what that looks like in practice:
# homepage_traditional.py — The orchestration nightmare
from collections import Counter, defaultdict

import numpy as np

# get_watch_history, recency_weight, and item_ranker are separate
# jobs/services you build and maintain yourself in this approach.

def build_homepage(user_id, num_rows=10, items_per_row=15):
    # Step 1: Compute genre affinities (runs as nightly batch job)
    watch_history = get_watch_history(user_id)
    genre_counts = Counter()
    genre_completion = defaultdict(list)
    for watch in watch_history:
        for genre in watch['genres']:
            genre_counts[genre] += 1
            genre_completion[genre].append(watch['completion_rate'])

    # Hand-tuned affinity formula: frozen weights, never learns
    genre_scores = {}
    for genre in genre_counts:
        genre_scores[genre] = (
            0.6 * genre_counts[genre] / len(watch_history)
            + 0.3 * np.mean(genre_completion[genre])
            + 0.1 * recency_weight(genre, watch_history)
        )

    # Step 2: Rank genres
    top_genres = sorted(genre_scores, key=genre_scores.get, reverse=True)[:num_rows]

    # Step 3: For each genre, hit a SEPARATE ranking service
    homepage = []
    seen_items = set()
    for genre in top_genres:
        items = item_ranker.rank(user_id=user_id, filter_genre=genre,
                                 exclude_items=seen_items, limit=items_per_row)
        homepage.append({'genre': genre, 'items': items})
        seen_items.update(item['id'] for item in items)
    return homepage
This is conceptually simple but operationally painful:
| What You Maintain | What It Costs |
|---|---|
| Genre affinity batch job | Nightly ETL, stale for up to 24 hours |
| Hand-tuned affinity formula | 0.6 × frequency + 0.3 × completion + 0.1 × recency — doesn’t learn |
| Flat genre taxonomy | “Sci-Fi” collapses Arrival and Transformers into the same bucket |
| Separate item ranking service | N API calls per homepage, latency multiplied |
| Cross-row deduplication | Application-side logic across three client platforms |
The real killer: the genre affinity formula doesn’t learn. It’s a hand-tuned weighted average. It can’t capture that a user who watches Arrival and Ex Machina cares about cerebral sci-fi specifically, not sci-fi in general. To capture that, you’d need micro-genre labels that don’t exist in your metadata — and a model trained on those labels.
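A toy demonstration of that collapse (all data hypothetical): under a coarse taxonomy, the hand-tuned formula assigns the same "Sci-Fi" affinity whether the history is cerebral sci-fi or blockbuster marathons, so both users get an identical row.

```python
# Toy data, hypothetical: two users with opposite sci-fi tastes.
cerebral_fan = [
    {"title": "Arrival", "genres": ["Sci-Fi"], "completion_rate": 0.98},
    {"title": "Ex Machina", "genres": ["Sci-Fi"], "completion_rate": 0.95},
]
blockbuster_fan = [
    {"title": "Transformers", "genres": ["Sci-Fi"], "completion_rate": 0.98},
    {"title": "Star Wars", "genres": ["Sci-Fi"], "completion_rate": 0.95},
]

def scifi_affinity(history):
    # Same shape as the hand-tuned formula above (recency term omitted).
    freq = sum("Sci-Fi" in w["genres"] for w in history) / len(history)
    completion = sum(w["completion_rate"] for w in history) / len(history)
    return 0.6 * freq + 0.3 * completion

# Identical scores: the coarse label erases the taste difference.
print(scifi_affinity(cerebral_fan) == scifi_affinity(blockbuster_fan))  # True
```

Both users score identically on "Sci-Fi", so the formula has no way to rank Arrival above Transformers for one of them.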
Part 2: The Shaped Way — rank_attributes + AI Views + Score Ensembles
Shaped solves both dimensions of homepage personalization in one engine: rank_attributes for the row ordering, score ensembles for the item ranking within rows, and AI Views to generate micro-genre labels that make attribute ranking actually useful.
Architecture:
Three key differences from the traditional approach:
- AI Views generate micro-genres. Instead of ranking coarse labels like “Sci-Fi,” Shaped’s AI View analyzes plot descriptions and generates rich category tags — “Mind-Bending Sci-Fi,” “Slow-Burn Psychological Thrillers,” “Feel-Good Competition Reality.” These are specific enough to capture actual taste.
- rank_attributes replaces the batch job. Instead of a nightly ETL computing genre affinities with a hand-tuned formula, rank_attributes uses trained embeddings to rank genre values in real-time. No batch lag. No manual formula.
- One engine serves both queries. The same engine that ranks genres also ranks items within genres. No separate ranking service. No N cross-service calls.
Step 1: AI Views — Generating Micro-Genres That Capture Taste
Netflix’s real power isn’t “Action” and “Comedy.” It’s the micro-genres: “Gritty Crime Dramas Based on Real Life,” “Visually Stunning Sci-Fi,” “Heartwarming Animated Adventures.” These labels are specific enough to distinguish taste — and they don’t exist in standard metadata.
AI Views generate them automatically.
# views/title_enrichment.yaml
version: v2
name: title_enrichment
view_type: AI_ENRICHMENT
source_table: titles
source_columns:
  - item_id
  - title
  - plot_description
  - cast
  - director
source_columns_in_output:
  - item_id
  - title
enriched_output_columns:
  - micro_genres
prompt: |
  Given this title's metadata, generate 2-3 micro-genre labels that capture
  the specific flavor of this content. Go beyond broad genres — focus on tone,
  pacing, visual style, emotional register, and subject specificity.
  Good examples: "Mind-Bending Sci-Fi", "Slow-Burn Psychological Thrillers",
  "Heartwarming Animated Adventures", "Gritty True Crime Docuseries",
  "Witty Ensemble Comedies"
  Return as a comma-separated list.
Example enrichment output:
| Title | Micro-genres |
|---|---|
| Arrival | Mind-Bending Sci-Fi, Cerebral First Contact Drama |
| Parasite | Dark Social Satire, Suspenseful Class Commentary |
| Jiro Dreams of Sushi | Meditative Craft Documentary, Japanese Food Cinema |
| Nanette | Boundary-Pushing Stand-Up, Comedy as Social Commentary |
| Bridgerton | Lavish Romantic Period Drama, Escapist Historical Romance |
Without the AI View, you’re ranking “Sci-Fi” vs. “Documentary.” With it, you’re ranking “Mind-Bending Sci-Fi” vs. “Meditative Craft Documentary.” The second set captures taste. The first doesn’t.
For more on AI View configuration, see the AI enrichment documentation.
Step 2: Configure the Engine
# engines/homepage.yaml
version: v2
name: homepage
data:
  item_table:
    name: titles
    type: table
  user_table:
    name: users
    type: table
  interaction_table:
    name: watch_events
    type: table
schema_override:
  item:
    id: item_id
    features:
      - name: title
        type: Text
      - name: plot_description
        type: Text
      - name: micro_genres
        type: Set[TextCategory]
      - name: release_year
        type: Numerical
      - name: rating
        type: Numerical
    created_at: release_date
  interaction:
    id: event_id
    item_id: item_id
    user_id: user_id
    label: watch_completion
    created_at: watched_at
index:
  embeddings:
    - name: content_embedding
      encoder:
        type: hugging_face
        model_name: sentence-transformers/modernbert
        batch_size: 256
      item_fields:
        - title
        - title_enrichment.micro_genres
    - name: elsa_embedding
      encoder:
        type: trained_model
        model_ref: elsa_collab
training:
  models:
    - name: elsa_collab
      policy_type: elsa
      strategy: early_stopping
Shaped trains ELSA on watch history for collaborative signals and generates text embeddings on the AI-enriched micro-genre labels for content-based attribute ranking. Both are indexed for fast retrieval.
Why ELSA? Streaming platforms have massive implicit feedback data (watches) but minimal explicit signals (ratings are rare). ELSA is designed for large-scale implicit feedback — it learns item-item relationships from co-watch patterns without requiring item features. For platforms with rich item metadata where you want the collaborative model to incorporate content features, Two-Tower is the better choice.
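The signal an implicit-feedback model like ELSA learns from can be sketched with a toy co-watch count (hypothetical data; this is the input pattern, not the ELSA algorithm itself): items watched by the same users become related, with no item features required.

```python
# Toy co-watch sketch (hypothetical data; not the actual ELSA model).
from collections import defaultdict
from itertools import combinations

watch_sessions = {
    "u1": ["Arrival", "Ex Machina", "Parasite"],
    "u2": ["Arrival", "Parasite", "Sorry to Bother You"],
    "u3": ["Transformers", "John Wick"],
}

# Count how often each pair of titles is watched by the same user.
co_watch = defaultdict(int)
for titles in watch_sessions.values():
    for a, b in combinations(sorted(set(titles)), 2):
        co_watch[(a, b)] += 1

# "Arrival" and "Parasite" co-occur twice; no shared metadata needed.
print(co_watch[("Arrival", "Parasite")])  # 2
```

A trained model generalizes far beyond raw pair counts, but the underlying signal is the same: co-consumption, not content metadata.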
Step 3: Two Queries, One Homepage
Query 1: Which rows should this user see?
# app.py
import requests

SHAPED_API_KEY = "your-api-key"

def get_personalized_rows(user_id: str, num_rows: int = 10):
    """
    Attribute ranking: returns the best micro-genre values for this user.
    """
    response = requests.post(
        "https://api.shaped.ai/v2/engines/homepage/query",
        headers={"x-api-key": SHAPED_API_KEY},
        json={
            "query": {
                "type": "rank_attributes",
                "input_attribute": "micro_genres",
                "input_user_id": "$user_id",
                "embeddings": "content_embedding",
                "limit": num_rows
            },
            "parameters": {
                "user_id": user_id
            }
        }
    )
    return response.json()['attributes']
Example response:
{
  "attributes": [
    { "value": "Mind-Bending Sci-Fi", "score": 0.92 },
    { "value": "Meditative Craft Documentary", "score": 0.83 },
    { "value": "Dark Social Satire", "score": 0.79 },
    { "value": "Emotional Anime", "score": 0.74 },
    { "value": "Slow-Burn Psychological Thrillers", "score": 0.71 },
    { "value": "Witty Ensemble Comedies", "score": 0.68 }
  ]
}
Look at what just happened. This isn’t a list of movies. It’s a list of row labels — the skeleton of this user’s entire homepage, returned in one API call.
“Mind-Bending Sci-Fi” at 0.92 — not “Sci-Fi,” which would lump Arrival and Transformers together. The AI View micro-genre and the trained embedding worked together to surface the specific flavor of sci-fi this user actually watches.
“Dark Social Satire” at 0.79 — a genre this user has never searched for. But ELSA learned from co-watch patterns that users who watch Arrival and Ex Machina and Jiro Dreams of Sushi also tend to watch Parasite and Sorry to Bother You. The model found a taste cluster the user hasn’t consciously identified yet. That’s row 3 on their homepage — a discovery row that feels like Netflix read their mind.
“Witty Ensemble Comedies” at 0.68 — ranked last. This user watches some comedies, but it’s not their primary mode. On a static homepage, “Comedy” might be row 2 (it’s globally popular). Here, it’s row 6. The genres this user actually cares about get the top slots.
Query 2: Fill each row with personalized titles
# app.py
def get_row_items(user_id: str, genre: str, exclude_ids: list = None, limit: int = 15):
    """
    Personalized item ranking within one genre carousel.
    """
    response = requests.post(
        "https://api.shaped.ai/v2/engines/homepage/query",
        headers={"x-api-key": SHAPED_API_KEY},
        json={
            "query": """
                SELECT *
                FROM column_order(columns='_derived_popular_rank ASC', limit=500)
                WHERE micro_genres CONTAINS $genre
                  AND item_id NOT IN $exclude_ids
                ORDER BY score(
                    expression='
                        0.5 * click_through_rate
                        + 0.3 * watch_completion
                        + 0.2 / (1.0 + item._derived_popular_rank)
                    ',
                    input_user_id=$user_id,
                    input_interactions_item_ids=$interaction_item_ids
                )
                LIMIT $limit
            """,
            "parameters": {
                "user_id": user_id,
                "genre": genre,
                "exclude_ids": exclude_ids or [],
                "limit": limit
            }
        }
    )
    return response.json()['results']
Assemble the full homepage:
# app.py
def build_homepage(user_id: str, num_rows: int = 8, items_per_row: int = 15):
    # Step 1: Which rows?
    ranked_genres = get_personalized_rows(user_id, num_rows=num_rows)

    # Step 2: Fill each row, deduplicating across rows
    homepage = []
    seen_ids = []
    for genre_attr in ranked_genres:
        genre = genre_attr['value']
        items = get_row_items(
            user_id=user_id,
            genre=genre,
            exclude_ids=seen_ids,
            limit=items_per_row
        )
        homepage.append({
            'genre': genre,
            'score': genre_attr['score'],
            'items': items
        })
        seen_ids.extend([item['item_id'] for item in items])
    return homepage
The exclude_ids pattern ensures a title that fits multiple micro-genres (Parasite is both “Dark Social Satire” and “Suspenseful Class Commentary”) only appears in the most relevant row.
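The effect of that pattern can be shown with a stubbed ranker (hypothetical catalog data; stub_row_items stands in for the real per-row query): a title tagged with two micro-genres appears only in the first row it qualifies for.

```python
# Sketch of cross-row dedup with a stubbed ranker (hypothetical data).
CATALOG = {
    "Dark Social Satire": ["parasite", "the_lobster"],
    "Suspenseful Class Commentary": ["parasite", "snowpiercer"],
}

def stub_row_items(genre, exclude_ids):
    # Stand-in for the per-row query: filter out already-shown titles.
    return [t for t in CATALOG[genre] if t not in exclude_ids]

seen = []
rows = {}
for genre in ["Dark Social Satire", "Suspenseful Class Commentary"]:
    items = stub_row_items(genre, seen)
    rows[genre] = items
    seen.extend(items)

# parasite was already shown in the higher-ranked row, so only
# snowpiercer survives in the second carousel.
print(rows["Suspenseful Class Commentary"])  # ['snowpiercer']
```

Because rows are filled in ranked order, each title lands in its most relevant (highest-scored) row.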
The Difference in Action
Here’s what each approach produces for a user who watches cerebral sci-fi and documentaries.
Traditional (coarse genres, hand-tuned affinity):
| Row | Genre | Top titles | Quality |
|---|---|---|---|
| 1 | Sci-Fi | Transformers, Star Wars, Arrival, The Matrix | Mixed — blockbusters dominate |
| 2 | Documentary | Tiger King, Our Planet, Senna | Mixed — viral hits dominate |
| 3 | Drama | Shawshank Redemption, Forrest Gump | Generic — no taste signal |
| 4 | Action | John Wick, Mad Max, Mission Impossible | Wrong genre entirely |
Coarse genres. Popularity-dominated. Row 4 is irrelevant.
Shaped (micro-genres, trained attribute ranking, score ensembles):
| Row | Genre | Top titles | Quality |
|---|---|---|---|
| 1 | Mind-Bending Sci-Fi | Arrival, Ex Machina, Annihilation, Stalker | All cerebral sci-fi |
| 2 | Meditative Craft Documentary | Jiro Dreams of Sushi, Free Solo, Senna | Specific doc subtype |
| 3 | Dark Social Satire | Parasite, Sorry to Bother You, The Lobster | Discovery genre — ELSA found the pattern |
| 4 | Emotional Anime | Spirited Away, Your Name, Grave of Fireflies | Niche interest surfaced |
Micro-genres. Personalized ranking. Row 3 is a discovery row — the user never searched for “dark social satire” but ELSA learned the co-watch pattern.
That’s the difference. Coarse genres + popularity gives every user the same homepage with different thumbnails. Micro-genres + attribute ranking + score ensembles gives every user a homepage that feels like it was built for them.
Building the System End-to-End
Full setup in four steps
1. Connect your data
# tables/titles.yaml
version: v2
name: titles
connector:
  type: postgres
  connection_string: $DATABASE_URL
  table: titles
schema:
  - name: item_id
    type: STRING
  - name: title
    type: STRING
  - name: plot_description
    type: STRING
  - name: cast
    type: STRING
  - name: director
    type: STRING
  - name: genres
    type: STRING
  - name: release_date
    type: TIMESTAMP
  - name: rating
    type: FLOAT
# tables/watch_events.yaml
version: v2
name: watch_events
connector:
  type: postgres
  connection_string: $DATABASE_URL
  table: watch_events
schema:
  - name: event_id
    type: STRING
  - name: user_id
    type: STRING
  - name: item_id
    type: STRING
  - name: watch_completion
    type: FLOAT
  - name: watched_at
    type: TIMESTAMP
2. Create the AI View and engine
shaped create-view --file views/title_enrichment.yaml
shaped create-engine --file engines/homepage.yaml
3. Define saved queries
# Add to engines/homepage.yaml
queries:
  get_user_rows:
    query:
      type: rank_attributes
      input_attribute: micro_genres
      input_user_id: $parameters.user_id
      embeddings: content_embedding
      limit: 10
    parameters:
      user_id:
        default: null
  get_row_items:
    query: |
      SELECT *
      FROM column_order(columns='_derived_popular_rank ASC', limit=500)
      WHERE micro_genres CONTAINS $genre
        AND item_id NOT IN $exclude_ids
      ORDER BY score(
        expression='0.5 * click_through_rate + 0.3 * watch_completion + 0.2 / (1.0 + item._derived_popular_rank)',
        input_user_id='$user_id',
        input_interactions_item_ids='$interaction_item_ids'
      )
      LIMIT 15
    parameters:
      user_id:
        default: null
      genre:
        default: null
      exclude_ids:
        default: []
Saved queries ensure every client (mobile, web, TV) uses the same scoring logic. Changing blend weights happens in the config, not across three codebases.
4. Build the homepage
homepage = build_homepage(user_id="user_8829", num_rows=8, items_per_row=15)
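Since the per-row item queries are independent, they can also be fetched concurrently. A sketch of that variant (the two stubs are hypothetical stand-ins for the get_personalized_rows and get_row_items helpers above; note that a parallel fetch can't grow exclude_ids sequentially, so this version dedups client-side afterward):

```python
# Sketch: fetch carousels concurrently, then dedup across rows afterward.
# The two stubs stand in for get_personalized_rows / get_row_items above.
from concurrent.futures import ThreadPoolExecutor

def fetch_rows_stub(user_id, num_rows):
    return [{"value": "Mind-Bending Sci-Fi", "score": 0.92},
            {"value": "Dark Social Satire", "score": 0.79}]

def fetch_row_items_stub(user_id, genre, limit):
    shared = {"item_id": "parasite"}  # hypothetical title fitting both genres
    return [shared, {"item_id": genre.lower().replace(" ", "_")}][:limit]

def build_homepage_parallel(user_id, num_rows=8, items_per_row=15):
    ranked = fetch_rows_stub(user_id, num_rows)

    # Per-row queries are independent, so they can run concurrently...
    with ThreadPoolExecutor(max_workers=max(1, len(ranked))) as pool:
        results = list(pool.map(
            lambda g: fetch_row_items_stub(user_id, g["value"], items_per_row),
            ranked))

    # ...but exclude_ids can no longer grow sequentially, so dedup
    # client-side, keeping each title in its highest-ranked row.
    seen, homepage = set(), []
    for attr, items in zip(ranked, results):
        kept = [i for i in items if i["item_id"] not in seen]
        seen.update(i["item_id"] for i in kept)
        homepage.append({"genre": attr["value"], "items": kept})
    return homepage

hp = build_homepage_parallel("user_8829", num_rows=2, items_per_row=15)
print([r["items"] for r in hp])
```

The trade-off: parallel fetches cut latency roughly to the slowest single row, at the cost of over-fetching a few titles that dedup then drops.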
Score Ensemble Strategies
ShapedQL score expressions let you tune ranking within each genre row without retraining or redeploying.
Strategy 1: Engagement-optimized
Balance click-through with watch completion to avoid clickbait:
ORDER BY score(
    expression='
        0.4 * click_through_rate
        + 0.4 * watch_completion
        + 0.2 / (1.0 + item._derived_popular_rank)
    ',
    input_user_id=$user_id,
    input_interactions_item_ids=$interaction_item_ids
)
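The weights are easiest to read on toy numbers (hypothetical feature values): a clickbait-y title with high click-through but poor completion loses to a title people actually finish.

```python
# Hypothetical feature values for two candidate titles.
clickbait = {"click_through_rate": 0.9, "watch_completion": 0.2, "pop_rank": 3}
slow_burn = {"click_through_rate": 0.5, "watch_completion": 0.9, "pop_rank": 40}

def engagement_score(item):
    # Mirrors the 0.4 / 0.4 / 0.2 blend in the expression above.
    return (0.4 * item["click_through_rate"]
            + 0.4 * item["watch_completion"]
            + 0.2 / (1.0 + item["pop_rank"]))

# Equal CTR/completion weights penalize titles people click but abandon.
print(engagement_score(clickbait) < engagement_score(slow_burn))  # True
```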
Strategy 2: Freshness boost
Surface new releases alongside catalog titles:
ORDER BY score(
    expression='
        0.4 * click_through_rate
        + 0.3 * watch_completion
        + 0.15 / (1.0 + item._derived_popular_rank)
        + 0.15 / (1.0 + days_since_release)
    ',
    input_user_id=$user_id,
    input_interactions_item_ids=$interaction_item_ids
)
Strategy 3: Discovery rows with exploration
For rows lower on the homepage (genres the user hasn’t explored), inject variety:
SELECT *
FROM column_order(columns='_derived_popular_rank ASC', limit=500)
WHERE micro_genres CONTAINS $genre
ORDER BY score(
    expression='
        0.3 * click_through_rate
        + 0.3 * watch_completion
        + 0.2 / (1.0 + item._derived_popular_rank)
        + 0.2 / (1.0 + item._derived_chronological_rank)
    ',
    input_user_id=$user_id,
    input_interactions_item_ids=$interaction_item_ids
)
REORDER BY exploration(diversity_lookback_window=50)
LIMIT 15
REORDER BY exploration() injects titles from outside the candidate set — ideal for discovery rows where you want to surprise the user. For more on exploration and diversity, see the ranking architectures series.
Strategy 4: Context-aware — time of day
Lighter content in the morning, longer-form in the evening:
ORDER BY score(
    expression='
        CASE
            WHEN hour_of_day < 12 THEN
                0.5 * click_through_rate
                + 0.2 * watch_completion
                + 0.3 / (1.0 + item._derived_popular_rank)
            ELSE
                0.3 * click_through_rate
                + 0.5 * watch_completion
                + 0.2 / (1.0 + item._derived_popular_rank)
        END
    ',
    input_user_id=$user_id,
    input_interactions_item_ids=$interaction_item_ids
)
Comparison: Traditional vs. Shaped
A nightly batch job + hand-tuned formula + N separate ranking calls → two query types against one engine. Same homepage. Dramatically less infrastructure.
| Component | Traditional Approach | Shaped Approach |
|---|---|---|
| Genre taxonomy | Flat: “Sci-Fi,” “Comedy,” “Drama” | AI Views: “Mind-Bending Sci-Fi,” “Dark Social Satire” |
| Genre ranking | Nightly batch job, hand-tuned formula | rank_attributes in real-time with trained embeddings |
| Adapts to behavior | Up to 24h stale | Real-time |
| Item ranking per row | Separate service, N API calls | Same engine, ShapedQL score expressions |
| Cross-row dedup | Application-side logic | NOT IN $exclude_ids in the query |
| Scoring logic | Hardcoded per platform | Saved queries — one config, all clients |
| Infrastructure | Batch ETL + affinity store + item ranker + dedup | 1 engine, 2 query types |
| Lines of code | ~400 (batch + orchestration + ranking + dedup) | ~60 (YAML config + 2 queries) |
| Bottom line | A nightly batch job and a prayer the formula is right | Real-time attribute ranking with trained models |
If you’re building a recommendation system and want to understand how Shaped’s four-stage pipeline works under the hood, the Anatomy of Modern Ranking Architectures series covers this in depth.
FAQ
Q: What exactly does rank_attributes return?
A: A scored, sorted list of attribute values — not items. If you rank micro_genres for a user, you get back genre labels with relevance scores: [{"value": "Mind-Bending Sci-Fi", "score": 0.92}, ...]. You then use those values to query items within each genre. See the attribute ranking docs for the full API reference.
Q: How does rank_attributes work under the hood?
A: It uses the embedding you specify (e.g., content_embedding) to compute the user’s affinity toward each attribute value. It maps the user’s interaction history into the embedding space and ranks attribute values by proximity to the user’s learned preferences. No hand-tuned formula — it’s a trained model.
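A minimal geometric sketch of that idea (toy 2-D vectors, not Shaped's internals): average the watched-item embeddings into a user vector, then rank attribute values by cosine similarity to it.

```python
# Conceptual sketch only — hypothetical 2-D embeddings, not Shaped's model.
import numpy as np

# Axis 0 roughly "cerebral", axis 1 roughly "spectacle" (made-up vectors).
item_embs = {"arrival": np.array([0.9, 0.1]),
             "ex_machina": np.array([0.8, 0.2])}
attr_embs = {"Mind-Bending Sci-Fi": np.array([1.0, 0.0]),
             "Space Opera Blockbusters": np.array([0.1, 1.0])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

user_vec = np.mean(list(item_embs.values()), axis=0)  # history centroid
ranked = sorted(attr_embs,
                key=lambda g: cosine(user_vec, attr_embs[g]),
                reverse=True)
print(ranked[0])  # Mind-Bending Sci-Fi
```

A trained system replaces the naive centroid with learned user and attribute representations, but the ranking principle, proximity in a shared embedding space, is the same.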
Q: Can I rank standard genres instead of micro-genres?
A: Yes — rank_attributes works on any column. But coarse genres collapse different tastes into one bucket. A user who loves Arrival doesn’t want Transformers — yet both are “Sci-Fi.” Micro-genres distinguish “Mind-Bending Sci-Fi” from “Space Opera Blockbusters.” The AI View generates these from plot descriptions automatically.
Q: How many API calls per homepage?
A: One rank_attributes call + one item ranking call per row. For 10 rows, that’s 11 queries — all against the same engine endpoint. You can parallelize the per-row queries since they’re independent.
Q: Does this work for e-commerce?
A: Yes. The pattern applies to any catalog with categories. E-commerce: rank departments (“Running Shoes,” “Trail Gear,” “Recovery Equipment”) per user, then fill each section with personalized products. Marketplace: rank seller categories. Content platform: rank topic tags. input_attribute can be any column. See the faceted filtering guide for more patterns.
Q: How do I handle new users?
A: For cold-start users, rank_attributes falls back to popularity-weighted ranking. As watch history accumulates, ranking becomes personalized. You can also show a mix of popularity rows and editorial “Start Here” rows for new users, transitioning to fully personalized rows over time.
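One way to sketch that transition application-side (entirely hypothetical policy; the row names and the 20-watch warmup threshold are made up for illustration): reserve leading slots for editorial rows until the user has enough history.

```python
# Hypothetical cold-start policy sketch; names and threshold are made up.
EDITORIAL_ROWS = ["Start Here: Critically Acclaimed", "Trending Now"]

def choose_row_sources(history_len, num_rows=8, warmup=20):
    # Below `warmup` watches, reserve leading slots for editorial rows,
    # shrinking linearly as history accumulates.
    n_editorial = max(0, round(num_rows * (1 - min(history_len, warmup) / warmup)))
    n_editorial = min(n_editorial, len(EDITORIAL_ROWS))
    return EDITORIAL_ROWS[:n_editorial], num_rows - n_editorial

print(choose_row_sources(0))   # both editorial rows lead, 6 personalized
print(choose_row_sources(40))  # no editorial rows, all 8 personalized
```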
Q: Can I mix attribute-ranked rows with other row types?
A: Absolutely. Add a “Because You Watched X” row using similarity(embedding_ref='elsa_embedding', input_item_id=$last_watched). Add a “Trending Now” row using column_order(columns='_derived_popular_rank ASC'). The homepage is a list of queries — some attribute ranking, some similarity, some trending. All from the same engine.
Q: What about Netflix’s actual implementation?
A: Netflix uses a multi-stage system with bandits for row selection, separate models for within-row ranking, and extensive A/B testing infrastructure. Their micro-genre taxonomy (“altgenres”) was originally human-curated but has moved toward automated generation. The architecture reflects over a decade of iteration. Shaped lets you build a production-grade version of this two-dimensional personalization without the multi-year infrastructure investment.
Conclusion
Remember the stakes from the top of this article: 80% of what people watch comes from recommendations, and row ordering matters more than within-row ranking. A user who scrolls past 4 irrelevant genre rows to find their content has already half-abandoned your platform. The within-row personalization you spent months building is invisible if the rows themselves are wrong.
That’s why attribute ranking isn’t a nice-to-have. It’s the foundation. The user who watches Arrival, Ex Machina, and Jiro Dreams of Sushi needs to see “Mind-Bending Sci-Fi” and “Meditative Craft Documentary” at the top of their homepage — not “Action” and “Comedy” because those are globally popular. And they need a row 3 surprise: “Dark Social Satire,” a genre they didn’t know they loved, surfaced by a model that found the co-watch pattern.
The traditional approach hacks this with a nightly batch job and a hand-tuned formula — 0.6 × frequency + 0.3 × completion + 0.1 × recency. It doesn’t learn. It uses coarse genres that collapse different tastes into one bucket. It’s stale for 24 hours. And it runs a separate ranking service for each row.
Shaped replaces all of it. AI Views generate micro-genres that capture actual taste — “Mind-Bending Sci-Fi,” not just “Sci-Fi.” rank_attributes ranks those micro-genres per user in real-time using trained embeddings — one API call returns the entire homepage skeleton. Score ensembles personalize titles within each row. Saved queries ensure mobile, web, and TV all use the same logic. One engine, two query types, ship in a day.
If you’re building a flat recommendation feed, start with the Discover Weekly playbook. If you’re building visual discovery, see the Related Pins playbook. For personalized homepages with category rows, this is your playbook.
Ready to build a personalized homepage? Sign up for Shaped and get $100 in free credits.
Want us to walk you through it?
Book a 30-min session with an engineer who can apply this two-dimensional personalization to your specific catalog and homepage.