Increased D1 Retention: 20%
Increased D6 Retention: 79%
Experiments in 12 weeks: 7
Goals
Atmosfy aimed to boost user retention and fuel long-term growth by improving how people discover dining, nightlife, travel, and local experiences.
Short-term, the team focused on increasing D1-D7 retention by launching a personalized recommendation feed that surfaces the most relevant experiences, keeping users engaged beyond their first week.
Long-term, Atmosfy’s goal is to drive sustained global growth by building smarter, AI-powered discovery flows that dynamically adapt to user preferences, behaviors, and city context—ensuring every experience feels personal and timely.
The challenge
Atmosfy’s existing feed was logic-based. In logic-based ranking, content is ordered by predefined rules rather than machine learning models, using fixed criteria and manual configuration to determine how videos rank.
- Static Algorithms: Algorithms that follow a fixed formula or logic, such as prioritizing content by recency or popularity.
- Manual Adjustments: Ranking based on manually inputted factors or business decisions, such as promotions or seasonal content.
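A rule-based ranker like the one described above can be sketched in a few lines. This is an illustrative example only (the field names, the recency decay, and the promotion rule are assumptions, not Atmosfy's actual formula):

```python
from datetime import datetime, timezone

def logic_based_rank(videos, promoted_ids=frozenset()):
    """Order videos by fixed rules: manual promotions first,
    then a static blend of popularity and recency."""
    now = datetime.now(timezone.utc)

    def score(video):
        # Manual adjustment: promoted content always outranks the rest.
        promoted = 1 if video["id"] in promoted_ids else 0
        # Static formula: view count decayed by age in days.
        age_days = (now - video["published_at"]).total_seconds() / 86400
        return (promoted, video["views"] / (1 + age_days))

    return sorted(videos, key=score, reverse=True)
```

The limitation is visible in the code: nothing about the viewer appears anywhere in `score`, so every user in a city sees the same ordering.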
Atmosfy’s goal was to power its existing global and local discovery feeds with dynamic, tailored content. The belief was that more relevant content, even in a discovery context, would resonate better with users, leading to increased engagement and a higher likelihood of return visits.
To achieve this, the primary objective was to improve user-to-video ranking: ensuring that the videos shown to users in both the global and local feeds were accurately aligned with individual preferences and past interactions.
The technical problems were significant. The team needed real-time interaction ingestion to capture user behavior promptly, and had to solve cold-start problems so that new users and new content could be integrated effectively into the personalized ranking. All of this pointed to the need for a sophisticated recommendation system.
Logic-based system
Predefined rules that are static and require manual adjustments
Shaped
Unified recsys with a real-time feedback loop
Our solution
Phase 1: Building the Foundation for Personalized Recommendations
A. Developing a Base Recommender
A foundational model was built using users’ historical interactions and the categories of videos. To accurately guide model training and relevance scoring, user engagement events were weighted as follows:
- Multi-views: 1 (Low signal)
- Likes, Comments, Bookmarks: 5 (Medium signal)
- Shares: 8 (High signal of engagement)
- Been here: 10 (High signal of user intent and satisfaction)
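The weighting scheme above maps directly to a training-label function. A minimal sketch (the function name and the max-aggregation choice are illustrative assumptions; only the weights come from the case study):

```python
# Event weights mirroring the case study's signal tiers.
EVENT_WEIGHTS = {
    "multi_view": 1,   # low signal
    "like": 5,         # medium signal
    "comment": 5,      # medium signal
    "bookmark": 5,     # medium signal
    "share": 8,        # high engagement signal
    "been_here": 10,   # high intent/satisfaction signal
}

def relevance_label(events):
    """Collapse a user's events on one video into a single
    relevance label: the strongest observed signal."""
    return max((EVENT_WEIGHTS[e] for e in events), default=0)
```

For example, a user who multi-viewed and liked a video yields a label of 5, while a "been here" event dominates everything else at 10.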
B. Leveraging Shaped's ELSA Autoencoder
Shaped's advanced ELSA autoencoder was used to generate text encodings and construct the embedding space, which:
- Learns Meaningful Patterns: It autonomously identifies and learns subtle yet significant patterns within the user interaction and video data.
- Connects Across the Dataset: Unlike traditional methods that analyze items in isolation, ELSA understands connections across the entire dataset, creating a holistic view of content relationships.
- Creates Flexible Representations: The embeddings generated are highly flexible and can be seamlessly reused across various applications, future-proofing the system.
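The core idea behind a shallow linear autoencoder like ELSA can be illustrated with a toy scoring function. This is a simplified sketch of the general technique, not Shaped's implementation: given a learned item-embedding matrix, a user's interaction vector is projected into the low-dimensional latent space and back out, producing scores for every item at once:

```python
import numpy as np

def elsa_style_scores(interactions, item_embeddings):
    """Score all items for a user via a shallow linear
    autoencoder's item embeddings (x @ A @ A.T), masking
    items the user has already seen.

    interactions: (n_items,) binary interaction vector
    item_embeddings: (n_items, d) learned item matrix A
    """
    # Project into the d-dim latent space and reconstruct.
    latent = interactions @ item_embeddings
    scores = latent @ item_embeddings.T
    # Don't re-recommend already-watched items.
    scores[interactions > 0] = -np.inf
    return scores
```

Because every item lives in the same embedding space, an interaction with one video raises the scores of all videos nearby in that space, which is what gives the model its "connections across the entire dataset" behavior.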
C. Real-Time Integration for Dynamic Engagement
Following the foundational build, the system was evolved to incorporate real-time capabilities. This was achieved through the integration of Kinesis streams. This integration allowed for the seamless ingestion of both real-time user interactions and new video data, ensuring that recommendations remained current and responsive to immediate user behavior.
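A real-time ingestion path like this typically serializes each interaction into a stream record. A hedged sketch of what one such record might look like (field names and partitioning choice are illustrative assumptions; in production the returned record would be pushed to a Kinesis stream, e.g. via boto3's `put_record`):

```python
import json
import time

def interaction_record(user_id, video_id, event_type):
    """Build a stream-ready payload for one user interaction."""
    payload = {
        "user_id": user_id,
        "video_id": video_id,
        "event": event_type,
        "ts": int(time.time() * 1000),  # epoch millis
    }
    # Stream records are raw bytes; partitioning by user keeps
    # one user's events ordered on a single shard.
    return {
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": user_id,
    }
```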
Phase 2: Online Experimentation to Drive D1–D7 Retention
With a solid personalization engine in place, the focus shifted to rigorous online experimentation. This phase focused on answering critical business questions and testing hypotheses about user behavior through iterative model optimization, ultimately driving Atmosfy's core goal of increasing D1-D7 retention.
- Duration: 3 months
- Iterations: 7 distinct experiments
- Variants Tested: 16 different variants with distinct hypotheses were deployed and analyzed.
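Running many variants concurrently requires stable bucketing, so the same user always sees the same variant for a given experiment. A common approach (a generic sketch, not necessarily how Atmosfy assigned traffic) is deterministic hashing:

```python
import hashlib

def assign_variant(user_id, experiment, n_variants):
    """Deterministically bucket a user into one of n variants by
    hashing (experiment, user). The same user always lands in the
    same bucket, and buckets are independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants
```

Salting the hash with the experiment name means a user's bucket in one experiment carries no information about their bucket in the next, which keeps the 7 iterations statistically independent.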
3 months
Phase 1: Building the foundation
Built a recommender with weighted engagement events and flexible embeddings using Shaped’s ELSA autoencoder.
Phase 2: Real-time integration
Integrated Kinesis streams for real-time user interactions and new video data, keeping recommendations fresh.
Phase 3: Online experimentation
Ran 7 experiments with 16 model variants over 3 months, iterating rapidly to optimize retention.
Results & outcomes
- D1 Retention: +15%
- D2 Retention: +32%
- D3 Retention: +29%
- D4 Retention: +54%
- D5 Retention: +8%
- D6 Retention: +79%
- D7 Retention: +10%
- Faster experimentation: 7 experiments in 12 weeks
