Secret Savas Gift Exchange Gets an AI Update


Chris Russo

Ava in a December holiday setting generating gift ideas to support Secret Savas

As we continue to lean more heavily into AI at Savas Labs, we often have conversations with others who are excited about AI, but unsure where to start, what’s feasible, and where to invest. I hear this from prospective clients, long-time partners, and even from business leaders here in San Francisco, the global epicenter of AI. The opportunity is apparent; the path forward often isn’t.

To help cut through the abundant options and bring focus to AI investment, we’ve formalized an approach we’ve been using in our work for some time: our AI Readiness & Opportunity Assessment. It's a structured way to identify where AI can create real value today and how to prepare for what’s coming next.

Recently, we began an internal initiative to apply AI across our organizational knowledge stores: Slack, Google Workspace, our project management systems, code repositories, and more. While this system is internal by design, we wanted to find a way to share some of the insights we were harnessing. We soon realized we could do that and enhance an annual holiday tradition at the same time. 

Years ago, we built a custom Slack-integrated matching tool for Secret Savas™, our anonymous holiday gift exchange. Remember? Of course you do. It was about due for some love, and we figured an AI-powered update would be the perfect, low-risk way to demo some of the powerful search work while highlighting a number of the dimensions we explore in our AI Readiness & Opportunity Assessments.

Over a weekend, we built a data-processing pipeline that ingested historical Slack messages, analyzed sentiment, surfaced recurring interests, and used a retrieval-augmented generation (RAG) workflow to deliver personalized gift ideas to the team. It was a playful experiment, but also a compact demonstration of some applied AI.

Below, we walk through how this project touched several of the key dimensions we help clients navigate.

Generative AI

For this project, we used generative AI in two main ways. First, we leaned on modern code-generation tools to help us design and build the application—including two versions of the interface and the full data-processing pipeline—in roughly two days. Second, we used simple prompts to generate and refine the instructional copy that guides people through the tool.

Even with relatively lightweight prompting, the model produced clear, usable text that required minimal editing. Combined with code generation, it significantly accelerated our ability to build a production-adjacent system while still maintaining control over the architecture and behavior.

Data & Engineering

The project required accessing, transforming, and interpreting years of Slack history, highlighting both the value and the messiness of real-world organizational data. We ran multiple rounds of tuning to make the semantic structure richer and more meaningful, from adjusting how we chunked messages to refining how we classified and filtered them.
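To make the chunking step concrete, here's a minimal sketch of the kind of grouping we tuned. The function and field names are illustrative, not our production code: Slack messages are short, so grouping consecutive messages from the same author into a single chunk tends to produce richer semantic units than embedding each message on its own.

```python
def chunk_messages(messages, max_chars=500):
    """Group consecutive messages from the same author into chunks
    no longer than max_chars, preserving conversational context."""
    chunks = []
    current, current_author, length = [], None, 0
    for msg in messages:
        text = msg["text"].strip()
        if not text:
            continue
        # Start a new chunk on author change or when the chunk is full.
        if msg["user"] != current_author or length + len(text) > max_chars:
            if current:
                chunks.append({"user": current_author, "text": " ".join(current)})
            current, current_author, length = [], msg["user"], 0
        current.append(text)
        length += len(text)
    if current:
        chunks.append({"user": current_author, "text": " ".join(current)})
    return chunks

messages = [
    {"user": "U1", "text": "Anyone into trail running?"},
    {"user": "U1", "text": "Thinking about the marathon this spring."},
    {"user": "U2", "text": "I'd rather be fly fishing."},
]
print(chunk_messages(messages))  # two chunks: U1's messages merged, U2's alone
```

Tuning parameters like `max_chars`, and deciding whether to split on author, thread, or time gaps, was exactly the kind of iteration that made the semantic structure more meaningful.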

We started with a local processing workflow for testing, then deployed a system to the cloud to run background jobs, handle errors, and coordinate between the pipeline and the LLM. This mirrors what we see in client environments: data readiness, normalization, and scalable processing are often the first major hurdles to meaningful AI adoption.

Here I discuss some of the underlying search concepts, highlighting vector databases, embeddings, and how they capture sentiment and meaning from text:


Search & Content Discoverability

RAG was at the core of this experiment. We focused on Slack messages, embedded them into a vector database, and retrieved the most semantically relevant content based on statements representing personality traits, hobbies, and interests.
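The retrieval step can be illustrated with a toy example. Our production system uses a real embedding model and a vector database; here, tiny hand-rolled bag-of-words vectors stand in so the ranking logic itself is visible.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words Counter. In production this
    would be a dense vector from an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Rank corpus chunks by semantic similarity to a trait statement."""
    q = embed(query)
    scored = sorted(corpus, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]

corpus = [
    "planning a weekend hike in the mountains",
    "debugging the deploy pipeline again",
    "just got new hiking boots, so excited",
]
top = retrieve("this person loves hiking outdoors", corpus)
print(top)  # the hiking-boots chunk ranks first
```

The real system follows the same shape: embed a statement like "enjoys outdoor hobbies," search the vector index for the closest Slack chunks, and hand that retrieved context to the LLM.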

That distillation process turned a large, noisy dataset into a ranked set of signals that could inform personalized gift suggestions. It underscored a key lesson we see in many projects: retrieval quality, not model size, is often the determining factor in successful AI search implementations that derive real insight.

Here's a look under the hood at how the search pulls in interests, collates the Slack messages, and handles the full interaction with the LLM:


Agents, Assistants & Automation

This project didn’t rely heavily on agentic orchestration; most of the workflow was intentionally hands-on. But building it made clear where agents could add value, particularly around retrieval refinement, content validation, and multi-step decision-making.

Those insights have already inspired more direct applications of agentic workflows that we're now implementing for other internal processes. Even a simple experiment can reveal where agent patterns would meaningfully enhance reliability or reduce manual effort. Stay tuned for updates there.

Personalization

The tool was fundamentally about deriving personal insights from Slack behavior: sentiment patterns, recurring topics, and subtle indicators of someone’s interests and personality. This was a low-risk way to demonstrate the power of personalization: instead of customizing a buying journey or site/app experience, we used those signals to generate thoughtful gift ideas.

The underlying mechanics, however, map directly to applications like tailored user journeys, intelligent product or content recommendations, and behavior-aware content surfacing. It’s a small example of how even modest amounts of well-structured historical data can unlock meaningful personalization.

Privacy, Security & Compliance

Even though this was an internal tool, we approached data handling carefully. We excluded certain Slack channels, including all private channels, filtered sensitive or personal content, including any photos, and ensured that only the minimal necessary context was sent to the LLM during both processing and generation.
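A simplified sketch of that filtering pass looks like the following. The channel names and the sensitive-data pattern are made up for illustration; our real filters are more extensive.

```python
import re

EXCLUDED_CHANNELS = {"hr-private", "finance"}      # illustrative examples
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. SSN-like patterns

def keep_message(msg):
    """Drop messages from excluded or private channels, anything with
    file attachments (including photos), and sensitive-pattern matches."""
    if msg.get("channel") in EXCLUDED_CHANNELS:
        return False
    if msg.get("is_private"):
        return False
    if msg.get("files"):
        return False
    if SENSITIVE.search(msg.get("text", "")):
        return False
    return True

msgs = [
    {"channel": "general", "text": "loves sourdough baking"},
    {"channel": "hr-private", "text": "salary details"},
    {"channel": "general", "text": "ssn 123-45-6789"},
]
print([m["text"] for m in msgs if keep_message(m)])  # only the first survives
```

Running this kind of allow/deny pass before anything reaches the embedding step meant the LLM only ever saw the minimal context needed for gift suggestions.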

This mirrors the governance questions organizations face: what data is appropriate to use, what must be redacted, and how can teams design systems that respect privacy while still delivering value? The experiment reinforced the importance of policy-aware design, even in low-risk AI applications. Oftentimes it's not just about systems and procedures, but ensuring the right conversations are happening to mitigate data risk and ensure compliance.

Product Strategy

This project echoed the strategic questions we help clients navigate: What problem are we actually solving? What value will AI add? How do we define “good enough” for an early implementation? Fundamentally, this was a strong and engaging enhancement to a tool that previously wasn’t interactive, and one the team really enjoyed using.

Even for something as lighthearted as gift recommendations, we evaluated trade-offs like data quality vs. development time, retrieval accuracy vs. complexity, and user delight vs. engineering overhead. AI isn’t just a technical exercise; it requires thoughtful scoping, prioritization, and alignment with real business needs.

Training & Empowerment

After the build, we held a skillshare session (see the videos above) to walk the design and project management teams through how the system worked, from semantic embeddings to the RAG stack to how generative components were orchestrated. When we regularly share in this way, we give our experience managers and designers new ways to see what's possible as the world around us lurches forward under the pace of AI change. This kind of hands-on literacy is essential for sustainable AI adoption within an organization.

And it wouldn't be an internal meeting without closing out with a dad joke, brought to you by Claude, or was it Gemini, or was it ChatGPT...:


Want to see more?

We're happy to share what's under the hood in more detail; just reach out.

If someone you know wants to leverage AI, but doesn't know where to start, point them to our AI Readiness & Opportunity Assessment. We're always happy to chat.