No-Frills AI Product Search for a Well-Curated Catalog
> How can I implement conversational product search?
Searching 7 SaaS vendors...
Have you considered creating a new ETL pipeline that embeds and stores your data in a vector database? When a user asks a question, have an LLM rewrite the query, embed it, vector-search it, and pipe the results back to the LLM. If the results are still meh, just throw in a specialized rerank model. Just one more tool call, bro. I promise, just one more model and it'll fix everything. I mean, what could go wrong? The results are still lackluster? Have you considered going hybrid and using RRF for final-pass ranking?
3% context left
> /clear
> My product attributes are normalized — variant colors, dimensions, and other specs are mapped to canonical values — enabling faceted search and clean ETL into my search engine, which is already configured with relevance and merchandising rules.
> How can I implement conversational product search with optimal cost, latency and UX?
Thinking with ultra-extended-effort...
Get filter metadata across all product categories and cache it. A gaming laptop example:
```
categories.lvl0:Laptops|520
categories.lvl1:Laptops > Gaming Laptops|145
brands:Dell|95,HP|88,Lenovo|82,Apple|75,ASUS|68,Acer|52,MSI|35,Razer|25
prices.sale_price:[range]
attributes.screen_size:[range]
attributes.processor:Intel Core i7|120,Apple M3|75,AMD Ryzen 7|68,Intel Core i5|65,Apple M3 Pro|42,AMD Ryzen 9|38
attributes.ram:16GB|195,32GB|148,8GB|92,64GB|55
attributes.storage:512GB SSD|210,1TB SSD|175,256GB SSD|80,2TB SSD|55
attributes.color:Silver|165,Space Gray|98,Black|88,Midnight|45,Starlight|32
```
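If your engine is Algolia, a minimal sketch of fetching and caching that metadata could look like the following. The index name, facet names, and the v4 JS client are assumptions based on the examples in this post; credentials are placeholders.

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products_best_selling');

// Facet metadata per category, refreshed out of band (e.g. hourly), not per request.
const facetCache = new Map<string, Record<string, Record<string, number>>>();

async function loadFacetMetadata(category: string) {
  const { facets = {} } = await index.search('', {
    hitsPerPage: 0,                          // only facet counts, no hits
    facets: ['*'],                           // every attribute declared in attributesForFaceting
    maxValuesPerFacet: 100,
    filters: `categories.lvl1:"${category}"`,
  });
  facetCache.set(category, facets);
  return facets;
}
```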
When the user’s query arrives and you’ve detected the category (via regex or an LLM), pull that category’s filter metadata.
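A rough sketch of the regex route; the category names and patterns here are illustrative, not a full taxonomy:

```ts
// Cheap first pass: regex over the query; fall back to an LLM classifier
// (or search across all categories) when nothing matches.
const CATEGORY_PATTERNS: Array<[category: string, pattern: RegExp]> = [
  ['Laptops > Gaming Laptops', /\bgaming\s+laptops?\b/i],
  ['Laptops', /\b(laptops?|notebooks?)\b/i],
];

function detectCategory(query: string): string | undefined {
  return CATEGORY_PATTERNS.find(([, pattern]) => pattern.test(query))?.[0];
}
```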
If the user asks about "best gaming laptops under $1500 with a dark finish", the LLM has enough context to generate this Algolia payload:
```json
{
  "index": "products_best_selling",
  "query": "",
  "facetFilters": [
    ["categories.lvl1:Laptops > Gaming Laptops"],
    ["attributes.color:Black", "attributes.color:Midnight"]
  ],
  "numericFilters": [
    "prices.sale_price <= 1500"
  ],
  "facets": [
    "brands",
    "attributes.processor",
    "attributes.ram",
    "attributes.storage",
    "attributes.color",
    "prices.sale_price",
    "attributes.screen_size"
  ]
}
```
How does an LLM do that?
- It can map semantic phrases to filter values when lexical matching fails: ‘dark finish’ becomes Black OR Midnight.
- It’s very good at constructing DSL syntax, in this case an Algolia payload.
- It has enough context (filter metadata) to determine which parts of the query map to a filter/value pair and which parts are numeric operators.
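A minimal sketch of that single call, assuming the OpenAI Node SDK; the model name, prompt wording, and the facetMetadata string are stand-ins, and any fast model that can emit JSON would do:

```ts
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function buildSearchPayload(userQuery: string, facetMetadata: string) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',                      // assumption: any JSON-capable model works
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'Translate the shopper request into an Algolia search payload with the keys ' +
          '"index", "query", "facetFilters", "numericFilters", and "facets". ' +
          'Use only facet names and values from this metadata:\n' + facetMetadata,
      },
      { role: 'user', content: userQuery },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? '{}');
}
```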
97% context left
> That's cool, I won't have to set up a new ETL pipeline or a vector database, nor will I need an embedding or rerank model at runtime. A single fast LLM call will get the job done, and the context window won't have to include product data. But what UX does this unlock?
The frontend receives a search payload that users can refine. When a user changes filters, the client issues a new query without another LLM roundtrip. Relevance and merchandising rules carry over from your search engine.
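A rough sketch of that client-side loop, again assuming the Algolia v4 JS client; the payload shape mirrors the example above:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');

interface SearchPayload {
  index: string;
  query: string;
  facetFilters?: string[][];
  numericFilters?: string[];
  facets?: string[];
}

// The LLM's payload becomes ordinary search state the UI can mutate and re-run.
async function runSearch(payload: SearchPayload) {
  const { index, query, ...params } = payload;
  return client.initIndex(index).search(query, params);
}

// e.g. the user unchecks "Midnight": drop it from facetFilters and call runSearch again.
```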
For follow-up messages, send the client's filter state and have the LLM adjust the payload:
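A minimal sketch of such a follow-up turn, assuming the OpenAI Node SDK; the systemPrompt, currentPayload, and query strings are placeholders carried over from the first turn:

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function refinePayload(
  systemPrompt: string,      // same facet-metadata prompt as the first turn
  previousQuery: string,     // e.g. "best gaming laptops under $1500 with a dark finish"
  currentPayload: object,    // the filter state exactly as the client holds it now
  followUpQuery: string,     // e.g. "make it 32GB of RAM and under $2000"
) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    response_format: { type: 'json_object' },
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: previousQuery },
      { role: 'assistant', content: JSON.stringify(currentPayload) },
      { role: 'user', content: followUpQuery },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? '{}');
}
```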
> /compact
Look carefully at the shape of your data; it is a good indicator of how you can architect conversational search. If your product specs aren't standardized, vector search on unstructured text makes sense. If you already have faceted search, you can probably get by with generating search syntax. A nice byproduct: users get interactive filters they can refine without waiting for another LLM roundtrip. Use the right tool for the job.
Press Ctrl-C again to exit