3 Critical Ways Amazon Bedrock Drives fabric’s Generative AI Innovation

Umer Sadiq, fabric's CTO, on how Amazon Bedrock drives fabric's Generative AI innovation

At fabric, AI has been a pivotal technology as we uplevel retail operations for our customers. We are transforming our product to use natural language and AI-driven recommendations to accomplish complex commerce operations. As we extend our platform, we challenge ourselves to validate business cases rapidly, deploy nightly releases, and respond to feedback quickly and incrementally. It is essential that we leverage the right selection of tools and infrastructure to accelerate our innovation.

One of our earliest strategic decisions was to use Amazon Bedrock to power our generative AI solutions, a decision that has significantly benefited our customers. Here are three critical ways Amazon Bedrock has created a strong foundation and continues to help us innovate the fabric platform.

1. Building trust

Maintaining trust is fundamental to everything we do, especially as we explore AI-driven solutions. Our platform must uphold the highest standards of privacy, security, and regulatory compliance to protect our customers. 

Since our AI solutions directly influence retailer decisions, which in turn affect their customers, our commitment to transparent and explainable AI is non-negotiable. We design our AI products to ensure that each retailer can leverage their own data without the risk of exposure to competitors. This guarantees that personalized models and proprietary data remain completely isolated. 

Every decision related to our infrastructure, architecture, and platform is scrutinized to maintain this level of trust for our customers.

How Amazon Bedrock helps fabric foster trust in our platform

  • Privacy, security, compliance, and governance enablement
    Amazon Bedrock operates within AWS’s comprehensive compliance framework, supporting the stringent regulatory standards that matter most to our customers, such as GDPR and ISO certifications. Strong encryption practices within Amazon Bedrock keep our customer data protected both at rest and in transit. Data remains under our control, securely stored in protected systems, and is only accessed when necessary to create value for our customers.

  • Isolation where and when we want it
    Amazon Bedrock provides powerful isolation capabilities that enable us to maintain data and model separation for each customer. We can access foundation models, tune them with publicly available data, and then personalize them for each retailer using their unique commerce data. This ensures that no data or insights from one customer are ever shared with or exposed to another, maintaining a strong boundary between tenants and preventing cross-pollination of models.

  • Transparency for accountability
    While the inner workings of large language models can sometimes be opaque, Amazon Bedrock allows us to offer our customers transparent access to the infrastructure behind our AI solutions. We can trace the entire data flow—from the models to the data used—ensuring that we provide clear insights into how decisions are made. This traceability is crucial when it comes to explaining AI-driven decisions, giving both fabric and our customers a deeper level of accountability and trust in the models’ outputs.

2. More effective experimentation

Speed of innovation is crucial to shortening the time between identifying a use case and prototyping a viable solution. With many possible use cases and various approaches to each, the complexity of model selection, training, and architecture can slow the process considerably.

However, our approach follows a common pattern:

  1. Identify the business case and key outcomes.
  2. Define the inputs and outline the expected outputs.
  3. Select the model(s) most suited to solving the problem.
  4. Test quickly for viability, making a few iterations to see how close we can get to the desired outcome.
  5. Evaluate the results to decide whether to switch models, fine-tune the current model, or rethink the experiment entirely.

Switching between models or experimenting with different approaches means repeating this cycle, which can be resource-intensive. Once a model shows promise, tuning it becomes crucial to improving response accuracy. Quick access to model hosting and training, without heavy scaffolding work, streamlines every pass through the loop.
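
To make the cycle concrete, a viability test at steps 3 and 4 can be as small as a loop over candidate models. The sketch below uses the boto3 Converse API against Amazon Bedrock; the model IDs, prompt, and region are illustrative placeholders rather than our production configuration:

    import boto3

    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Candidate models for step 3; any Bedrock model ID enabled in the
    # account could be substituted here.
    CANDIDATE_MODELS = [
        "anthropic.claude-3-haiku-20240307-v1:0",
        "amazon.titan-text-express-v1",
    ]

    prompt = "Suggest a reorder quantity for SKU-123 given 40 units sold last week."

    for model_id in CANDIDATE_MODELS:
        # The Converse API gives a uniform request/response shape across
        # model families, which is what makes swapping models in step 5 cheap.
        response = bedrock_runtime.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
            inferenceConfig={"maxTokens": 256, "temperature": 0.2},
        )
        print(model_id, "->", response["output"]["message"]["content"][0]["text"])

Because the request shape stays constant across models, evaluating a new candidate in step 5 is a one-line change to the list.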

How Amazon Bedrock helps fabric increase innovative experimentation

  • Hosted models for rapid flexibility
    Amazon Bedrock simplifies the experimentation process by providing immediate access to pre-trained LLMs or enabling us to upload our own models. Hosted models can be invoked directly through the Amazon Bedrock APIs, and we pay only for what we use, which makes experimentation both flexible and cost-effective.
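
    A single low-level call is enough to exercise a hosted model. This sketch uses boto3’s invoke_model with the Anthropic messages body format; each model family defines its own request schema, and the prompt here is purely illustrative:

        import json

        import boto3

        bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

        # Request body in the Anthropic messages schema; other model
        # families (Titan, Llama, ...) expect different body shapes.
        body = json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [
                {"role": "user", "content": "Summarize last week's returns by category."}
            ],
        })

        response = bedrock_runtime.invoke_model(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",
            body=body,
        )
        print(json.loads(response["body"].read())["content"][0]["text"])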

  • Faster testing
    Amazon Bedrock allows us to rapidly test models using sample data, reducing the friction between idea and execution. Testing is as simple as passing a request to the model, allowing us to evaluate viability quickly without needing a complex setup.

    When our experiments require more advanced logic or data, Amazon Bedrock’s built-in features like Knowledge Bases and Agents enable seamless integration. We can move data into Knowledge Bases to test how well the LLM retrieves and applies the data. For more complex interactions, Agents add context-aware capabilities, making the model’s responses feel more natural and informed during testing.
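
    For example, a first retrieval test against a Knowledge Base can run through a single retrieve_and_generate call. In this sketch, the knowledge base ID and model ARN are hypothetical placeholders:

        import boto3

        agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

        response = agent_runtime.retrieve_and_generate(
            input={"text": "Which fulfillment locations carry SKU-123?"},
            retrieveAndGenerateConfiguration={
                "type": "KNOWLEDGE_BASE",
                "knowledgeBaseConfiguration": {
                    # Hypothetical knowledge base ID and model ARN.
                    "knowledgeBaseId": "KB123EXAMPLE",
                    "modelArn": (
                        "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-haiku-20240307-v1:0"
                    ),
                },
            },
        )

        print(response["output"]["text"])  # the generated, grounded answer
        for citation in response.get("citations", []):
            for ref in citation["retrievedReferences"]:
                print(ref["location"])     # which source chunks were used

    The citations returned alongside the answer make it easy to judge whether the LLM is actually retrieving and applying the right data.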

  • More efficient tuning
    Tuning models within Amazon Bedrock is highly efficient, since lightweight fine-tuning on small datasets is supported natively. This enables us to fine-tune models using minimal data and quickly assess how well they adapt to specific use cases. While more advanced tuning is available through Amazon Bedrock and Amazon SageMaker, the ability to perform lightweight tuning with just a small subset of data has greatly accelerated our progress.
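
    Kicking off such a job is a single API call. The sketch below uses boto3’s create_model_customization_job; the job name, model name, role ARN, S3 URIs, and hyperparameter values are placeholders, and the training file is assumed to be JSONL prompt/completion pairs:

        import boto3

        bedrock = boto3.client("bedrock", region_name="us-east-1")

        bedrock.create_model_customization_job(
            jobName="reco-tuning-exp-01",        # hypothetical job name
            customModelName="reco-tuned-v1",     # hypothetical model name
            roleArn="arn:aws:iam::123456789012:role/BedrockTuningRole",
            baseModelIdentifier="amazon.titan-text-express-v1",
            customizationType="FINE_TUNING",
            # A small JSONL file of prompt/completion pairs is enough
            # for a first lightweight tuning pass.
            trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
            outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
            hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
        )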

3. Working software

Once we successfully develop an AI-powered prototype, our next objective is to get working software into production and into the hands of our customers as quickly as possible. We’ve learned that delivering an LLM with a rich array of commerce functionalities requires an architecture that supports multiple knowledge bases and ensures the model is context-aware.

Maintaining and incrementally improving our AI solutions requires a securely hosted LLM, alongside data systems and agents, that can be updated seamlessly with minimal operational overhead. The LLM must be capable of retrieving and responding with accurate, real-time data from the appropriate sources within a live environment. Furthermore, any updates or improvements to this architecture must be easily testable before they are deployed into production.

How Amazon Bedrock helps fabric increase the speed of software deployment

  • Robust and flexible infrastructure
    Amazon Bedrock provides us with the necessary infrastructure to test, fine-tune, and deploy LLMs efficiently. We can rigorously test our LLMs against sample data at every stage before fully releasing them. This flexible approach reduces the need for extensive environment setup, allowing us to focus on refining functionality rather than managing infrastructure.

  • RAG for output optimization
    One of the key features that enables rapid development is Amazon Bedrock’s support for Retrieval-Augmented Generation (RAG). Our LLM needs access to multiple data sources to respond intelligently in various contexts. Through Knowledge Bases, Amazon Bedrock gives the model built-in access to our databases and APIs, ensuring it can pull relevant, industry-specific information from internal and external sources. RAG strengthens the model’s handling of request context, incorporating our own data sources alongside Amazon Bedrock’s Knowledge Bases and Agents.
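
    A minimal version of this retrieve-then-generate flow, assuming a hypothetical Knowledge Base ID and using boto3, looks like the following:

        import boto3

        agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
        bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

        question = "What is our return policy for personalized items?"

        # Step 1: retrieve the most relevant chunks from the Knowledge Base.
        retrieved = agent_runtime.retrieve(
            knowledgeBaseId="KB123EXAMPLE",  # hypothetical ID
            retrievalQuery={"text": question},
            retrievalConfiguration={
                "vectorSearchConfiguration": {"numberOfResults": 3}
            },
        )
        context = "\n\n".join(
            result["content"]["text"] for result in retrieved["retrievalResults"]
        )

        # Step 2: ground the model's answer in the retrieved context.
        response = bedrock_runtime.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",
            messages=[{
                "role": "user",
                "content": [{"text": f"Context:\n{context}\n\nQuestion: {question}"}],
            }],
        )
        print(response["output"]["message"]["content"][0]["text"])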

  • Streamlined delivery
    By integrating Amazon Bedrock’s LLM hosting, RAG capabilities, and knowledge management features, we can iterate rapidly and deploy changes quickly without building large, costly pipelines. This streamlined process has reduced the time it takes to get working software live, without major operational complexity.

See for yourself how fabric is delivering AI for retailers

Learn more about our partnership with Amazon Web Services (AWS) and how together we’re powering smarter, faster, and more flexible retail solutions. 

With fabric’s AI-driven solutions, retailers can create personalized experiences, streamline operations, and scale effortlessly. Discover how fabric and AWS are delivering tangible AI value and transforming the future of retail today.


Umer Sadiq

CTO @ fabric. Previously software development @ Amazon and Williams Sonoma.
