2.1 Project Overview

Scenario:

TechAssist Inc., an internal IT Help Desk for a large enterprise, manages hundreds of Standard Operating Procedure (SOP) documents, FAQs, and policy PDFs.

Support agents waste time searching through Confluence pages, SharePoint folders, and outdated PDFs when answering employee questions. Even simple tickets often take minutes, sometimes hours, to resolve because the right information is spread across multiple documents.

When the team experimented with public LLMs, they quickly ran into problems:

  • Public AI tools could not access internal SOPs, so answers were often incomplete.
  • Sensitive company information could not be uploaded to external chatbots.
  • Responses had no grounding and often invented steps or incorrect procedures.
  • There was no audit trail, a must-have in enterprise environments.

To solve this, the organization chose Amazon Bedrock Knowledge Bases, which keep all data inside AWS and generate answers strictly from the organization's own documents.


Our Solution

We’ll build a private, document-aware FAQ chatbot powered entirely by:

  • Amazon Bedrock Knowledge Bases for retrieval and grounded generation
  • Lambda Function URL for a lightweight serverless backend
  • A simple, modern S3-hosted web app for user interaction

Your documents stay in Amazon S3, Bedrock automatically indexes them, and the web app lets users ask natural-language questions and get fast, grounded answers.

This becomes a secure, enterprise-ready AI assistant that:

  • Answers ONLY from your documents
  • Declines to answer, instead of hallucinating, when the documents don't cover a question
  • Stays fully inside your AWS environment
  • Requires no model training and minimal code

About the Project

In this project, you’ll build a complete end-to-end FAQ Chatbot backed by your own documents.

You will:

  • Create a Bedrock Knowledge Base connected to an S3 bucket
  • Run content ingestion to index your SOP or FAQ file (see the ingestion sketch after this list)
  • Verify Q&A directly in the Bedrock Playground
  • Build a serverless backend using Lambda Function URL
  • Create a clean, modern web interface that connects to your backend
  • Display grounded answers and retrieved sources to the user
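To make the ingestion step concrete, here is a minimal boto3 sketch of triggering it programmatically. The Knowledge Base ID, data source ID, and region are placeholders you'd copy from your own setup after creating the Knowledge Base; running ingestion from the console works just as well.

    import time
    import boto3

    # Control-plane client for Knowledge Base management (assumed region shown)
    bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

    # Placeholder IDs -- copy the real values from the Bedrock console
    KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
    DATA_SOURCE_ID = "YOUR_DATA_SOURCE_ID"

    # Start an ingestion job: Bedrock reads the S3 documents, chunks them,
    # embeds each chunk, and writes the vectors to the index
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
    )["ingestionJob"]

    # Poll until the job finishes (a small SOP/FAQ file takes a minute or two)
    while job["status"] not in ("COMPLETE", "FAILED", "STOPPED"):
        time.sleep(10)
        job = bedrock_agent.get_ingestion_job(
            knowledgeBaseId=KNOWLEDGE_BASE_ID,
            dataSourceId=DATA_SOURCE_ID,
            ingestionJobId=job["ingestionJobId"],
        )["ingestionJob"]

    print("Ingestion finished with status:", job["status"])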

This project demonstrates how modern AI assistants are built using RAG (Retrieval-Augmented Generation), the same approach used in enterprise-grade AI applications.


Steps To Be Performed 👩‍💻

We’ll go through the following steps in this project:

  1. Upload your SOP/FAQ document into an S3 bucket to serve as the data source.
  2. Create a Bedrock Knowledge Base and connect it to the S3 bucket.
  3. Run ingestion to embed and index your document.
  4. Test the Knowledge Base in the Bedrock Playground with grounded Q&A.
  5. Create a Lambda backend (Function URL) that calls retrieve_and_generate (see the handler sketch after this list).
  6. Build and host the Web App on S3 to interact with your Knowledge Base.
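As a reference for step 5, here is a minimal sketch of the Lambda handler in Python. The Knowledge Base ID is a placeholder, and the model ARN is only an example; any text-generation model your account can invoke works.

    import json
    import boto3

    # Runtime client used to query the Knowledge Base
    bedrock_runtime = boto3.client("bedrock-agent-runtime")

    # Placeholders -- replace with your Knowledge Base ID and a model ARN
    # available in your account and region
    KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
    MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

    def lambda_handler(event, context):
        # Function URL requests deliver the POST payload in event["body"]
        question = json.loads(event.get("body") or "{}").get("question", "")

        # Retrieve relevant chunks and generate a grounded answer in one call
        response = bedrock_runtime.retrieve_and_generate(
            input={"text": question},
            retrieveAndGenerateConfiguration={
                "type": "KNOWLEDGE_BASE",
                "knowledgeBaseConfiguration": {
                    "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                    "modelArn": MODEL_ARN,
                },
            },
        )

        # Collect the S3 locations of the passages cited in the answer
        sources = [
            ref["location"]["s3Location"]["uri"]
            for citation in response.get("citations", [])
            for ref in citation.get("retrievedReferences", [])
            if "s3Location" in ref.get("location", {})
        ]

        return {
            "statusCode": 200,
            # CORS header so the S3-hosted page can read the response
            "headers": {"Content-Type": "application/json",
                        "Access-Control-Allow-Origin": "*"},
            "body": json.dumps({"answer": response["output"]["text"],
                                "sources": sources}),
        }

The Access-Control-Allow-Origin header matters here: without it, the browser on your S3-hosted page will block the cross-origin response.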

Services Used 🛠

  • Amazon S3 → Stores your documents and hosts the web app
  • Amazon Bedrock Knowledge Bases → Retrieval + grounded answer generation
  • AWS Lambda → Serverless backend responding to browser requests
  • Lambda Function URL → Public HTTPS endpoint (no API Gateway needed)
  • AWS IAM → Access permissions for S3, Bedrock, and Lambda
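Once the backend is deployed, the Function URL behaves like any HTTPS endpoint. A quick smoke test from Python, assuming a hypothetical URL and the response shape from the handler sketch above:

    import json
    import urllib.request

    # Hypothetical Function URL -- use the one Lambda shows after deployment
    url = "https://abc123.lambda-url.us-east-1.on.aws/"

    payload = json.dumps({"question": "How do I reset my VPN password?"}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})

    # POST the question and print the grounded answer with its sources
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())

    print(body["answer"])
    print(body["sources"])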

Estimated Time & Cost ⚙️

  • Estimated time: 2–3 hours
  • Cost: ~$0–$1

➡️ Architectural Diagram



➡️ Final Result

Once completed, you’ll have:

  • A private FAQ chatbot that answers only from your uploaded FAQ/SOP document(s)
  • A working RAG pipeline using Bedrock Knowledge Bases (no custom training)
  • Clear tests showing grounded answers and polite refusal when content isn’t in the docs
