Building a RAG System without Vector Databases: PostgreSQL and Gemini Transformers

Retrieval-Augmented Generation (RAG) has revolutionized how we build AI applications that can reason over custom documents and knowledge bases. In this post, I'll walk you through a complete RAG architecture that combines Google's Gemini model with PostgreSQL's vector capabilities to create a powerful document Q&A system.

Why PostgreSQL for Vector Storage?

Before diving into the implementation, let's understand why PostgreSQL makes an excellent vector database:

  • Operational Simplicity: If you're already running PostgreSQL in production, adding vector capabilities means one less service to manage, monitor, and scale.
  • Rich Query Capabilities: Combine vector similarity search with traditional SQL operations, enabling complex queries that mix semantic search with filters, joins, and aggregations.
  • Cost Efficiency: Leverage existing PostgreSQL infrastructure instead of paying for separate vector database services.
  • Hybrid Search: Seamlessly combine full-text search with vector similarity for more nuanced retrieval strategies (see the example after this list).
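
For instance, a single query can filter by document name and full-text match, then rank the survivors by vector distance. This is a minimal sketch, assuming the document_chunks table defined later in this post; the search text and query embedding are passed in as parameters:

-- Hybrid search: SQL filter + full-text match, ranked by cosine distance
SELECT content, document_name
FROM document_chunks
WHERE document_name LIKE 'handbook%'
  AND to_tsvector('english', content) @@ plainto_tsquery('english', %s)
ORDER BY embedding <=> %s::vector
LIMIT 5;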

Architecture Overview

Our RAG system follows a clean, six-phase workflow:

Phase 1: Data Preparation

The journey begins with raw documents that need to be processed:

  • Document Ingestion: Accept various document formats
  • Markdown Conversion: Standardize the format for consistent processing (a minimal ingestion sketch follows this list)
  • Intelligent Chunking: Split documents into meaningful sections while preserving context
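
As a concrete illustration of the ingestion and conversion steps, here is a minimal sketch using the MarkItDown library; this is one option among many, and any converter that emits plain markdown text works the same way:

# Convert an arbitrary document to markdown text for downstream chunking
from markitdown import MarkItDown

def to_markdown(path):
    converter = MarkItDown()
    result = converter.convert(path)  # handles PDF, DOCX, HTML, and more
    return result.text_content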

Phase 2: Embedding Generation

This is where the magic happens:

  • Gemini Embedding Model: Convert text chunks into high-dimensional vectors
  • Semantic Representation: Each vector captures the meaning and context of the text
  • Consistency: Using the same model ensures embedding compatibility

Phase 3: Vector Storage

Efficient storage is crucial for performance:

  • PostgreSQL + pgvector: Leverage the reliability of PostgreSQL with vector capabilities
  • Scalable Storage: Handle millions of document chunks efficiently
  • ACID Compliance: Ensure data integrity and consistency

Phase 4: Query Processing

When users ask questions:

  • Query Embedding: Convert user questions using the same Gemini model
  • Vector Representation: Maintain consistency between storage and query vectors
  • Preparation: Ready the query for similarity search

Phase 5: Similarity Search

Find the most relevant information:

  • Vector Similarity: Use mathematical distance to find semantically similar content
  • Top-K Retrieval: Get the most relevant chunks (typically 3-5)
  • Performance: Leverage pgvector's optimized indexing for fast searches

Phase 6: Response Generation

Bring it all together:

  • Context Integration: Combine retrieved chunks with the user query
  • Gemini Generation: Use the language model to create coherent, accurate responses
  • Source Attribution: Maintain traceability to original documents

Why This Architecture Works

Unified Model Ecosystem

Using Gemini for both embedding and generation ensures:

  • Semantic Consistency: Embeddings and generation logic are aligned
  • Optimized Performance: Models are designed to work together
  • Simplified Deployment: Fewer API endpoints and model versions to manage

PostgreSQL as Vector Database

While specialized vector databases exist, PostgreSQL + pgvector offers:

  • Production Reliability: Battle-tested database with ACID guarantees
  • Ecosystem Integration: Easy integration with existing applications
  • Cost Effectiveness: No need for additional database infrastructure
  • Advanced Querying: Combine vector search with traditional SQL operations

Scalable Design

This architecture handles growth gracefully:

  • Horizontal Scaling: PostgreSQL scales out through read replicas and sharding extensions
  • Efficient Indexing: pgvector provides HNSW and IVFFlat indexes for fast searches (creation examples follow this list)
  • Batch Processing: Document ingestion can be parallelized
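
For reference, both index types are created with a single statement. The parameters shown here are pgvector's defaults and reasonable starting points, not tuned values:

-- HNSW: better recall/latency trade-off, slower to build (pgvector >= 0.5.0)
CREATE INDEX ON document_chunks
USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64);

-- IVFFlat: faster to build; size lists to your data
-- (a common heuristic is lists = rows / 1000 for tables up to ~1M rows)
CREATE INDEX ON document_chunks
USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);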

Implementation Considerations

Chunking Strategy

The quality of your chunks directly impacts RAG performance:

  • Size Matters: Balance between context preservation and specificity
  • Overlap: Consider overlapping chunks to prevent information loss
  • Structure Awareness: Respect document structure (sections, paragraphs); a minimal chunker is sketched below
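
Here is a minimal sketch of a chunker that respects paragraph boundaries and carries a character-level overlap between consecutive chunks; the size and overlap values are illustrative, not recommendations:

# Paragraph-aware chunking with a character-level overlap
def chunk_markdown(text, max_chars=1500, overlap_chars=200):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = current[-overlap_chars:]  # carry a tail of the previous chunk
        current = (current + "\n\n" + para).strip()
    if current:
        chunks.append(current)
    return chunks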

Vector Similarity Metrics

Choose the right distance function:

  • Cosine Similarity: Best for semantic similarity (recommended for most cases)
  • Euclidean Distance: Good for exact matching scenarios
  • Dot Product: Useful when magnitude matters (pgvector's operators for all three metrics are shown below)
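
In pgvector, each metric has its own operator, and the operator must match the index's operator class for the index to be used. A sketch, assuming the document_chunks table from the schema section below:

-- Cosine distance: pairs with vector_cosine_ops
SELECT content FROM document_chunks
ORDER BY embedding <=> %s::vector
LIMIT 5;

-- <->  Euclidean (L2) distance: pairs with vector_l2_ops
-- <#>  negative inner product: pairs with vector_ip_ops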

Performance Optimization

Key areas to monitor and optimize:

  • Index Configuration: Tune pgvector indexes based on your data size
  • Batch Operations: Process multiple documents efficiently
  • Caching: Cache frequently accessed embeddings and responses
  • Connection Pooling: Manage database connections effectively (see the sketch after this list)
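
To make the batching and pooling points concrete, here is a sketch using psycopg2's built-in connection pool and execute_values. It assumes the get_embedding function from the code examples below; the DSN is a placeholder:

from psycopg2.pool import SimpleConnectionPool
from psycopg2.extras import execute_values

pool = SimpleConnectionPool(1, 10, dsn="dbname=rag user=rag")

def store_chunks_batch(doc_name, chunks):
    # One embedding call per chunk; str() yields pgvector's '[...]' text format
    rows = [(doc_name, c, str(get_embedding(c))) for c in chunks]
    conn = pool.getconn()
    try:
        with conn.cursor() as cursor:
            execute_values(cursor, """
                INSERT INTO document_chunks (document_name, content, embedding)
                VALUES %s
            """, rows, template="(%s, %s, %s::vector)")
        conn.commit()
    finally:
        pool.putconn(conn)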

Real-World Benefits

This RAG architecture delivers tangible value:

For Developers:

  • Rapid deployment using familiar PostgreSQL infrastructure
  • Consistent API patterns with Google's model ecosystem
  • Easy debugging and monitoring with standard database tools (example below)
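
The monitoring point is literal: since everything lives in PostgreSQL, a plain EXPLAIN ANALYZE (with a real embedding substituted for the placeholder) shows whether the similarity query hits the HNSW index or falls back to a sequential scan:

EXPLAIN ANALYZE
SELECT content, document_name
FROM document_chunks
ORDER BY embedding <=> '[...]'::vector  -- substitute an actual query embedding
LIMIT 5;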

For Organizations:

  • Accurate answers from proprietary documents
  • Reduced hallucination compared to standalone LLMs
  • Auditable responses with source traceability
  • Cost-effective scaling without specialized vector database licensing

For End Users:

  • Fast, relevant responses to complex queries
  • Ability to ask questions about specific documents or topics
  • Contextual answers that cite sources

Database Schema

CREATE EXTENSION IF NOT EXISTS vector;
 
CREATE TABLE document_chunks (
	id SERIAL PRIMARY KEY,
	document_name VARCHAR(500) NOT NULL,
	content TEXT NOT NULL,
	embedding VECTOR(768),
	created_at TIMESTAMP DEFAULT NOW()
);
 
CREATE INDEX ON document_chunks
USING hnsw (embedding vector_cosine_ops);

Code Examples

1. Store Document Chunks

import google.generativeai as genai
import psycopg2

genai.configure(api_key="YOUR_API_KEY")         # Gemini API key
conn = psycopg2.connect("dbname=rag user=rag")  # adjust to your environment
cursor = conn.cursor()

# Generate a 768-dimensional embedding for a text chunk
def get_embedding(text):
    response = genai.embed_content(
        model="models/embedding-001",
        content=text
    )
    return response['embedding']

# Store a chunk and its embedding in the database
def store_chunk(content, doc_name):
    embedding = get_embedding(content)
    cursor.execute("""
        INSERT INTO document_chunks (document_name, content, embedding)
        VALUES (%s, %s, %s::vector)
    """, (doc_name, content, str(embedding)))  # str() yields pgvector's '[...]' format
    conn.commit()

2. Search Similar Chunks

-- Cosine distance (<=>) matches the vector_cosine_ops index defined above
SELECT content, document_name
FROM document_chunks
ORDER BY embedding <=> %s::vector
LIMIT 5;

3. Generate Response

def query_rag(question):
    # Embed the question with the same model used for the documents
    query_vector = get_embedding(question)

    # Retrieve the 5 most similar chunks by cosine distance
    cursor.execute("""
        SELECT content FROM document_chunks
        ORDER BY embedding <=> %s::vector
        LIMIT 5
    """, (str(query_vector),))

    chunks = [row[0] for row in cursor.fetchall()]
    context = "\n".join(chunks)

    # Generate an answer grounded in the retrieved context
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    response = genai.GenerativeModel('gemini-pro').generate_content(prompt)
    return response.text

Getting Started

To implement this architecture:

  1. Set up PostgreSQL with the pgvector extension
  2. Configure Gemini API access for embedding and generation
  3. Create the database schema using the SQL above
  4. Implement the Python classes for document processing and querying
  5. Test with sample documents and optimize based on your use case (a smoke-test sketch follows)
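
A minimal smoke test, assuming the schema and the functions from the code examples above (store_chunk, query_rag, and the chunker sketched earlier):

sample = open("handbook.md").read()       # any markdown document
for chunk in chunk_markdown(sample):      # paragraph-aware chunker from earlier
    store_chunk(chunk, "handbook.md")

print(query_rag("What is the vacation policy?"))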

Conclusion

This RAG architecture represents a practical, production-ready approach to building intelligent document Q&A systems. By combining Google's powerful Gemini models with PostgreSQL's reliability and vector capabilities, you get the best of both worlds: cutting-edge AI performance with enterprise-grade data management.

The beauty of this system lies in its simplicity and power. With just six clear phases, you can transform static documents into an interactive knowledge base that provides accurate, contextual answers to user questions.

Ready to build your own RAG system? The combination of proven technologies and modern AI capabilities makes this the perfect time to start building intelligent applications that truly understand your data.
