Literary Companion is a web application designed to enrich the classic reading experience. As you read a novel, it provides a modern English translation and generates contextually relevant “fun facts” and trivia in a side panel, bringing the story’s world to life.
Built with Python, Flask, and the Google Agent Development Kit (ADK), this project uses generative AI on Vertex AI to create a dynamic, interactive companion for your literary journeys.
The application operates using two primary workflows, both orchestrated by AI agents built with the Google ADK.

Before a novel can be read in the UI, it must be prepared. This is a batch process that you run once for each book.
This workflow is handled by the `BookPreparationCoordinator` agent. The agent uses a tool to read the novel's raw text from Google Cloud Storage (GCS), splits it into paragraphs, and uses a generative model to translate each paragraph into modern English. The final structured data (original text, translated text, and paragraph IDs) is saved back into GCS as a single JSON file.

The second workflow is the core interactive loop that happens as a user reads the book.
When the reader requests trivia, the request is routed to the `FunFactCoordinator` agent, which in turn uses an orchestrator tool to generate different types of fun facts in parallel. These facts are generated by a set of specialized functions that call Vertex AI, and the results are collected and sent back to the user's browser.

Follow these steps to get the Literary Companion running on your local machine.
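As a rough illustration of the fun-fact workflow's parallel fan-out, the orchestrator can submit each specialized generator concurrently and collect the results. The fact types and function names below are invented for the sketch, and the stubs stand in for the actual Vertex AI calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the specialized Vertex AI-backed generators;
# the real fact types and function names may differ.
def historical_fact(passage: str) -> str:
    return f"Historical note about: {passage[:20]}"

def vocabulary_fact(passage: str) -> str:
    return f"Vocabulary note about: {passage[:20]}"

def generate_fun_facts(passage: str) -> list[str]:
    """Fan the passage out to each generator in parallel and collect results."""
    generators = [historical_fact, vocabulary_fact]
    with ThreadPoolExecutor(max_workers=len(generators)) as pool:
        futures = [pool.submit(g, passage) for g in generators]
        return [f.result() for f in futures]
```

Because each fact type is an independent model call, running them in parallel keeps the side panel responsive even as more generators are added.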
You will need the gcloud CLI installed and authenticated. Then clone the repository and set up a virtual environment:

```shell
git clone <your-repo-url>
cd lit-comp
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Configure Environment Variables:
Create a `.env` file in the project root and add the necessary configuration. This file is loaded by the Flask application for local development.

```shell
# .env

# --- GCP Configuration ---
GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
GOOGLE_CLOUD_LOCATION="your-gcp-region"  # e.g., us-central1

# --- Application Specific ---
GCS_BUCKET_NAME="your-gcs-bucket-name"
GCS_FILE_NAME="name-of-your-book.txt"  # The book to load by default

# --- AI Model Configuration ---
DEFAULT_AGENT_MODEL="gemini-1.5-flash-001"
GOOGLE_GENAI_USE_VERTEXAI=TRUE
```
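For orientation, reading these settings at startup might look like the sketch below. It reads straight from the environment (in the app itself, the `.env` file would first be loaded, e.g. via the `python-dotenv` package); the function name and the split between required and defaulted values are assumptions for the sketch:

```python
import os

def load_settings() -> dict:
    """Collect the configuration described in .env from the environment.

    Illustrative sketch: the required/optional split and default values
    here are assumptions, not the project's actual startup code.
    """
    required = ["GOOGLE_CLOUD_PROJECT", "GCS_BUCKET_NAME", "GCS_FILE_NAME"]
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
    return {
        "project": os.environ["GOOGLE_CLOUD_PROJECT"],
        "location": os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1"),
        "bucket": os.environ["GCS_BUCKET_NAME"],
        "book": os.environ["GCS_FILE_NAME"],
        "model": os.getenv("DEFAULT_AGENT_MODEL", "gemini-1.5-flash-001"),
    }
```

Failing fast on missing required values gives a clearer error than a GCS or Vertex AI call failing later.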
Upload a Book: Upload a plain text (.txt) version of a novel to your GCS bucket.
Run the preparation script:

```shell
python scripts/run_book_preparation.py --bucket "your-gcs-bucket-name" --file "name-of-your-book.txt"
```
This will create a `_prepared.json` file in your bucket.
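Conceptually, the preparation step turns raw text into paragraph records pairing each original paragraph with its translation. The sketch below shows that shape; the `translate` callable stands in for the agent's generative-model call, and the exact field names are assumptions rather than the project's real schema:

```python
import json

def prepare_book(raw_text: str, translate) -> str:
    """Split raw text into paragraphs and pair each with its translation.

    `translate` stands in for the generative-model call the agent makes;
    the record fields below are illustrative, not the actual schema.
    """
    paragraphs = [p.strip() for p in raw_text.split("\n\n") if p.strip()]
    records = [
        {
            "paragraph_id": i,
            "original_text": p,
            "translated_text": translate(p),
        }
        for i, p in enumerate(paragraphs)
    ]
    # The real tool uploads this JSON back to GCS as <book>_prepared.json.
    return json.dumps({"paragraphs": records}, indent=2)
```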
Run the application:

```shell
python app.py
```
Open your browser and navigate to http://127.0.0.1:5001 to start reading.

After preparing a book, you may want to create a smaller version for testing or demos. The `scripts/filter_prepared_book.py` script allows you to do this.
It produces a new `_prepared.json` file containing only chapters up to a specified number:

```shell
python scripts/filter_prepared_book.py path/to/your_book_prepared.json 10
```
This command will create a new file (e.g., your_book_prepared_chap_1-10.json) in the same directory, containing only the content from chapters 1 through 10.
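The filtering itself can be sketched roughly as follows, assuming each paragraph record carries a chapter number (a guess about the prepared-book schema, not confirmed by the source):

```python
def filter_chapters(prepared: dict, max_chapter: int) -> dict:
    """Keep only content from chapters 1..max_chapter.

    Simplified sketch of what scripts/filter_prepared_book.py is described
    as doing; the "chapter" field on each record is an assumed schema detail.
    """
    kept = [
        p for p in prepared["paragraphs"]
        if p.get("chapter", 0) <= max_chapter
    ]
    # Preserve any book-level metadata; only the paragraph list shrinks.
    return {**prepared, "paragraphs": kept}
```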
We welcome contributions! Here are a few ideas to get you started:
Immersive Audio Experience: Add a “Listen” button that generates and plays short, relevant audio clips for the current scene. This could involve creating a new tool that uses a text-to-audio generation model to create background soundscapes (e.g., a bustling 19th-century London street, the creaking of a ship at sea) or sound effects for key actions.
Multi-Format Book Support:
Currently, the application only processes .txt files. A great enhancement would be to extend the book_processor_tool to handle popular ebook formats like .epub and .pdf. This would require adding libraries like ebooklib and PyPDF2 to parse these files and extract their text content before passing it to the translation and analysis pipeline.
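One way to structure that extension is to dispatch on file extension before the text reaches the pipeline. In this sketch only `.txt` is implemented; the other branches mark where ebooklib and PyPDF2 parsing would plug in, and the function name is an illustration rather than the tool's actual API:

```python
from pathlib import Path

def extract_text(path: str) -> str:
    """Dispatch on file extension before the translation pipeline runs.

    Only .txt is handled here; the .epub and .pdf branches show where
    third-party parsers (ebooklib, PyPDF2/pypdf) would be wired in.
    """
    suffix = Path(path).suffix.lower()
    if suffix == ".txt":
        return Path(path).read_text(encoding="utf-8")
    if suffix == ".epub":
        # e.g. ebooklib: walk the spine items and strip the XHTML markup
        raise NotImplementedError("epub support: parse with ebooklib")
    if suffix == ".pdf":
        # e.g. PyPDF2/pypdf: concatenate extracted text page by page
        raise NotImplementedError("pdf support: parse with PyPDF2/pypdf")
    raise ValueError(f"Unsupported format: {suffix}")
```

Keeping extraction behind one function means the translation and analysis pipeline never needs to know which format the book arrived in.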
Persistent User Sessions with Redis:
The current InMemorySessionService loses user history when the app restarts, which is not ideal for a production environment. A valuable contribution would be to replace it with a persistent session store using Redis. This would involve creating a RedisSessionService class that implements the BaseSessionService interface, connecting to a Redis instance, and handling the serialization/deserialization of session data.
This project is licensed under the Apache 2.0 License.