We've added Swagger documentation to the backend, giving the Slotli Auth API an auto-generated UI and a formal schema definition. This update also includes initial Vercel deployment configuration to streamline the production pipeline.
Added a vercel.json configuration file to the backend to streamline deployment, including custom build commands with Prisma client generation and an increased API function duration limit. The file also handles request routing via path rewrites to the main API entry point. These changes simplify the CI/CD pipeline and ensure proper environment handling for serverless deployments.
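For illustration, a minimal vercel.json along these lines would cover the pieces described above; the exact commands, paths, and duration value here are assumptions, not the actual file:

```json
{
  "buildCommand": "prisma generate && npm run build",
  "outputDirectory": "dist",
  "functions": {
    "api/index.ts": { "maxDuration": 30 }
  },
  "rewrites": [
    { "source": "/(.*)", "destination": "/api/index" }
  ]
}
```

The `rewrites` rule is what funnels all incoming paths to the single API entry point, and `maxDuration` raises the serverless function's execution limit.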
This update modifies the SSE notification event format to include the topic name when messages are broadcast via /push/topic. By adding the topic to the notification data, subscribers can now easily identify which channel triggered a specific push event. I've also updated the tests to verify that the topic field is correctly propagated to recipients.
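The shape of the change can be sketched as follows; the names here are hypothetical, not the actual implementation:

```typescript
// Hypothetical shape of a topic broadcast: the topic name rides along
// with the payload so subscribers can tell channels apart.
interface TopicNotification {
  topic: string;
  data: string;
}

// Build the SSE payload for a message pushed via /push/topic.
function buildTopicEvent(topic: string, data: string): string {
  const notification: TopicNotification = { topic, data };
  return `data: ${JSON.stringify(notification)}\n\n`;
}
```

Subscribers then read `topic` off the parsed event data to route messages client-side.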
Refactored the visualization prompt to live centrally in backend/src/shared/prompts.ts instead of a separate subdirectory. This cleanup fixes a broken import in the visualization service and simplifies prompt management across the backend, keeping the agent's charting logic modular.
This update introduces a robust chat system, bridging backend controller logic with new frontend components for dynamic communication. Users can now share attachments and view data-driven visualizations directly within their chat sessions, powered by a new dedicated visualization service. This enhancement significantly enriches the collaborative experience across the platform.

Added knowledge node context to the chat interface, allowing users to ask questions directly about specific topics. The backend now injects relevant node content as context to the Gemini AI, improving answer accuracy for students. We also updated the frontend to auto-attach the current node to the chat session when accessed from the tree. 
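The context-injection step can be sketched like this; the prompt shape and function name are assumptions for illustration, not the actual backend code:

```typescript
// Prepend the knowledge-node content as context to the user's question
// before sending the combined prompt to the model.
function buildPrompt(
  nodeTitle: string,
  nodeContent: string,
  question: string
): string {
  return [
    `Context (knowledge node "${nodeTitle}"):`,
    nodeContent,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```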
This initial commit introduces rss-push, a utility that polls RSS/Atom feeds on a cron schedule and pushes detected changes to the pns.1lattice.co service. Built with Bun, it implements a stateful diffing mechanism to ensure only updates are relayed, providing a lightweight way to automate feed-based notifications. It's ready to use with a simple JSON configuration.
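The stateful diffing at the core of rss-push reduces to comparing fetched items against previously seen identifiers; a sketch of the idea, with hypothetical names and an in-memory stand-in for the persisted state:

```typescript
// Diff a freshly fetched feed against previously seen item GUIDs so
// only new entries are pushed downstream; `seen` is persisted between
// cron runs in the real utility.
interface FeedItem {
  guid: string;
  title: string;
}

function diffFeed(items: FeedItem[], seen: Set<string>): FeedItem[] {
  const fresh = items.filter((item) => !seen.has(item.guid));
  for (const item of fresh) seen.add(item.guid); // record for the next poll
  return fresh;
}
```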
We've transitioned the /push endpoint from utilizing a query parameter (?pubkey=) to a cleaner path parameter structure (/push/:pubkey). This change aligns the API with standard REST practices, allows for stricter schema validation, and maps missing keys to a more appropriate 404 status code. It's a small change that significantly improves the clarity and resource-oriented design of our endpoint architecture. 
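The routing change can be illustrated with a small path matcher; this is a sketch, not the service's actual router:

```typescript
// Extract the pubkey from a /push/:pubkey path. A null result means
// the path doesn't name a resource, which the handler maps to 404.
function matchPushRoute(pathname: string): string | null {
  const match = pathname.match(/^\/push\/([^/]+)$/);
  return match ? match[1] : null;
}
```

With the old `?pubkey=` form, a missing key was just an absent query parameter; with a path parameter, a missing key means the URL doesn't identify a resource at all, which is exactly what 404 expresses.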
This update introduces a streamlined deployment workflow for Dokku, including a setup-dokku.sh script to automate app creation, port mapping, Redis integration, and HTTPS via Let's Encrypt. A custom nginx.conf.sigil was added to handle HTTPS redirects and proxy configurations, ensuring secure and scalable deployments for production environments. This lowers the barrier to entry for hosting by codifying the infrastructure setup.
Updated the docker-compose setup to expose the Nginx service on port 80. This enables standard HTTP access for the deployment, simplifying integration with load balancers and existing network infrastructures. 
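The relevant fragment of the compose file would look roughly like this (the image name is an assumption):

```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"   # host:container — standard HTTP access
```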
Updated the project README with a link to a YouTube demo video showcasing EchOS in action. This should make it easier for new users to get a quick visual overview of the system's capabilities.
This update introduces architectural support for scaling SSE servers horizontally by adding an Nginx configuration and simplifying the messaging API. The /push/token endpoint has been replaced with a cleaner POST /push?pubkey=<pubkey> interface to streamline interaction, and example scripts have been updated to reflect these changes. This setup now allows running multiple instances behind a load balancer to increase capacity.
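For long-lived SSE connections, the Nginx front typically needs buffering disabled and a generous read timeout; a sketch of such a configuration (instance addresses and timeouts are assumptions):

```nginx
upstream sse_backend {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        proxy_pass http://sse_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;       # stream events to clients immediately
        proxy_read_timeout 1h;     # keep idle SSE connections open
    }
}
```

Adding more `server` entries to the upstream block is what lets capacity scale horizontally.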
Updated the /send endpoint so it now buffers messages for recipients who are currently offline, returning 202 Accepted instead of 404 Not Found. This change ensures that messages are persisted and delivered to the recipient upon reconnection. Updated documentation and test suites to reflect this new delivery behavior.
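The decision logic reduces to a presence check; a minimal sketch with hypothetical names and in-memory state standing in for the real delivery and persistence layers:

```typescript
// Decide the /send response: deliver immediately when the recipient is
// connected, otherwise buffer the message and acknowledge with 202.
function handleSend(
  online: Set<string>,
  buffer: Map<string, string[]>,
  recipient: string,
  message: string
): number {
  if (online.has(recipient)) {
    // deliver over the live SSE connection (elided here)
    return 200;
  }
  const queue = buffer.get(recipient) ?? [];
  queue.push(message); // persisted until the recipient reconnects
  buffer.set(recipient, queue);
  return 202; // accepted for later delivery
}
```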
Simplified the conditional logic within the push request handlers in index.ts to reduce noise and improve readability. Additionally, fixed a discrepancy where one of the agents' push calls incorrectly returned a 404 instead of handling buffered delivery, ensuring consistent response behavior across the API and leaving the codebase cleaner and more maintainable.
Moved the client registration step before the event replay logic in the SSE subscription flow to resolve a race condition. By ensuring the registry is fully aware of the client connection prior to attempting to retrieve missed events, we prevent potential delivery gaps for newly connected agents. 
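The ordering fix can be sketched as follows (names are hypothetical): registering first guarantees that any event arriving mid-replay is delivered live rather than falling into the gap between replay and registration.

```typescript
// Subscription flow: register the client first so events arriving
// while the replay runs are not lost, then backfill missed events.
function subscribe(
  registry: Map<string, (event: string) => void>,
  replayMissed: (lastId: string | null) => string[],
  pubkey: string,
  lastEventId: string | null,
  send: (event: string) => void
): void {
  registry.set(pubkey, send); // 1. register before replay
  for (const event of replayMissed(lastEventId)) {
    send(event); // 2. then replay anything missed
  }
}
```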
We've implemented tracking for the last received Server-Sent Event (SSE) ID in our Lattice platform gateway. By including this ID in the Last-Event-ID header during reconnect attempts, the server can now reliably replay any events missed during temporary connection drops. This ensures a smoother, more resilient experience for event-driven workflows.
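Client-side, the tracking amounts to remembering the most recent event id and replaying it as a header on reconnect; a sketch with hypothetical names:

```typescript
// Track the last seen SSE event id and surface it as the
// Last-Event-ID header on the next reconnect attempt, letting the
// server replay anything missed during the connection drop.
class SseReconnectState {
  private lastEventId: string | null = null;

  onEvent(id: string | undefined): void {
    if (id) this.lastEventId = id;
  }

  reconnectHeaders(): Record<string, string> {
    return this.lastEventId ? { "Last-Event-ID": this.lastEventId } : {};
  }
}
```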

We've implemented a robust event buffering mechanism that lets clients recover missed messages after a disconnection. By storing events in a Redis sorted set keyed by pubkey and using monotonic ULIDs, the system can efficiently replay events from the Last-Event-ID header onward. This update also changes the status code to 202 Accepted when events are successfully buffered for an offline agent, giving senders an accurate acknowledgment instead of an error.
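Because monotonic ULIDs sort lexicographically in creation order, replay is just a range scan past the last seen id; in Redis this maps to a sorted set per pubkey queried by lexicographic range. A pure in-memory sketch of the idea (the Redis calls themselves are elided):

```typescript
// Events are keyed by monotonic ULIDs, which sort lexicographically in
// creation order, so replaying after Last-Event-ID is a range scan.
// Here a ULID-sorted in-memory array stands in for the per-pubkey
// Redis sorted set.
interface BufferedEvent {
  id: string;   // monotonic ULID
  data: string;
}

function replayAfter(
  buffered: BufferedEvent[], // assumed sorted by ULID
  lastEventId: string | null
): BufferedEvent[] {
  if (lastEventId === null) return buffered; // no cursor: replay everything
  return buffered.filter((event) => event.id > lastEventId);
}
```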