SHEBEEB S

With a strong foundation in software engineering, I discovered my passion for data-driven decision making and intelligent systems. That curiosity led me to transition into Data Science, where I explore the art of data and passionately solve real-world problems through Machine Learning and storytelling.

Some time ago, I shared a Streamlit prototype called the Mismatched Memory App – a quirky little project that let me chat with an AI using snippets of personal memories, itself inspired by the Hindi series Mismatched. It was a simple one-page app, but the concept felt special. I could feed in stories and ask questions like “Do you remember our trip to the mountains?”, and the app would recall details from those memories. That early version proved the idea was feasible and delightfully fun, but it was also very limited. In this post, I want to take you through how I transformed that barebones prototype into a richer, full-stack application now dubbed MemoVita – complete with a React frontend, a FastAPI backend, cloud storage, and a dash of AI magic. 🎉

Revisiting the Streamlit Prototype

My original Streamlit app was a quick proof-of-concept built in Python. It loaded a handful of written memories (essentially short text entries I provided) and allowed me to query them via an LLM (Large Language Model). The idea was to see if an AI could act as a “memory keeper,” retrieving the right story when prompted.

I still remember the first time it worked: I asked something vague like “Tell me about the day I graduated”, and the app responded with a snippet from a memory I had input about my graduation day. It felt almost like chatting with my past self! 🤯 That’s when I knew this concept had potential.

However, as a Streamlit app, it had some big limitations:

  • Single-Person Focus: I had hard-coded it for one “persona” (basically just my own memories). If I wanted to chat about someone else’s memories (say, my grandma’s stories), I’d have to rerun or modify the app.

  • No Persistent Storage: The memories were static or had to be re-uploaded each session. Streamlit doesn’t come with a database – it’s not really meant for complex data storage.

  • Basic UI: Streamlit gave me a quick interface but not a very interactive one. There was no polished layout, just a text box and output area. Features like image attachments or a chat-style conversation history were out of scope in the prototype.

  • Limited Shareability: I could deploy the Streamlit app for myself, but having others use it (especially simultaneously) or making it feel “app-like” (mobile-friendly, installable, etc.) would be challenging.

In short, the prototype proved the idea but wasn’t something I could easily share with family or friends. I wanted more for MemoVita if it was going to truly bring memories to life.

Why Go Full-Stack?

I’m a big fan of quick wins – and Streamlit was exactly that for version 1.0. But after the initial excitement, I started to dream bigger for this project. I imagined:

  • Multiple memory profiles: What if I could create separate memory collections for different people? For example, one for Mom’s memories, one for Dad’s, one for a close friend, etc., and chat with each of those “personas” individually.

  • A richer, persistent experience: I wanted users (and myself) to be able to add memories on the fly, upload photos, tag events, and have everything saved in a database so it’s there the next time we visit the app.

  • Better UI/UX: A more dynamic interface with multiple panels, a scrollable chat history, and maybe even a calendar to select dates of past events. Basically, a design that feels like a real application rather than a demo.

  • Wider access: It would be awesome if this app could run on a phone or be installed as a little “memory keeper” app (hello PWA!). Streamlit wasn’t going to cut it for that, so a custom frontend was needed.

  • Learning opportunity: On a personal note, I saw this as a chance to level up my full-stack development skills. Rebuilding it with modern tools would teach me a ton (and it definitely did, as I’ll explain).

All these reasons pushed me to move beyond the confines of Streamlit. So, I rolled up my sleeves and started planning a full-stack architecture for MemoVita. It was both exciting and daunting – I was essentially going from a single Python script to managing a frontend, backend, database, and cloud services! 😅

Meet MemoVita’s New Tech Stack

After some research and consideration, I settled on a stack that looked like this:

  • Frontend: Next.js (React) – for a rich, responsive UI and easy deployment. I knew I’d need interactive components (for chat, forms, etc.), and Next.js provides a great developer experience with React. Plus, it can easily be turned into a Progressive Web App.

  • Backend: FastAPI (Python) – to build a RESTful API server. I stayed with Python here because I wanted to reuse my AI/LLM code from the prototype and leverage Python’s ecosystem (libraries like LangChain and OpenAI’s API are Python-friendly). FastAPI is super fast (as the name implies) and easy to write and deploy.

  • Database & Storage: Supabase – this is a backend-as-a-service that provides a hosted Postgres database and convenient file storage buckets. Using Supabase meant I didn’t have to set up my own database server from scratch; I could get authentication and a DB out-of-the-box and focus on the app logic. It’s like an open-source Firebase alternative, and the free tier was plenty for my needs.

  • AI Utilities: LangChain & OpenAI API – LangChain is a framework that helps connect LLMs (like GPT-4) with custom data. I leveraged it to handle embedding my memory texts and retrieving relevant ones to answer questions. The heavy lifting of generating answers (in a friendly, conversational tone) was done by OpenAI’s GPT model behind the scenes.

Overall, this combination covered all the bases: a snappy UI, a robust backend, persistent data storage, and intelligent Q&A capabilities. Now I just had to build it!

(Spoiler: it worked out, and the app is live at memovita.app – with the frontend on Vercel, the backend on Railway, and Supabase in the mix. But getting there was quite the journey.)

Building the Next.js Frontend (User Experience Upgrade)

Moving from Streamlit to Next.js/React was like going from driving a cozy sedan to piloting a spaceship. 🚀 Suddenly I had full control of the interface – which was both empowering and a bit overwhelming. I broke down what I needed in the UI and ended up with three main panels in the app:

  • 1. People List (Profiles Panel): On the left, I created a sidebar that lists all the “people” or memory profiles. Each profile has a Name and a Relation (for example, “Grandma – Family” or “Alice – Friend”). This came from a desire to segregate memories by person. Clicking on a person in this list sets the active persona for the chat. I also added an “Add New Person” button (a nice big + icon) which opens a small form to input a new name and relation. This way, you can create a new memory profile on the fly – no more one-profile limitation! 🎉

  • 2. Chat Window: In the middle is the heart of MemoVita – a chat interface. This is where you can converse with the selected persona. At the top, it dynamically shows “💌 Chat with [Name] ([Relation])” so you always know whose memories you’re querying. The chat itself works like a typical messaging app: you type a question or message at the bottom and hit send, and it appends your message to the chat log (on the right side as “You: …”). Then the “AI persona” replies (on the left side with their name and an avatar icon I chose). The messages stack up, creating a conversation history you can scroll through. This was so much nicer than the Streamlit version, which only showed the latest answer without context.

  • 3. Memory Bank (Timeline Panel): On the right side, I dedicated a panel to display the stored memories for the selected person. Think of this as that person’s journal or timeline. Each memory entry shows the date 🗓️, an event title 🎉 (if provided), tags #️⃣, and the memory description itself. If a memory has a photo attached, a little thumbnail is shown too – clicking it will expand the image. I added a search bar on this panel to filter memories by tag or event, which is handy if the list grows long. And at the bottom, there’s an “Add New Memory” form that can slide open: here you can input a text memory, pick a date, specify an event name, add a tag, and even upload an image. Submitting this form sends the new memory off to the backend (and also directly uploads the image to Supabase storage behind the scenes). The memory list will instantly update with the new entry, which is extremely satisfying to see in real time. 😌
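To make the “Add New Memory” handoff to the backend concrete, here’s a small Python sketch of the kind of JSON payload the form assembles before posting it. The field names and endpoint path here are illustrative – my best-guess reconstruction, not the exact production contract:

```python
from datetime import date

def build_memory_payload(text, memory_date, event=None, tag=None, image_url=None):
    """Assemble the JSON body for a new memory (field names are illustrative)."""
    payload = {
        "text": text,
        "date": memory_date.isoformat(),  # "YYYY-MM-DD"
    }
    # Optional fields are only included when provided
    if event:
        payload["event"] = event
    if tag:
        payload["tag"] = tag
    if image_url:  # the public URL returned by the Supabase upload
        payload["image_url"] = image_url
    return payload

payload = build_memory_payload(
    "We hiked to the lake and had a picnic.",
    date(2021, 6, 12),
    event="Summer hike",
    tag="travel",
)
# The frontend would then POST this to something like:
# {API_URL}/person/Grandma_Family/memory
```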

Styling-wise, I used Tailwind CSS in this project, which made it easy to whip up a decent-looking layout. I’m not a designer, but Tailwind’s utility classes helped me get a clean, consistent look (lots of soft orange accents and rounded corners for a friendly feel). I even got the app working nicely on mobile screens with responsive classes – something that was nearly impossible with Streamlit’s layout.

Another neat addition is that I integrated a calendar date picker (using a React DatePicker library) into the chat panel. It highlights dates on the calendar where memories exist for the selected person. You can click on a highlighted date, and the app will automatically ask “Do you remember what happened on [that date]?” and fetch the memory entry from that day. The memory (and its photo, if any) will come up as a chat response. This turned out to be a fun way to explore memories by date, almost like time-traveling to random days in the past.

Perhaps one of the biggest leaps in the frontend was making the app a Progressive Web App (PWA). I configured Next.js with a service worker (next-pwa) and a manifest, meaning MemoVita can be installed on your phone or computer like a native app. 📱✨ It also enables some offline capabilities – for instance, previously loaded memories could be cached and viewed without internet. This was a completely new territory for me, and seeing the “Install MemoVita” prompt on my phone gave me a nerdy thrill.

Overall, building the frontend from scratch was challenging but rewarding. I had to manage state across multiple components (e.g., when you add a new person, it should immediately reflect in the People list, select it, and clear the chat and memory panels for that new profile). React’s hooks (like useState and useEffect) were my saviors for handling these interactions. There were plenty of hiccups – I wrestled with a few tricky bugs (like scroll behavior in the chat and memory list, or ensuring the form resets properly after submission) – but I learned so much in the process.

Crafting the FastAPI Backend (Brains of the Operation)

On the backend, I went with FastAPI to create a set of REST endpoints that the frontend could call. This was a shift from Streamlit’s “all-in-one” approach – now the frontend and backend are decoupled and talk over HTTP. I actually enjoyed this separation of concerns: the frontend became solely responsible for presentation, while the backend handled data, logic, and AI processing.

Here are some key parts of the backend design:

  • REST API Endpoints: I designed clear endpoints for all core actions. For example:

    • GET /people – retrieve the list of all person profiles (each with name & relation).

    • POST /person – add a new person profile (with a JSON body of name and relation).

    • GET /person/{person_key}/memories – get all memories for a given person (the person_key combines name and relation to uniquely identify a profile, e.g. “Grandma_Family”).

    • POST /person/{person_key}/memory – add a new memory entry under that person (with the memory text, date, event, tag, and image URL in the request body).

    • GET /memories/{person_key}/available-dates – get a list of dates that have memories (for highlighting on the calendar).

    • GET /memories/{person_key}/by-date?date=YYYY-MM-DD – retrieve the memory that happened on a specific date (if any).

    • POST /memories/{person_key}/query – this is the exciting one: given a user’s question, find the best answer from that person’s memories.

    Designing these APIs felt great – it’s like giving your app a language that the frontend can speak. I also paid attention to things like proper HTTP status codes and error messages, so if something went wrong (e.g., fetching memories fails), the frontend could handle it gracefully.

  • Database and Data Models: For storing data, I hooked up the backend to Supabase’s Postgres database. I created tables for People and Memories. Each memory record stores all the fields (person identifier, date, text, event, tag, image URL). FastAPI, together with a database driver (I used the Python psycopg2 library for simple SQL queries), handles the Create/Read/Update operations on these tables. This was another learning curve – in the prototype I didn’t have to worry about persistent storage, but now I needed to ensure data is saved and queried efficiently. Supabase made this easier by providing a connection string and a web UI to manage the DB. I also enabled Row Level Security with service role keys so that the backend can securely talk to the database.
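  The table layout and queries can be sketched like this – I’m using the stdlib sqlite3 module so the snippet runs standalone; in production the same SQL (with %s placeholders instead of ?) would go through psycopg2 to Supabase’s Postgres. The column names are assumptions based on the fields listed above:

```python
import sqlite3

# Stand-in for the Supabase Postgres connection
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE memories (
        id INTEGER PRIMARY KEY,
        person_key TEXT NOT NULL,
        date TEXT,
        text TEXT NOT NULL,
        event TEXT,
        tag TEXT,
        image_url TEXT
    )
    """
)

def add_memory(person_key, text, date, event=None, tag=None, image_url=None):
    # Parameterized insert -- never interpolate user text into SQL directly
    conn.execute(
        "INSERT INTO memories (person_key, text, date, event, tag, image_url) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (person_key, text, date, event, tag, image_url),
    )

def memories_for(person_key):
    rows = conn.execute(
        "SELECT date, text, event, tag, image_url FROM memories "
        "WHERE person_key = ? ORDER BY date",
        (person_key,),
    )
    return rows.fetchall()

add_memory("Grandma_Family", "We baked bread together.", "2019-05-04", tag="food")
```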

  • Supabase Storage for Images: As mentioned, images are uploaded directly from the frontend to Supabase’s storage bucket. The backend doesn’t actually handle the binary data – instead, the frontend uses Supabase JS SDK to upload the file, and then just includes the returned public URL in the POST request to my backend. This way, when I add a memory with a photo, FastAPI simply saves the URL string. It’s efficient and saves me from dealing with file uploads in Python (which can be a headache with large files or when scaling). The images themselves are served by Supabase’s CDN when displayed in the app.

  • Integrating the AI (Q&A logic): One of the coolest parts was implementing the POST /memories/{person_key}/query endpoint. When the frontend sends a question, like “Hey Grandma, how was Mom’s wedding?”, the backend needs to figure out a good answer from Grandma’s memory bank. Here’s how I achieved that:

    • First, I use LangChain to help with retrieving relevant memories. Each memory’s text is turned into an embedding (a numerical vector) using OpenAI’s embedding model. I have a simple vector store (just in-memory or a lightweight one, since the dataset isn’t huge) that can search for the memory vectors closest to the question’s embedding. This yields a handful of candidate memory snippets that are likely relevant to the question.

    • Next, I feed those memory snippets into an OpenAI GPT-4 prompt, along with the question. Essentially, the prompt says: “You are [Name] ([Relation]). You will answer the user’s question using the memories provided. If you don’t know the answer from these, say you don’t remember.” I craft it such that the AI responds in a first-person, personable tone, as if that person is speaking. This prompt engineering was key to making the responses feel authentic and not just a copy-paste of the memory text.

    • Finally, GPT-4 returns an answer, which the FastAPI endpoint sends back as JSON (often along with a reference to an image URL if one was in the memory and is relevant). The frontend then displays this answer in the chat bubble with the persona’s name.

    The result? It genuinely feels like you’re chatting with that person and they’re reminiscing with you. 🥲 For example, I asked my “Grandpa” persona “Do you remember your first car?” and it replied with a detailed description of a memory I had entered about his old Chevy – including the fact that he stalled it on the way home from the dealership, which gave me a good laugh. The mix of factual memory recall and the AI’s conversational phrasing is exactly what I was hoping for.
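The retrieval-then-prompt flow above can be sketched in a few lines of Python. A toy bag-of-words vector stands in for the OpenAI embeddings so the snippet runs offline, and the prompt wording is illustrative rather than my exact production prompt:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for an embedding model: word counts as a sparse vector
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_memories(question, memory_texts, k=2):
    # Rank memories by similarity to the question, keep the best k
    q = embed(question)
    ranked = sorted(memory_texts, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

def build_prompt(name, relation, question, snippets):
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        f"You are {name} ({relation}). Answer the user's question using only "
        f"these memories. If they don't contain the answer, say you don't "
        f"remember.\nMemories:\n{context}\nQuestion: {question}"
    )

memory_texts = [
    "My first car was an old Chevy; I stalled it driving home from the dealership.",
    "We planted the apple tree in the backyard in 1972.",
]
snippets = top_memories("Do you remember your first car?", memory_texts, k=1)
prompt = build_prompt("Grandpa", "Family", "Do you remember your first car?", snippets)
# `prompt` would then be sent to the OpenAI chat API to generate the reply.
```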

I should mention, deploying the FastAPI backend had its own set of lessons. I chose Railway.app to host the Python backend because it streamlines deployment (you can connect a Git repo and it auto-deploys in a Docker container). I had to tackle typical backend deployment stuff like setting up CORS (so that my Next.js frontend domain could talk to the API), managing API keys securely (for OpenAI and Supabase service role), and keeping the response times low. FastAPI performed great – queries and responses are usually returned in a second or two, which keeps the chat experience snappy enough.

Wins, Lessons, and Next Steps

Rebuilding the Mismatched Memory App into MemoVita was a huge win for me on multiple fronts. Not only does the app now do so much more, but I also grew as a developer through this journey. Here are some reflections:

  • Full-Stack Confidence: This was my first time truly combining a custom frontend with a custom backend (and throwing cloud services into the mix). At the start, I was intimidated by handling “everything,” but now, having done it, I feel much more confident in my ability to build scalable apps. It’s empowering to know I can design an idea end-to-end.

  • The Joy of a Good UI/UX: Giving the app a proper interface made it exponentially more enjoyable to use. Little things like the three-panel layout, the visual feedback (notifications like “✅ Memory added successfully!”), and having an image gallery effect for memories really bring it to life. I learned that investing time in UX is worth it – it keeps users (and myself) engaged.

  • New Tools Learned: I got hands-on experience with Next.js, Tailwind CSS, Supabase, and deploying on Vercel/Railway. Each of these had a learning curve. For instance, figuring out authentication or storage rules in Supabase, or optimizing the Next.js build for production. I now have a mental toolbox of these technologies for future projects.

  • AI Prompt Tuning: Working with the OpenAI API taught me the importance of prompt design. Small wording changes in the prompt drastically affected the quality of the memory Q&A. I had to iterate to get the persona’s voice and the balance between accuracy and creativity just right. This was a crash course in practical NLP and it’s something I’ll carry forward as AI continues to be part of my projects.

  • Challenges Overcome: Of course, not everything was smooth sailing. I ran into some tough bugs – one that stands out was an issue with memory duplication when rapidly adding new entries (I eventually realized I wasn’t handling state updates correctly in React). Another challenge was managing environment variables across development and production (secret keys, API endpoints, etc.). It took some tweaking to ensure my local config and deployed config were in sync and secure. Each challenge though forced me to dig deeper and ultimately understand the tech better.

As for what’s next for MemoVita – I have a lot of ideas! A big one is to implement user authentication and accounts, so that others can sign up and create their own memory banks securely. Right now, the app is more of a personal/family use case, but it’d be awesome (and scary 😅) to open it up to the public one day. I also want to keep enhancing the AI aspect; perhaps using more advanced retrieval techniques or even allowing the AI to proactively remind you of a random happy memory each day (how sweet would that be?). And of course, I’ll continue polishing the UI based on feedback – maybe add the ability to edit or delete memories, or share a memory with someone via a link.

At the end of the day, this project has been a labor of love. ❤️ MemoVita is still young, but it’s already come a long way from the Streamlit days. Rebuilding it taught me not just about new frameworks, but also about the value of perseverance and iteration in making an idea shine. If you’ve read this far, thanks for following along on this journey! And if you have a box of memories tucked away somewhere, maybe consider giving them a new digital life – I can personally say it’s been incredibly rewarding to chat with my memories and relive moments I thought I’d forgotten.

— Happy reminiscing, and here’s to the wonderful fusion of tech and memories in MemoVita!