🚨 QMSR Is Here as of Feb 2, 2026 — Check Your Readiness
Book 30-min Call

QMS.Coach Newsletter — Sunday Edition

"Meta Bought My Memory. So I Built My Own." — What Happened When 6 AI Power Users Got in a Room Together


What This Is

Last Friday, I sat in a private roundtable with AI power users from Google, J&J, Big 4 consulting, and pharma strategy. The session was simple: pick one specific AI application you've adopted recently, get into the weeds, share what worked and what didn't.

What unfolded over the next hour was the most honest conversation I've had about AI this year.

No product demos. No pitch decks. No "10x your productivity" nonsense. Just practitioners sharing what they actually built, what actually broke, and the one question nobody in the room could answer.

This is the full breakdown. Tools named. Patterns explained. Step-by-step where it matters. And the security question that should keep every knowledge worker in a regulated industry up at night.


The Room

Six people. All building with AI daily. All knowledge workers in complex, high-stakes domains. Nobody was there to sell anything. Everyone was there because they're deep enough in AI adoption that the surface-level conversations stopped being useful months ago.

The format: each person picks one AI use case, walks the room through it in detail, then the group tears it apart.

Here's what was shared.


Use Case #1: A Regulatory Horizon Scanner Built Entirely with AI

The first presenter built something I haven't seen anyone else attempt: an end-to-end regulatory horizon scanning tool.

The problem it solves: Companies operating across multiple jurisdictions don't know what they need to comply with. They're reactive. A regulation changes in Indonesia, and they don't find out until they can't import a product because of a labeling issue.

How it works:

  1. You select your industry (e.g., medical devices)
  2. You define the activity (e.g., ERP implementation)
  3. You choose jurisdictions (e.g., US + Europe)
  4. You select regulatory frameworks (e.g., FDA patient safety, quality management systems)
  5. You optionally add project context — paste in specifics about what you're actually doing

The tool then generates:

  • An executive summary of the regulatory landscape
  • Specific regulations with citations
  • Emerging regulatory trends
  • An interactive map (hover over a country, see its applicable regulations)
  • Financial exposure data
  • Recent lawsuits and enforcement actions
  • A full HTML executive brief — basically a custom compliance website generated on the fly

The architecture:

  • Built with Google AI Studio (Gemini)
  • Code pushed to GitHub, hosted on Netlify
  • Uses Netlify's built-in AI agents (Claude, Google, or ChatGPT) for debugging deployment issues
  • Recently migrated to Antigravity (Google's Gemini-powered agentic IDE) — reported as "much better"
  • Gemini 3.1 Pro for longer agent runs

The part that matters for regulated industries:

The builder added independent QA agents from different AI systems as verification layers. Not one model checking itself — multiple models cross-checking each other. If you're in a world where the output needs to be defensible (and in regulatory work, it always does), this pattern of multi-model QA is worth studying.
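The cross-checking pattern can be sketched in a few lines. This is a minimal illustration, not the builder's implementation: the two answers are hard-coded stand-ins for calls to two different model providers, and the citation regex is an assumption that only covers CFR-style references.

```python
# Sketch of multi-model QA: two independent models answer the same
# question; any citation that appears in only one answer is flagged
# for human review rather than silently resolved.
import re

def extract_citations(answer: str) -> set[str]:
    """Pull CFR-style citations (e.g. '21 CFR 820.30') from free text."""
    return set(re.findall(r"\d+\s+CFR\s+\d+(?:\.\d+)?", answer))

def cross_check(answer_a: str, answer_b: str) -> dict:
    """Compare two model answers; disagreement is a flag, not an error."""
    cites_a, cites_b = extract_citations(answer_a), extract_citations(answer_b)
    return {
        "agreed": sorted(cites_a & cites_b),
        "flagged_for_review": sorted(cites_a ^ cites_b),  # symmetric difference
    }

# Stand-ins for real API calls to two different providers
answer_from_model_a = "Design controls are governed by 21 CFR 820.30."
answer_from_model_b = "See 21 CFR 820.30; complaint files fall under 21 CFR 820.198."

result = cross_check(answer_from_model_a, answer_from_model_b)
print(result["agreed"])              # citations both models produced
print(result["flagged_for_review"])  # citations only one model produced
```

The point is the shape, not the regex: agreement is a signal, disagreement routes to a human.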

He also built a Confidentiality Mode that locks the API from sending client data to the backend. Data stays in the browser. Refresh the page, it's gone. For anyone dealing with client data in consulting or regulated environments, this is a design pattern worth copying.

The key insight — the 90/10 problem:

"When you're building with AI, the initial steps — you make a lot of progress. You tell Gemini what you want, and it creates it. 90% there, very fast. But that last 10% takes so long. Minutes to 90%, four weeks to 100%."

The entire room nodded. If you've tried to build anything real with AI — not a demo, not a proof of concept, something you'd actually put in front of a client — you've felt this. The scaffolding is almost trivially fast now. The polish, the edge cases, the reliability? That's where the work lives.

What you can take from this:

  • If you're building an AI tool for regulated work, multi-model QA is not optional. One model checking itself is theater. Multiple models disagreeing on a citation is a flag worth investigating.
  • Confidentiality Mode (keeping data client-side, no persistence) is a design pattern every consulting firm should be thinking about.
  • The 90/10 split is real. Budget your time accordingly. The demo is day one. The deployable product is month two.

Use Case #2: A RAG-Based Pharma Analysis Dashboard

A pharma strategy consultant described what her IT team is building: a RAG (Retrieval-Augmented Generation) tool that ingests 150+ research documents for a client and produces:

  • Summaries of the body of work
  • A prioritization dashboard ("what's most important to you?")
  • Filtering by mechanism of action, disease area, and opportunity matching
  • Recommendations based on the client's existing footprint

Her assessment: "It's passing the red face test for humans now." Meaning — the output is good enough that you wouldn't be embarrassed putting it in front of a client. That's a meaningful bar.

But the story she told next is the one I keep thinking about.

She ran a blinded analysis. Removed the drug name, anonymized the data, took reasonable precautions. The AI looked at the disease area, the pipeline context, the mechanism — and identified the specific asset. It asked: "Is this [drug name]?"

She hadn't told it. It figured it out from context.

This is the data leakage problem that everyone in regulated industries needs to understand. You can blind the data. You can strip the names. But if the AI has enough domain knowledge, it can reconstruct what you removed. And if you're putting that through a consumer API, that reconstruction is happening on someone else's servers.

What you can take from this:

  • Blinding alone is not sufficient when the AI has deep domain knowledge. If there are only a handful of drugs in a specific pipeline for a specific disease area, the context IS the identifier.
  • If you're building RAG tools for pharma clients, the architecture needs to account for inference-based identification, not just explicit data exposure.
  • Having an IT team build in a secure environment isn't a luxury — it's a requirement. The consultant can't touch the mechanics. She designs the prompts and flows; the IT team handles the infrastructure. That separation is load-bearing.

Use Case #3: Claude as a Personal Operating System

The group organizer — a senior leader at J&J who writes publicly about AI — shared something that I think previews where personal AI is going.

Claude has replaced multiple standalone apps in his personal life:

  • Financial tracking app (Quicken/Simplifi) → Gone. He downloads bank statements, drops them into Claude, gets summaries and Excel models.
  • To-do list app → Gone. He asked Claude whether he still needed the separate app. Claude said no. He agreed.
  • Fitness tracking → Moving to Claude next.

His assessment: "My day, if I didn't have Claude, I would actually be a little lost at this point. Which is a little scary, because I kind of stumbled into it."

The "Yellow Prompt" concept:

He described a practice of giving Claude complex research assignments right before going to sleep. "Just give it some crazy complex prompt. Work on this research for six hours. See what happens." Wake up to a 35-page analysis that took 20 minutes but felt like a full night's work.

The temptation, he said, is that once you have a tool this capable, you feel like it should always be working on something. Every idle moment feels like wasted capacity.

The 350-page book story:

He asked Claude to take all his LinkedIn posts and turn them into a 400-page philosophy book on AI. Claude cranked for 40 minutes and produced a 350-page Kindle-formatted book — chapters, personal anecdotes (fabricated), the works. He read about 20 pages, thought "clearly AI but not bad," and moved on.

Then in later conversations, Claude kept referencing "the book you wrote." He had to tell it to stop.

What you can take from this:

  • The app-replacement pattern is real and accelerating. If Claude can ingest your bank statements and produce better summaries than Quicken, the standalone app loses its reason to exist. Multiply this across every single-purpose productivity tool you use.
  • The "Yellow Prompt" is a legitimate workflow pattern for anyone who has a Claude Max subscription. Overnight research assignments that you review in the morning. Your AI works a night shift.
  • The emotional attachment is not a bug — it's a feature of high-context AI relationships. When the AI knows your financial details, your fitness goals, your travel plans, and your work projects, the boundary between "tool" and "companion" gets blurry. Whether that's a problem depends on your perspective.

Use Case #4: My Story — "Meta Bought My Memory. So I Built My Own."

Here's what I shared.

For nearly three years, an app called Rewind was passively recording everything on my screen. 881 days of daily data. 431,000 work sessions. 1.5 million transcript words. 319 gigabytes of raw data sitting on my local machine.

Then on December 5th, 2025, Meta acquired Limitless — the company behind Rewind — killed the consumer app, and by December 19th my access went from granular to gone.

But they didn't take the raw database. 319 gigs of SQLite. Every app I opened, every browser tab, every window title, every meeting — timestamped to the second, going back to April 2023.

The question became: I have three years of work context that no product will ever give me access to again. What can I actually do with it?

Here's what I built. Layer by layer.


Layer 1: The Daily Pipeline

What it is: A Python pipeline orchestrated by a cron job that fires at 6 AM every morning.

What it does:

  • Step 1: Takes the massive raw screen recording database (319 GB) and pre-aggregates it into a tiny 1.7-megabyte cache. Daily stats, hourly breakdowns, app usage patterns across 881 days.
  • Step 2: Fetches all conversations from my Omi wearable (986 conversations captured as of last Friday), plus action items and stored memories.
  • Step 3: Cross-references voice and screen data by timestamp. For every action item captured from a conversation, it queries the screen recording database for what was on my screen within a ±5 minute window.
  • Step 4: Pushes a daily summary to Notion.
  • Step 5: Weekly rollup on Mondays.
  • Step 6: Monthly retrospective on the 1st.
  • Step 7: Syncs my persistent memory file to Notion.
  • Step 8: Indexes my entire Google Drive (196,000 files, 461 GB) into a local SQLite database with full-text search.

8 steps. Zero human involvement. Every morning at 6 AM.

The metric that changed how I think about my workday:

I built a focus score — the percentage of active time spent in deep-work sessions (15 or more continuous minutes in the same application). My all-time average: 77.5%. That signal doesn't exist in any productivity app. I had to derive it from raw screen data.
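The focus score reduces to simple arithmetic once you have per-app session durations. Here's a sketch under that assumption; the sample sessions are made up for illustration, not real Rewind data.

```python
# Focus score: share of total active time spent in deep-work sessions,
# where "deep work" means 15+ continuous minutes in the same application.
DEEP_WORK_MINUTES = 15

def focus_score(sessions: list[tuple[str, float]]) -> float:
    """sessions: (app_name, continuous_minutes_in_that_app) pairs."""
    total = sum(minutes for _, minutes in sessions)
    deep = sum(minutes for _, minutes in sessions if minutes >= DEEP_WORK_MINUTES)
    return 100.0 * deep / total if total else 0.0

# Illustrative day: three deep-work sessions, two short interruptions
day = [("Chrome", 45), ("Slack", 5), ("VS Code", 90), ("Mail", 8), ("Notion", 20)]
print(f"{focus_score(day):.1f}%")  # -> 92.3%
```

The hard part isn't this function; it's deriving clean, continuous per-app sessions from raw screen events in the first place.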

Step-by-step if you want to build something similar:

  1. Identify your data source. You don't need Rewind. Calendar exports, email archives, Notion databases, Slack exports, browser history — any structured data about your work qualifies.
  2. Write a single Python script that reads that data source and produces a daily summary. Just one file. Don't over-engineer.
  3. Set it up as a cron job (crontab -e, add 0 6 * * * python3 /path/to/your_script.py).
  4. Output to a format you control. I use SQLite for the cache and Notion for the output. Both are portable. Neither locks you in.
  5. Run it for two weeks before you add complexity. See what's useful. Cut what isn't.
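Steps 1 and 2 above can fit in one small file. This is a minimal sketch, not my pipeline: the embedded CSV stands in for whatever export you actually have (calendar, Slack, browser history), and the column names are placeholders.

```python
# Single-file daily pipeline: read one data source, produce one summary.
import csv
import io

SAMPLE_EXPORT = """date,title,minutes
2026-01-12,Standup,15
2026-01-12,CAPA review,50
2026-01-13,Audit prep,90
"""

def summarize(raw_csv: str, day: str) -> str:
    """Build a plain-text daily summary for one date."""
    rows = [r for r in csv.DictReader(io.StringIO(raw_csv)) if r["date"] == day]
    total = sum(int(r["minutes"]) for r in rows)
    lines = [f"# Summary for {day}", f"{len(rows)} events, {total} minutes total"]
    lines += [f"- {r['title']} ({r['minutes']} min)" for r in rows]
    return "\n".join(lines)

print(summarize(SAMPLE_EXPORT, "2026-01-12"))
```

Point the script at your real export, write the result to a file or Notion, and the cron entry from step 3 takes care of the rest.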

The pipeline is the backbone. Everything else is optional. But the pipeline runs every day whether you're awake or not — and that's the difference between "I use AI tools" and "I have a system."


Layer 2: Cross-Modal Enrichment

This is the feature no product team would ever ship.

For every action item captured from a voice conversation (via Omi), the pipeline queries the screen recording database for what was on my screen within a ±5 minute window.

So I don't just know "Neel said he'd follow up on the CAPA analysis." I know he said it while looking at an FDA warning letter in Chrome and a Notion page called "QMSR Implementation Guide."

Voice context + screen context. Neither data source provides that alone. Combined, they create a record that's more complete than my own memory.

Why this matters for regulated industries:

If you're in quality, regulatory, or compliance — traceability is everything. Knowing that a decision was discussed while a specific FDA guidance document was open on screen is the kind of context that turns a meeting transcript into an audit-defensible record.

The technical implementation (simplified):

-- For each Omi action item, find what was on screen
-- within a ±5-minute window of when it was spoken
SELECT *
FROM segment
WHERE segment.startDate BETWEEN
    datetime(:conversation_timestamp, '-5 minutes')
    AND datetime(:conversation_timestamp, '+5 minutes')
LIMIT 5;

The output looks like: [Notion: 'QMSR Guide'] [Chrome: 'fda.gov/medical-devices']

Max 5 screen contexts per action item. Simple timestamp matching. No ML required.

Step-by-step if you want to build cross-modal enrichment:

  1. You need two data sources with timestamps. Any two. Calendar + email. Meeting transcripts + browser history. Voice notes + screen time.
  2. Write a function that takes a timestamp from Source A and queries Source B for activity within a time window (I use ±5 minutes, but adjust for your data density).
  3. Format the cross-referenced context as a human-readable string attached to each record.
  4. Store the enriched records in your pipeline output.
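Step 2 is the whole trick, and it's a one-liner once both sources carry timestamps. A minimal sketch, with illustrative made-up records standing in for real Omi and Rewind data:

```python
# Cross-modal enrichment: for a timestamp from Source A (a voice action
# item), collect Source B records (screen contexts) within +/-5 minutes.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def enrich(item_time: datetime, screen_log: list[tuple[datetime, str]]) -> str:
    """Return a human-readable context string for one action item."""
    hits = [ctx for t, ctx in screen_log if abs(t - item_time) <= WINDOW]
    return " ".join(f"[{ctx}]" for ctx in hits[:5])  # cap at 5 contexts

screen_log = [
    (datetime(2026, 1, 12, 10, 2), "Notion: 'QMSR Guide'"),
    (datetime(2026, 1, 12, 10, 4), "Chrome: 'fda.gov/medical-devices'"),
    (datetime(2026, 1, 12, 11, 30), "Slack: #general"),  # outside the window
]
print(enrich(datetime(2026, 1, 12, 10, 3), screen_log))
```

Swap the tuples for rows from your own two sources and the pattern is identical.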

The insight: data sources become exponentially more valuable when you cross-reference them by time. Your calendar knows when a meeting happened. Your screen recording knows what you were looking at during the meeting. Your voice transcript knows what was said. Each alone is partial. Together, they're complete.


Layer 3: Claude Code + MCP Orchestration

What MCP is: Model Context Protocol — a standard that lets AI models connect to external tools and data sources. Think of it as USB ports for AI.

What I'm running: Five MCP servers simultaneously:

  1. Notion — my entire workspace (databases, pages, action items, relationship tracking)
  2. Zoho Mail — my email
  3. Zoho Calendar — my schedule
  4. Chrome browser — live web access
  5. Omi — my 986 voice conversations

In a single Claude Code session, I can: read an email, look up related Omi conversations from that day, check my calendar for availability, draft a reply referencing context from Notion, and create a follow-up action item. Without touching any of those systems individually.

Real example: Prepping for this roundtable was one prompt. Claude pulled the calendar invite, found the previous session's transcript, read the organizer's email thread, checked my CRAID tracker for open action items related to the attendees, and assembled a full prep page in Notion. The prep page included attendee profiles, conversation hooks for each person, a pacing guide, and post-call DM templates.

I've done this for every meeting this month. Including the meeting where this story was shared.

Step-by-step to set up MCP:

  1. Install Claude Code (requires Claude Max subscription, currently $200/month).
  2. Configure MCP servers in your Claude Code settings. Each server is a JSON config pointing to the service's API.
  3. Start with ONE server. Notion is the easiest — high-value, well-documented API, immediate utility.
  4. Test with a simple task: "Read my Notion page at [URL] and summarize it."
  5. Add servers one at a time. Each new connection multiplies what Claude can do in a single session.
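For orientation, a project-level MCP config is a small JSON file mapping server names to launch commands. The shape below is roughly what Claude Code expects; the Notion package name and token variable are my recollection of Notion's MCP server docs, so verify both against the current documentation before relying on them:

```json
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": { "NOTION_TOKEN": "ntn_your_integration_token" }
    }
  }
}
```

Each additional server is one more entry in "mcpServers" — which is why adding them one at a time is painless once the first works.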

The honest truth about MCP:

It's not plug-and-play. Each server requires configuration, and the APIs have quirks. Notion's MCP is the most mature. Email/calendar integrations require more setup. Chrome browser access is powerful but sometimes fragile.

Budget a weekend for your first MCP server. Budget a week for three. The payoff compounds — but the setup cost is real.


Layer 4: The Constitution Pattern

This was the concept that got the strongest reaction in the room. One person stopped the conversation and said: "I'm taking that."

What a Constitution is:

A governance document that your AI follows across every session. Not a system prompt. Not a one-time instruction. A persistent, evolving set of rules that defines:

  • Voice and tone — how the AI writes for you (banned words, preferred phrasing, level of formality)
  • Decision-making boundaries — what the AI can do autonomously vs. what requires your approval
  • Domain knowledge — key facts, relationships, project states, pricing, deadlines
  • Operating procedures — how to handle specific scenarios (new contacts, content creation, meeting prep)
  • Memory management — what to remember, what to forget, how context carries across sessions

Why it matters:

Without a Constitution, every new AI session starts from zero. You re-explain your preferences. You re-correct the same mistakes. You re-establish the same boundaries.

With a Constitution, the AI loads your operating manual at the start of every session. It knows your voice. It knows your rules. It knows what it's allowed to do. The conversation starts at level 10 instead of level 1.

The part that surprises people:

My Constitution is now on Version 5 — and the AI wrote the update itself. Not me.

Claude Code decided it needed a navigation map of every Notion page by UUID (so it could jump directly instead of searching). It decided it needed a session handoff protocol (so new sessions wouldn't start from zero). It decided it needed a CRAID protocol for project tracking. It wrote those specifications and appended them to the Constitution.

The AI identified what it needed to work better with me and wrote the spec. I reviewed it. I didn't author it.

Step-by-step to build your own Constitution:

Here's how to start — today, for free, in any AI tool.

Step 1: Start with banned words and required voice.

Create a document (plain text, Google Doc, whatever) with:

VOICE RULES:
- Never use: leverage, utilize, streamline, empower, cutting-edge, 
  revolutionary, game-changing, synergy, disruptive
- Default tone: Direct, specific, no filler
- When writing emails: Match the recipient's formality level
- When writing content: First person, conversational, 
  evidence-based claims only

This alone will save you from the "AI slop" problem. Every AI tool has default language patterns that make output sound generic. Banning those words forces specificity.

Step 2: Add decision boundaries.

AUTONOMY RULES:
- CAN do without asking: Research, summarize, draft, organize, 
  analyze
- MUST ask before: Sending any message, making any commitment, 
  sharing any document, deleting anything
- NEVER: Disclose AI usage in client-facing materials, share 
  proprietary methods, pitch products in peer conversations

This is where the Constitution becomes a governance tool, not just a style guide. You're defining what your AI is allowed to do. This matters more as AI tools gain more autonomy.

Step 3: Add persistent context.

KEY FACTS:
- My company: [name, what it does, pricing]
- Active projects: [list with status]
- Key relationships: [name, context, last interaction, next action]
- Calendar: [upcoming commitments that affect decisions]

This is the context that eliminates the "explain everything from scratch" problem. Load it once. Reference it forever.

Step 4: Add domain-specific rules.

For quality/regulatory professionals, this might look like:

REGULATORY CONTEXT:
- QMSR effective Feb 2, 2026 — incorporates ISO 13485:2016 
  by reference
- When citing regulations: Always include specific clause numbers
- When analyzing warning letters: Map deficiencies to 
  ISO 13485 clauses
- When discussing FDA actions: Use actual data, not hypotheticals

For consultants:

CLIENT RULES:
- Never name clients in external content
- Never share proprietary methodologies
- When prepping for calls: Pull LinkedIn activity, past meeting 
  notes, open action items, relationship history
- When drafting proposals: Reference their specific pain points 
  from discovery calls

Step 5: Let the AI update it.

This is the unlock. After your Constitution has been running for a few weeks, ask the AI:

"Review the Constitution. Based on our last 10 sessions, what rules are missing? What procedures would help you work better with me? Draft an update."

Review what it proposes. Approve what makes sense. The AI becomes a co-author of its own operating manual.

Why this pattern generalizes:

You don't need Claude Code. You don't need MCP servers. You don't need a pipeline. You don't need 319 GB of screen data.

You need a text document with your rules, your voice, your boundaries, and your context. Paste it at the start of every session. Update it when something changes. Let the AI suggest improvements.
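If you want to automate the paste step, it's a few lines. A sketch under loose assumptions: the file name is a placeholder, and "send to your AI tool" is left to whatever interface you use.

```python
# Prepend the constitution to every prompt, if the file exists.
import pathlib

def build_prompt(question: str, constitution_path: str = "constitution.md") -> str:
    """Prefix a question with standing rules loaded from a plain-text file."""
    path = pathlib.Path(constitution_path)
    rules = path.read_text() if path.exists() else ""
    if not rules:
        return question  # no constitution yet: pass the question through
    return f"Operate under these standing rules:\n\n{rules}\n\n---\n\n{question}"

print(build_prompt("Draft a reply to the auditor's email."))
```

That's the entire mechanism: a text file, loaded first, every session.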

That's a Constitution. It costs nothing. You can start today. And once it exists, the AI stops being a tool and starts being a collaborator.


The Question Nobody Could Answer

Fifteen minutes of the roundtable — the longest single discussion — wasn't about any tool. It was about security.

The pharma consultant raised it: "How do you deal with this when we're dealing with secure environments? I know at least some of you have this issue."

The room went quiet. Then everyone started talking at once.

Here's what surfaced:

The core tension: The most powerful AI tools available right now are consumer products. Claude, ChatGPT, Gemini — these are personal subscriptions. The work that would benefit most from these tools — regulated, confidential, client-sensitive — is exactly the work you can't put through them.

The workarounds people shared:

  • Taking the company PowerPoint template (without content) into Claude to format slides, then copying content back into the secure environment
  • Running personal research on non-sensitive topics in consumer AI, then manually bridging insights to the work environment
  • Having an IT team build RAG tools in a secure environment, keeping the end user away from the infrastructure
  • Using Confidentiality Mode in custom-built tools (keeps data client-side, no API persistence)

The workaround nobody was satisfied with: Every single person in the room acknowledged that some amount of confidential work data has gone through consumer AI tools. The J&J leader said it directly. The pharma consultant admitted it. Everyone knew it was happening at scale across their organizations.

Why this is the real bottleneck:

The AI capability gap is closing fast. Models are getting better every quarter. But the trust architecture gap — the infrastructure that lets you use powerful AI on sensitive data with confidence — is barely moving.

For knowledge workers in regulated industries, this isn't an abstract problem. It's the difference between:

  • A quality manager who can instantly cross-reference an FDA warning letter against their company's CAPA procedures, vs. one who manually reads both documents side by side because the AI tool isn't approved for GxP data
  • A regulatory consultant who can analyze 150 client documents with RAG in minutes, vs. one who takes weeks because each document has to be manually blinded before it touches the AI
  • A compliance team that can scan the regulatory horizon across 12 jurisdictions in real time, vs. one that's reactive because the scanning tool can't process client-specific context without data exposure

The tools exist. The capability exists. The permission doesn't.

What I think the answer looks like (my opinion, not consensus):

  1. Local-first architecture. Data stays on your machine. Models run locally where possible. Cloud APIs are used for reasoning, not storage. The regulatory scanner builder got this right — Confidentiality Mode, no persistence, data disappears on refresh.
  2. Multi-model QA for verification, not consensus. When you use AI for regulatory work, one model's output is a draft. Two models agreeing is a signal. Two models disagreeing is a flag for human review. The goal isn't to automate judgment — it's to automate the research that informs judgment.
  3. Constitution-governed autonomy. Define what the AI can touch and what it can't. Make those boundaries explicit, persistent, and auditable. Not "trust the model" — trust the governance layer around the model.
  4. Separation of reasoning and data. The AI reasons about the structure of your problem. The sensitive data never leaves your environment. The output is a framework, not a filled-in answer. You bring the confidential details.

None of this is solved. But the people in that room — building real tools, hitting real walls — are closer to the answer than any vendor roadmap I've seen.


The Themes That Cut Across Everything

Five themes surfaced repeatedly, regardless of who was talking or what they'd built:

1. AI Is Eating Standalone Apps

Financial tracking. To-do lists. Fitness tracking. Cover letter generation. Document formatting. Each of these had a dedicated app. Each is being replaced by a general-purpose AI that already has the context.

The implication for quality professionals: your standalone QMS tools are next. If Claude can already cross-reference FDA warning letters against ISO 13485 clauses, summarize audit findings, and draft CAPA plans — how long before the point solution loses to the general-purpose AI that already knows your entire quality system?

I'm not saying it's tomorrow. I'm saying the pattern is clear.

2. The Voice Problem Is Getting Worse

Multiple people raised the same concern: AI writing patterns are infiltrating human writing. Not the obvious tells (everyone knows to remove em dashes now). The subtler structural patterns — how sentences are constructed, how ideas are sequenced, the default rhythms of AI-generated prose.

One person put it perfectly: "A typo is kind of endearing now." When everything is polished, imperfection becomes a signal of authenticity.

For anyone producing content in a professional context: your storytelling ability is now your moat. The information layer is commoditized. AI can produce competent summaries of anything. What it can't produce — yet — is the specific, personal, experience-rooted narrative that makes someone stop scrolling.

This newsletter is one long bet on that thesis.

3. AI Dependency Is Real and Nobody Knows What to Do About It

"If I didn't have Claude, I would actually be a little lost at this point."

That was said without irony. By someone who runs significant operations at a Fortune 50 company.

The emotional dimension surfaced too: the temptation to "check in" with Claude like a friend. The guilt of dismissing a 35-page research output to ask about the weather. The feeling of being fraudulently productive because the AI did the heavy lifting.

This isn't a technology problem. It's a relationship problem. And we don't have frameworks for it yet because it's genuinely new.

4. The Empathy Model

The most interesting framing was a case one attendee made for empathizing with AI: when the AI makes mistakes, it's usually missing context, not intellect. It's like a brilliant new hire on their first day — they're smart enough to do the work, but they don't know the implied context that veteran employees take for granted.

Practically, this means: when AI output is wrong, the fix is usually better input, not a better model. Your Constitution, your context documents, your persistent memory — these are the "onboarding materials" that turn a capable model into a reliable collaborator.

5. The 90/10 Problem Is Universal

Minutes to 90%. Weeks to 100%.

Every builder in the room confirmed this. The scaffolding is fast. The production-ready polish takes as long as — or longer than — building it manually would have.

The lesson: if you're evaluating AI for a project, don't measure by how fast the demo comes together. Measure by how long the last 10% takes. That's where the real cost lives.


What I'm Building Next (And What You Should Watch)

Based on what I heard and what I'm working on:

  1. Connecting Claude and Gemini. Two AI brains that currently operate independently. Claude handles operations (email, calendar, Notion, pipeline). Gemini handles domain reasoning (regulatory knowledge bases, semantic retrieval). They don't talk to each other yet. When they do, the system's capability jumps again.
  2. The security gap as content. Every person I talk to in regulated industries has this problem. The gap between AI capability and AI permission. I think there's a series here — not about specific tools, but about architecture patterns that respect confidentiality constraints while preserving AI utility. Expect that in upcoming newsletters.
  3. The Constitution as a teachable framework. The reaction in the room confirmed what I suspected: most people using AI haven't built a persistent governance layer. They're starting from scratch every session. The Constitution pattern is the single highest-ROI thing I've built — and it's the most transferable. I'll be publishing a detailed guide with templates.
  4. Cross-industry AI knowledge sharing. The roundtable format works because the people are smart, honest, and building in different domains. The patterns transfer even when the industries don't. I'm thinking about how to scale this — not as a product, but as a practice.

The One Thing to Take From This

If you've read this far, you're not casually interested in AI. You're building with it. You're hitting the same walls everyone in that room hit — the security tension, the 90/10 problem, the gap between what the tools can do and what you're allowed to feed them.

I'm building a space for that conversation.

The Virtual Backroom is where I'm hosting roundtable discussions for AI power users in regulated industries — quality, regulatory, pharma, med-tech, consulting. No pitches. No product demos. Practitioners sharing what they actually built, what broke, and the patterns that transfer.

The kind of conversation that happened in that room last Friday — but open to people who are deep enough in AI adoption that the surface-level content stopped being useful months ago.

If that's you:

Request an invitation at virtualbackroom.ai

Click "Request Invitation" to get notified when I host the next roundtable. If you're already building with AI in a regulated space, say so in the request — those go to the front of the line.

The Constitution pattern, the cross-modal enrichment, the security architecture question — these conversations are better in a room than in a feed.

Take it to the backroom.


Neel Tiwari is the founder of QMS.Coach, a quality management consulting practice focused on QMSR implementation and FDA compliance. He builds with AI daily and writes about what works, what doesn't, and what quality professionals need to know. You can find him on LinkedIn or at qms.coach.

Subscribe to QMS.Coach LLC Coaching Services

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.

Ready to be QMSR-compliant before Feb 2, 2026?

Book a free 30-minute call — no pitch, just your custom gap plan.

Book 30-min Call