
Issue 11: Privacy, Patents, and Chatbot Boundaries

The Flop

Welcome back to The Flop - your cozy corner for demystifying AI with warmth, wit, and a dash of whimsy!

I am back home after a few glorious days in Victoria, BC, one of the loveliest cities I’ve been to in a while. Between the harbor views and the fact that they know how to make a proper cup of tea, I’m a fan.

This week’s edition is all about what happens to the things you type into AI tools and how to protect your info if you need to.

⚖️ Quick disclaimer: I’m not a lawyer, just a curious person figuring out this new world of AI. If you’re working with sensitive legal, medical, or business information and need to be sure, please consult an actual attorney or advisor. When in doubt, better to ask a human.

🐰 Down the Rabbit Hole

Is ChatGPT Confidential?

If you’ve ever used ChatGPT like a digital diary, be it venting, reflecting, or workshopping ideas, you’re not the only one. But here’s the thing: ChatGPT is not a doctor, lawyer, or therapist, and your conversations with it don’t get the same legal protections.

Sam Altman, CEO of OpenAI, made everyone real nervous this week when he went on a pod and casually reminded us that ChatGPT conversations are not legally confidential. That means:

🔒 No legal privilege
Typing something into ChatGPT doesn’t make it private. You’re not protected by HIPAA, attorney-client privilege, or any other formal safeguard.

📜 Chats can be subpoenaed
Even deleted conversations may be retrievable and disclosed in legal proceedings.

🧠 OpenAI may access your chats
Unless you turn off training, your conversations can be reviewed and used to improve the system.

Thanks, Sam. Super fun info for all of us with a raging ChatGPT habit.

💬 Sam and others have publicly called for new legislation to address AI privacy because right now, there’s no clear legal framework. It’s a classic case of the law lagging behind the tech.

💡 Intellectual Property: When “Just Asking ChatGPT” Gets Risky

If you’re using AI to brainstorm inventions, business ideas, or creative work, the stakes can be higher:

🧬 Sharing trade secrets could void protections
Typing a confidential idea into ChatGPT might count as public disclosure, which can ruin your ability to patent it.

🚫 Loss of novelty
Even if no one's watching, U.S. and international law says: once an invention is “made public,” it may no longer be patentable.

🤖 Who owns AI-generated stuff?
Most countries don’t grant copyright or patent rights to AI-only creations. If the bot did the work, you may not be able to claim it.

📚 AI could create prior art
If ChatGPT or another tool outputs something similar before you file, it could block your patent later.

Big thanks to Robbie H. for flagging this particular use case.

🧪 Medical Privacy: What About Health Info?

I know a lot of people using chatbots to optimize their health. I am one of them. Whether it’s analyzing labs, prepping for doctor visits, or making sense of supplements, AI can be empowering when it comes to our health. Here’s what to know:

🔐 HIPAA doesn’t apply to AI tools
If you're using ChatGPT or any consumer AI tool, you’re probably outside HIPAA protections.

🧩 Little control over who sees your data
Most people trust doctors with their medical info. Tech companies - not so much. But when you use a chatbot, you may be consenting to more access than you realize.

🗑️ Deleting isn’t always deleting
Once data is shared, it may be stored, copied, or shared in ways you can’t fully trace.

⚖️ Legal exposure is real
Health data could be subpoenaed and used in court. Yes, even info you typed into a chatbot.

Personally? I’ll use AI to dig into my own lab results. But I don’t put my kids’ info anywhere near a chatbot.

🔎 The Real Question: What’s Your Risk Level?

So… is it safe to share personal info with ChatGPT? The real answer is: it depends. We’ve covered the risks, but what you choose to share will ultimately vary from person to person.

It really comes down to your risk profile and how sensitive the info is. What feels fine for one person might feel too personal for someone else. And that’s okay. You get to decide what’s right for you. That might mean sharing everything, just a little, or consulting an expert to help give you peace of mind.

🛠️ AI Hack of the Week

3 Steps to More Private AI Use

If you do want to use AI but keep things more private, here are three ways to make your chats safer, from least effective to fully locked down.

🟨 Quarter-Step: Turn Off Model Training

ChatGPT and Claude (from Anthropic) both let you limit how your data is used:

  • In ChatGPT: Go to Settings → Data Controls and turn off “Improve the model for everyone.”

  • In Claude: No action needed - Claude doesn’t train on your conversations unless you explicitly opt in.

💡 How this reduces risk:
You stop your data from being used to improve the model, which lowers the chance of human review or unintended reuse. It’s a solid baseline privacy move.

🟦 Half-Step: Use ChatGPT’s “Incognito Mode”

Also called Temporary Chat, this mode means:

  • 🫥 Your convo isn’t saved to your history

  • 🧠 It won’t be used for training

  • 🗑️ It’s stored briefly (30 days), then deleted

  • 📎 Each session starts fresh (no memory)

Claude behaves similarly by default: no memory, no training, no profiles.

💡 How this reduces risk:
You’re not creating a long-term record. That means less exposure if your account is ever accessed, and your chats won’t live forever on someone else’s server.

🟩 Full-Step: Run a Local AI with LM Studio

For maximum privacy, try LM Studio - a free app that lets you download and run open-source AI models entirely on your own computer.

💻 Nothing leaves your device
📵 Can run offline
🔐 Great for journaling, IP, client work, or medical docs

💡 How this reduces risk:
No servers, no uploads, no third parties. Your data stays fully in your hands, making this the most secure option, especially for high-stakes or sensitive information.

The downside: Unlike tools like ChatGPT with browsing enabled, LM Studio models don't pull real-time info from the internet. They can’t look up current events or fetch links - they only know what was included in their training data.
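For the more technically curious: once a model is loaded, LM Studio can also expose a local server that speaks an OpenAI-compatible API (on port 1234 by default), so you can script against your local model without anything leaving your machine. Here’s a minimal sketch in Python - the port, model name, and response shape are assumptions about a default LM Studio setup, so check your own server settings:

```python
# Hypothetical sketch: querying a model running locally in LM Studio.
# Assumes you've loaded a model and started LM Studio's local server,
# which exposes an OpenAI-compatible API on localhost:1234 by default.
# Everything stays on your machine - the request only goes to localhost.
import json
import urllib.request

LOCAL_SERVER = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build the JSON body for a chat request to the local server."""
    return {
        "model": model,  # LM Studio serves whichever model you loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_model(prompt: str) -> str:
    """Send a prompt to the local LM Studio server and return the reply."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_SERVER,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-style responses put the reply text here:
    return data["choices"][0]["message"]["content"]
```

If that looks intimidating, don’t worry - LM Studio’s chat window works just fine without any code. This is only for folks who want to wire a local model into their own tools.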

Note from me:
Because I was traveling this week, I haven’t had a chance to run LM Studio through its paces. Installing it took me about 10 minutes, and setup was easy. I’ll be playing with the local instance this week and will share a deeper dive (with setup tips!) in a future edition of The Flop.

📦 Claude Corner: Is It Safer? Is It Better?

Claude is more private than ChatGPT by default:

✅ No training without consent
✅ Memoryless by default
✅ Strong privacy design principles
⚠️ Still not legally confidential
⚠️ No memory means a clunkier UX - you have to retype context every time, which can get very annoying.

If continuity matters, ChatGPT with memory may be smoother. If privacy is your top concern, Claude is a solid cloud-based option.

🗳️ Nosy Rosie Wants to Know

Quick Poll: How private are you with AI?

How do you feel about sharing personal info with AI tools like ChatGPT or Claude?


👋 Until Next Week

Thanks for being here. I know your inbox is a busy place, and I hope this one made you feel just a little smarter and more prepared in this new world of AI.

Next week, we’ll get back to the fun stuff - how AI can make everyday life easier, smoother, and maybe even a little more delightful. After two weeks of pitfalls and privacy, we’re ready for a little useful magic.

Warmly,
Ricci

Did a friend forward this to you? Sign up for the weekly email here.