Not Your Lawyer, Not Your Doctor: What You Lose When You Trust AI With Sensitive Questions

Smart technology and “free” online tools make life feel effortless. They answer questions in seconds, remember our passwords, and are always within reach.

Assessing Risk

I recently attended a presentation where Michael Blair, founder and CEO of Centristic, discussed how technology and AI sit at the uneasy intersection of convenience and privacy.

The same data collection that enables seamless, personalized experiences also erodes the boundaries of individual security and control.

I had the opportunity to discuss this timely subject further with Michael afterwards.

According to Michael,

“Convenience without governance is just unmanaged risk. The moment you put sensitive information into an AI system without defined controls, you’ve effectively expanded your threat surface and forfeited any expectation of confidentiality. GRC (Governance, Risk Management, and Compliance) exists to ensure that innovation doesn’t outpace accountability—because once that data leaves your control, you don’t get to decide how it’s used, protected, or disclosed.”

Every time we rely on technology—especially for sensitive issues—we trade a little more of our privacy and, in some cases, our legal protections.

“Free AI tools aren’t free—you’re paying with data, context, and control,” Michael said. “From a GRC perspective, that’s an unvetted third-party relationship with no defined safeguards.”

Most people never read the terms of service that come with social media and many apps. Buried in that fine print is language saying that when you post a photo, a video, or even a long comment, you are giving the company a broad license to use it.

In plain English, that means they can reuse, adapt, and distribute what you share, often worldwide and without paying you. Your “private” vacation shots or detailed rant about a bad experience can end up training algorithms, shaping ads, or being stored indefinitely.

The risks increase when the topic is legal, medical, or deeply personal. If you paste the facts of a dispute into a public AI tool and ask, “What should I do about this lawsuit?”, you are not talking to a lawyer.

There is no attorney–client relationship, no privilege, and no duty of confidentiality.
A recent federal case out of New York, United States v. Heppner, made that painfully clear. The court held that AI‑generated documents created through a public platform were not protected by attorney–client privilege or work product at all; the user had effectively shared the information with a third party.

The same basic problem exists on the medical side. Most apps and chatbots are not your doctor and are not bound by HIPAA, even if you pour your entire health history into a text box.

Michael said that “the absence of a confidentiality framework isn’t a technical gap—it’s a governance failure. When sensitive data is shared with AI systems, you have to assume it is discoverable, reusable, and no longer exclusively yours.”

Managing Risk

None of this means you have to go off the grid. It does mean you should be more intentional.

Start with the basics: tighten your privacy settings on social media, turn off ad tracking where you can, and say no to “share with partners” when that pop‑up appears. At home, disable “always listening” features, and switch off microphones and cameras when you are not using them.

Whenever possible, pay for the services that really matter to you. A subscription model that clearly promises no data brokering and limited data collection is usually safer than a “free” tool that survives by profiling you.

Given that true invisibility online is nearly impossible, the goal is to reduce how much of your activity is logged, profiled, and tied to your real‑world identity. The suggestions above can shrink your digital footprint, but they cannot erase it entirely.

For the hardest questions, such as those about your health, your freedom, or your family’s finances, pick up the phone and call a real lawyer, advisor, or doctor. The answers may not be instantaneous, but the conversation will be confidential, and that is still worth protecting.


Michael observed, “Most people think in terms of convenience versus effort. GRC forces a different question: what is the business impact if this data is exposed, reused, or misinterpreted—and are you willing to accept that risk?”

Conclusion

My takeaway from my talk with Michael is that AI adoption by regulated professionals requires heightened sensitivity to confidentiality.


This post is for general informational purposes only and does not constitute legal advice, medical advice, or the formation of an attorney–client relationship. You should not act or refrain from acting based on this content without consulting a licensed attorney or qualified healthcare professional about your specific situation.

