Other AI documentation tools are telling therapists that their session data is "encrypted", an assurance that, hey, it's safe! No one else will be able to access it or understand it, because it's encrypted!
But...
LLMs do not work with encrypted data -- they require plain, readable text. They were trained on plain text. This is simply how they work.
So when session recordings are transcribed, and that transcription text is processed by an LLM to generate a note?
That transcription text has to be decrypted before the LLM can handle it.
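To make this concrete, here's a minimal sketch in Python of what happens behind the scenes. This is not any particular vendor's code -- the key handling, the sample transcript, and the `call_llm` placeholder are all invented -- but the decryption step in the middle is unavoidable:

```python
# Minimal sketch: why an "encrypted" transcript must become plain text
# before an LLM can touch it. (Illustrative only; not any vendor's code.)
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the tool's server holds this key
vault = Fernet(key)

# The transcript sits encrypted "at rest" -- so far, so good.
transcript = "Client reported improved sleep after starting the new routine."
stored_ciphertext = vault.encrypt(transcript.encode())
print(stored_ciphertext[:40])    # unreadable bytes; an LLM can't use this

# To generate a note, the server must decrypt back to plain text:
plaintext = vault.decrypt(stored_ciphertext).decode()

# Only now can the text go into a prompt. `call_llm` is a stand-in for
# whatever model API the tool uses -- it receives the UNENCRYPTED session.
prompt = f"Write a progress note based on this transcript:\n{plaintext}"
# note = call_llm(prompt)       # hypothetical call; the point is `plaintext`
```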
It is frustrating (to say the least) when encryption is used as marketing language by these tools, because it's not the full story.
- Is the data encrypted in transit? Of course. That's what the "s" in "https://" stands for -- secure. This means that the data being sent from your computer to the tool's server is encrypted.
- Is the data encrypted at rest? We'd hope so! That's very standard with most databases and file storage solutions.
- But is the data always encrypted? Is it encrypted while it's being processed? Is it ever decrypted on the tool's server (or on third-party servers)? If AI is being used, the AI (LLM) is working with the unencrypted transcript of the therapy session -- the sketch below walks through all three states.
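Here are those three states side by side in code. Again, this is a hedged sketch -- the key handling and the sample data are invented for illustration -- but it shows exactly where the gap sits:

```python
# A sketch of the three encryption states described above.
# (The key handling and data are invented for illustration.)
from cryptography.fernet import Fernet

# 1. In transit: handled by TLS -- the "s" in https -- automatically,
#    whenever your browser talks to the tool's server. No app code needed.

# 2. At rest: the server encrypts before writing to disk or a database.
vault = Fernet(Fernet.generate_key())
db_row = vault.encrypt(b"Client discussed a recent conflict at work...")

# 3. In use: the moment the server needs to *do* anything with the data --
#    including sending it to an LLM -- it must decrypt back to plain text.
plaintext = vault.decrypt(db_row).decode()
assert "conflict" in plaintext   # readable again, by design
```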
At Quill, we never handle the therapy session recording or transcript. And we do not store the session summaries or documentation that we do handle.
But to be clear, as mentioned above, LLMs work with text -- so our prompts to our LLM include the text of the session summary, because that is how AI and LLMs function. The other AI documentation services are doing this too, except with the entire session transcript rather than a brief summary provided by the therapist.
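The difference is easiest to see side by side. Neither prompt below is Quill's actual wording (or anyone else's) -- it's just a sketch of how much raw session content each approach hands to the model:

```python
# Scope comparison, sketched. Neither prompt is any vendor's real wording.
full_transcript = (
    "Therapist: How has the week been?\n"
    "Client: Rough. I haven't been sleeping...\n"
    "...(every word spoken in the session)..."
)
therapist_summary = "Client reports poor sleep; practiced grounding exercises."

# Typical AI scribe: the entire transcript becomes LLM input.
scribe_prompt = f"Write a progress note from this transcript:\n{full_transcript}"

# Summary-based approach: only the therapist's brief summary is LLM input.
summary_prompt = f"Expand this summary into a progress note:\n{therapist_summary}"
```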
Therapists shouldn't have to be software or security experts, but it's important to know what's happening here -- and how you can best protect the sensitive details that are shared with you during a therapy session.
The key is to understand (and limit) the scope of the data you share with AI tools.