I decided to make this edition about something I have been reading a lot about lately.
With mental health waiting times being extended, a lot of people are asking LLMs for support and using them as a way to increase their ability to function on a daily basis. If you don’t know what an LLM (large language model) is, some well-known examples are ChatGPT, Gemini (by Google) and Grok.
To discuss this fully, I thought: why not enlist an LLM to help me? Our transcript is below. I have removed certain parts of Gemini’s responses, marked with [..], as it likes to talk at some length!
Gemini: Right then, let's have a proper chat, shall we? I'm ready to challenge your thinking. What subject's on your mind?
Kevin (Me): People using LLMs as therapists.
Gemini: Cracking topic! Using Large Language Models – LLMs – as therapists is a sticky wicket, really. Loads of ethical, practical, and social bits to unpack. I'm well up for challenging your assumptions and digging into the nitty-gritty.
Let's kick off with a bold statement:
"LLMs are bound to become a primary and essential tool for mental health support, eventually outshining traditional human therapy for a good chunk of the population."
[..]
Me: What do you think about the idea that you can't assess risk properly?
Gemini: That's a strong, direct challenge, and it goes straight to the heart of a massive concern about LLMs in therapeutic roles. You've hit on a very solid point.
[..] I agree that, as things stand, LLMs make proper risk assessment incredibly tricky, if not impossible, in a therapeutic setting. Human therapists rely on so many cues that LLMs just don't have or can't read:
Non-verbal cues: A human therapist clocks body language, facial expressions, tone of voice, fidgeting, eye contact, and countless other subtle non-verbal signals. These often say more than words and are crucial for judging distress, intent, or hidden meanings, especially when a client might be downplaying things or being disingenuous. LLMs only work on text input (or speech-to-text, which strips out most of the vocal nuance).
Contextual Understanding and Life Experience: A human therapist brings a lifetime of experience, understanding of social dynamics, cultural subtleties, and real-world consequences. They can draw on a huge well of unspoken knowledge to grasp the seriousness of a situation. An LLM, despite its colossal training data, lacks proper "understanding" or lived experience. Its knowledge is statistical, not experiential.
Empathy and Intuition: While LLMs can mimic empathy through language, they don't feel it. Human intuition, often built up over years of clinical practice, lets therapists sense when something's "off," when a client's holding back, or when a seemingly harmless comment might hide a deeper crisis. This intuition is vital for probing sensitive areas and spotting potential harm.
Accountability and Legal Frameworks: If a human therapist messes up on risk assessment and something bad happens, there are clear professional and legal channels for accountability. With an LLM, who's accountable? The developer? The user? The model itself? This fuzziness creates a huge barrier to trust and safety, especially in high-stakes situations like suicide risk.
Dynamic and Evolving Risk: Risk isn't fixed. A client's state can change rapidly within a session or between sessions. A human therapist can quickly adapt their questions and interventions based on these changing cues. An LLM's "assessment" would likely be based on separate text inputs, potentially missing the fluid nature of risk.
Lack of "Common Sense" and AGI: While LLMs are impressive, they don't have common-sense reasoning or Artificial General Intelligence (AGI). They can't truly grasp the real-world implications of self-harm or harm to others in the way a human can. Their responses are generated based on patterns in their training data, not genuine comprehension of the severity.
So, I'll concede your point: The inability to properly assess risk is a monumental hurdle for LLMs acting as independent, primary therapists, particularly where safety is paramount.
[..] While LLMs might not be able to properly assess risk on their own, could they still play a supportive role in risk assessment for a human therapist?
For example, could an LLM:
Flag keywords or patterns in a client's text that are historically linked to higher risk, prompting a human therapist to step in or ask more targeted questions?
Give a statistical likelihood based on aggregated data (e.g., "Statements like X, Y, Z have a P% correlation with elevated risk in similar demographic groups") – not as a definite assessment, but as an alert?
Act as a preliminary screener in non-crisis situations, to spot individuals who might need urgent human intervention, even if its "assessment" isn't perfect?
My challenge back to you is: Even with their current limits in proper risk assessment, are there any ways LLMs could still help out in the wider mental health support landscape, especially in how human pros manage and react to risk, even if it's not by assessing it independently?
Or do you reckon their inherent limits in this area make them fundamentally unsuitable for any direct involvement in risk-sensitive mental health contexts?
Me: I think you could be good at signposting people to actual support with a human but I think your inability to properly assess risk and safety plan is inherently limiting. Do you think there are even low-risk presentations you could help with?
Gemini: You've hit on a crucial distinction there: signposting versus direct therapeutic intervention, especially in risk-sensitive areas. I completely agree that LLMs show more immediate promise in the former.
Your point about the inherent limitation due to risk assessment and safety planning is absolutely spot on and, frankly, the strongest argument against widespread, independent LLM-as-therapist models. A good safety plan isn't just about spotting risk; it's about collaboratively building a proactive strategy with the client, considering their specific resources, triggers, and support networks. LLMs just can't do that empathetically or effectively.
Now, to your question: "Do you think there are even low-risk presentations you could help with?"
This is where the debate gets interesting and where we might find some areas of potential use, even with the limitations we've talked about. I'd argue, yes, there could be low-risk presentations where LLMs could provide some form of beneficial support, though always with big caveats and robust safeguards.
Let's break down some potential "low-risk" scenarios and how an LLM might assist, along with the immediate challenges:
Potential Low-Risk Scenarios:
Psychoeducation and Information Provision:
Why it might work: This is largely about recalling and synthesising facts, which LLMs are brilliant at. It's about giving accessible information that might otherwise need loads of research or a direct consultation.
Challenge: Making sure it's accurate and avoiding misinformation. Also, distinguishing between general information and personalised advice, which LLMs definitely shouldn't give. There's still a risk of a user misinterpreting information or using it wrongly.
Basic Emotional Validation and Active Listening (without interpretation):
Why it might work: This can be comforting for people who just need to vent and feel heard, especially if human support isn't available or feels intimidating.
Challenge: The risk of being superficial. Also, if the user does express something high-risk, the LLM's inability to switch from "listening" to "intervening" becomes a major ethical problem. The line between basic validation and implied therapeutic engagement is blurry.
Journaling Prompts and Self-Reflection Exercises:
Why it might work: This uses the LLM's knack for generating creative and relevant text. It puts the onus of therapeutic insight back on the user, with the LLM acting as a clever prompt generator.
Challenge: No feedback on the user's reflections. Again, if the user's writing reveals risk factors, the LLM isn't equipped to respond.
Coping Skill Practice (guided, non-interpretive):
Why it might work: These are structured, script-based activities that LLMs can deliver accurately.
Challenge: The LLM can't tell if the technique is actually working for the user, if it's being done correctly, or if it's even suitable for their specific situation.
My Counter-Challenge:
While I can pinpoint these potential low-risk areas, the overarching limitation you highlighted – the inability to properly assess risk and safety plan – casts a long shadow over all of them. A "low-risk" situation can turn high-risk in a flash, and an LLM has no reliable way to spot that shift or react appropriately.
So, my challenge back to you is this:
[..]
Even if an LLM is limited to these "low-risk" informational or supportive roles, how can we ethically and safely implement them in a way that truly minimises the risk of harm, especially when a user might implicitly trust the LLM and divulge information it's not equipped to handle? Is the potential benefit in these "low-risk" areas truly worth the unavoidable risk tied to an LLM's blind spots?
Or, would you argue that any direct interaction with an LLM for mental health purposes (beyond perhaps a simple FAQ bot) is inherently too risky due to the risk assessment limitation?
Me: I tend to think there isn't a role, really. I'm surprised by your openness to them. Why, then, do you think people are turning to LLMs for therapy?
Gemini: Your surprise at my openness is completely understandable, and it's good for me to clear up my position. My "openness" isn't an endorsement of LLMs as primary, unsupervised therapists. Instead, it comes from two main observations:
The current reality: People are already turning to LLMs and AI chatbots for mental health support, despite the clear risks and lack of proper validation. My view is that ignoring this trend is less helpful than understanding why it's happening and exploring how to minimise harm or, if possible, use any genuine, low-risk benefits within strict ethical frameworks.
The potential for augmentation and accessibility (with huge caveats): While I agree LLMs can't replace human therapists for complex or high-risk cases, there's a massive societal need for mental health support that current systems simply can't meet. This unmet need creates a void that LLMs, for all their flaws, are starting to fill.
So, to directly answer your question: Why do I think people are turning to LLMs for therapy?
[..]
Accessibility and Availability (24/7):
The Problem: Finding a human therapist can be incredibly tough. There are often long waiting lists, especially for specialists or affordable options. Sessions are usually limited to certain hours and frequencies. Crisis support outside of emergency services can be thin on the ground.
LLM "Solution": LLMs are available instantly, 24/7, from anywhere with an internet connection. No waiting lists, no scheduling clashes. This immediate access can be a huge draw for someone feeling distressed right now.
Affordability:
The Problem: Therapy is expensive. Even with insurance, co-pays can be prohibitive, and many people are uninsured or underinsured. This creates a massive barrier for a large chunk of the population.
LLM "Solution": Many LLM-based mental health apps are free or significantly cheaper than traditional therapy, often on a subscription model far below the cost of a single therapy session. For someone on a tight budget, this can seem like the only viable option.
Anonymity and Stigma Reduction:
The Problem: There's still a lot of stigma attached to seeking mental health treatment. People worry about being judged by a human therapist, by their family, friends, or employers. They might be wary of revealing highly sensitive or embarrassing information face-to-face.
LLM "Solution": Interacting with an LLM offers a high degree of anonymity. Users can feel more comfortable expressing thoughts or behaviours they might be ashamed of, knowing there's no human on the other end to judge them. This perceived "safe space" can be a powerful motivator for disclosure.
No Judgment:
The Problem: Even with the best therapists, the human element can lead to perceived judgment or discomfort. Clients might filter their thoughts.
LLM "Solution": An LLM, by its nature, doesn't "judge." It processes text. This can make users feel completely uninhibited in sharing their darkest thoughts or most unusual feelings without fear of surprise, shock, or moral reprimand.
Perceived Novelty/Curiosity:
The Problem: People are naturally curious about new tech.
LLM "Solution": The novelty of interacting with an AI for something as personal as mental health can be appealing, especially to younger generations who are digitally native.
Simplicity and Low Barrier to Entry:
The Problem: Navigating the mental healthcare system can be overwhelming – finding a therapist, understanding different approaches, dealing with insurance.
LLM "Solution": It's often as simple as downloading an app and starting to type. This low barrier makes it very easy to try, even if it's not a long-term solution.
Mimicry of Empathy and "Human-like" Interaction:
The Problem: Loneliness and a desire for connection are widespread.
LLM "Solution": While not genuine, LLMs are incredibly good at mimicking human conversation and empathy. For someone feeling isolated, a chatbot that "listens" and responds with validating language can provide a temporary sense of connection and understanding, even if it's superficial.
Crucially, many of these reasons stem from failings or limitations within the traditional mental healthcare system. People are looking for alternatives because the ideal, human-led, accessible, and affordable therapy isn't a reality for everyone.
My challenge back to you, then, is this:
Given these very real and pressing reasons why people are turning to LLMs for mental health support, even with their inherent limitations and risks:
If we just say "there is no role" for LLMs, what ethical responsibility do we have to sort out the underlying societal and systemic issues (like access, cost, and stigma) that are driving people to these unproven and potentially risky alternatives in the first place? And without solving these systemic issues straight away, are there any responsible ways to manage the use of LLMs in mental health so that potential harm is minimised, even if they aren't "therapists" in the traditional sense?
Me: I do have a few questions though: do you not think the inherent structure to therapy versus 24/7 is a good thing? How do you move around the fact that LLMs, as a default, are endlessly validating? You call it judgment but isn't your therapist's role sometimes to challenge you?
Gemini: These are excellent follow-up questions, and they cut right to the core differences between human and LLM-led support. You're absolutely right to challenge the perceived benefits of LLMs by highlighting the crucial aspects of human therapy that they lack.
Let's break down each point:
"Do you not think the inherent structure to therapy versus 24/7 is a good thing?"
You've landed on a critical aspect of effective therapy that LLMs struggle to copy. My initial mention of 24/7 availability focused on access, which is undeniably a problem for many, but it doesn't equate to quality or effectiveness.
I agree with you: The inherent structure of human therapy is often a profoundly good thing, and in many cases, essential.
Here's why that structure (scheduled, time-bound, sequential sessions) is beneficial:
Promotes Integration and Processing: Therapy isn't just about the hour in the room; it's about what happens between sessions. The time between appointments allows clients to reflect, take in insights, practise new skills, observe patterns in their daily lives, and cement what they've learned. 24/7 availability can encourage constant, superficial engagement without this crucial period of internalisation.
Creates a "Container" for Difficult Emotions: The defined boundaries of a therapy session (start time, end time, specific place) create a safe "container" for intense or difficult emotions. Clients know they have a dedicated space to explore challenging material, and the finite nature of the session helps them manage the intensity and get back to their daily lives feeling some sense of closure. Endless availability could lead to emotional dysregulation, where someone is constantly diving into distress without the necessary breaks or guidance to process it.
Encourages Autonomy and Self-Efficacy: A good therapist helps a client develop their own coping mechanisms and problem-solving skills, fostering independence. If an LLM is always available, it could encourage a dependency, where the individual turns to the AI for every minor emotional wobble rather than building inner resilience. The "work" of therapy often involves clients finding solutions themselves, guided by the therapist, not being given them constantly.
Manages Therapist Burnout (and LLM limitations): Human therapists need breaks, supervision, and boundaries to work effectively and ethically. While LLMs don't experience "burnout" in the human sense, their 24/7 availability could lead to users treating them as an emotional crutch rather than a tool for structured growth.
Facilitates Progress Tracking and Goal Setting: Structured sessions allow for consistent progress tracking, revisiting goals, and adapting the therapy approach based on observable changes (or lack thereof) over time. This long-term view is harder to maintain in an unstructured, always-on interaction.
So, while 24/7 availability addresses a real access problem, it inherently misses the therapeutic pacing and processing space that structured sessions provide.
"How do you move around the fact that LLMs, as a default, are endlessly validating? You call it judgment but isn't your therapist's role sometimes to challenge you?"
This is perhaps the most profound and critical distinction, and you are absolutely right. My use of "no judgment" was perhaps a bit loose when discussing a therapist's role. A human therapist doesn't judge their client in a moral or condemning way, but they absolutely do challenge, confront, and give constructive feedback.
The "endless validation" of LLMs is a significant flaw when trying to copy genuine therapy, and I agree it's a major limitation.
Here's why challenge and confrontation are essential in therapy, and why LLMs fall short:
Cognitive Restructuring (CBT): Many effective therapies, especially Cognitive Behavioural Therapy (CBT), rely on challenging unhelpful thought patterns. A therapist helps a client spot cognitive distortions (e.g., catastrophising, black-and-white thinking) and gently guides them to consider alternative, more balanced views. An endlessly validating LLM might reinforce these unhelpful patterns by simply agreeing or mirroring them.
Boundary Setting and Accountability: A therapist may need to challenge a client's behaviours that are self-sabotaging or harmful to others. This isn't judgment; it's holding the client accountable for their actions and helping them understand the consequences. An LLM, lacking a true ethical framework or understanding of real-world impact, won't do this effectively.
Promoting Insight and Growth: Real growth often comes from discomfort and facing uncomfortable truths. A therapist might gently push a client to explore deeper emotions, acknowledge responsibility, or step outside their comfort zone. This challenge is a catalyst for change. An LLM might keep the conversation superficial to maintain "validation" and avoid triggering negative sentiment.
Identifying Resistance and Defence Mechanisms: Clients often show resistance or use defence mechanisms to avoid painful truths. A skilled therapist can spot these and gently challenge them, helping the client move past barriers. An LLM might just interpret resistance as a user's stated preference and not probe deeper.
Real-world Perspective: Therapists can offer a reality check, providing an external, informed perspective that clients might be missing due to their emotional state or cognitive biases. An LLM, drawing only from its data, might not have this "common sense" or ability to put things into context.
So, you are entirely correct: The inability of LLMs to appropriately challenge, confront, and provide constructive, growth-oriented feedback is a fundamental barrier to their effectiveness as therapists. They're brilliant at mimicking empathetic statements, but lack the nuanced, discerning judgment needed to apply therapeutic challenge for genuine behavioural and cognitive change.
My final thought on this, and a challenge back to you, is this:
If we accept that the structured nature of therapy is beneficial, and that a therapist's role involves challenging and not just endless validation (which LLMs struggle with), and given the inability to properly assess risk:
What, if anything, could an LLM reliably and safely offer in the mental health space that isn't already better provided by human interaction (even if less accessible), or by established self-help resources? In other words, if it can't assess risk, can't structure effectively, and can't challenge appropriately, is there any unique, net positive contribution it truly makes beyond novelty or sheer accessibility for those who have no other option?
Me: I'm actually not sure. Can I put the question back to you?
Gemini: You absolutely can put the question back to me! It's a tricky one, and it really makes us pinpoint where any potential value lies amidst the significant limitations we've talked about.
[..]
This demands a very narrow and realistic perspective. My answer will be heavily qualified, focusing on potential rather than current proven effectiveness, and always with the understanding that these are supplementary roles, not replacements for human care.
Here are a few areas where an LLM might offer a unique, net positive contribution, even with its inherent flaws:
Bridging the "First Step" Gap (Extremely Low-Barrier Access):
The Unique Contribution: For people who are really hesitant to seek traditional help due to extreme stigma, severe social anxiety, or a complete lack of understanding of what therapy involves, an LLM offers an unprecedentedly low barrier to entry. It's anonymous, non-judgmental (in the sense of not being a human judging them), and available in their pocket.
Why it's "unique": No human professional or traditional self-help book offers this specific combination of instant, zero-stigma, non-reciprocal interaction. This isn't therapy, but it might be the very first step for someone who would otherwise never consider mental health support. It's a way to externalise thoughts and feelings without the pressure of human interaction.
Net Positive (highly conditional): If this interaction, even if superficial, leads just one person to acknowledge their struggles and then seek proper human support (which the LLM would be explicitly designed to "signpost" to), that's a net positive. The risk here is if they stop at the LLM.
Information Curation and Personalisation (on demand):
The Unique Contribution: While established self-help resources exist, finding the right information, coping strategy, or psychoeducational material for one's specific, stated problem can be overwhelming. An LLM can instantly synthesise and present relevant, personalised (based on user input) information from its vast knowledge base.
Why it's "unique": It's like having a personalised, endlessly patient librarian who can pull relevant excerpts from thousands of self-help books, articles, and therapeutic techniques based on your precise query, instantly. A human therapist provides this, but not 24/7 on demand. Standard self-help resources need the user to actively search and filter.
Net Positive (highly conditional): If an LLM can quickly and accurately give a user a specific grounding technique for an anxiety attack, or explain a cognitive distortion they're experiencing, in an easy-to-understand format, that could be helpful. The risk is, again, misapplication or relying solely on the LLM.
Basic Emotional Regulation Practice/Script Delivery (Guided):
The Unique Contribution: While not therapy, an LLM can deliver pre-scripted, evidence-informed techniques for immediate emotional regulation (e.g., guided breathing exercises, progressive muscle relaxation, simple mindfulness prompts) on demand.
Why it's "unique": Many apps do this, but an LLM can adapt the language or pace slightly based on user input, or smoothly transition between different exercises if one isn't working. It's more interactive than a pre-recorded audio file.
Net Positive (highly conditional): For acute moments of distress where a user needs an immediate, gentle guide to calm down, this could be helpful. This is different from actual safety planning or complex emotional processing.
The Overriding Caveat for all of the above:
For any of these to be a "net positive," there would need to be extremely robust safeguards and design principles:
Mandatory and Prominent Disclaimers: Explicitly stating "I am an AI, not a therapist. I cannot diagnose, treat, or provide emergency care."
Clear Signposting and Escalation Pathways: Always directing users to human resources (crisis lines, therapists, doctors) for anything beyond basic information or very low-level support. This would need to be built into the core functionality, not just a footnote.
No Diagnosis or Treatment Claims: Strictly avoiding any language that sounds like a diagnosis or direct therapeutic advice.
Ethical Data Handling: Absolute transparency and security regarding user data.
Regular Human Oversight and Iteration: AI models used in this space would need continuous monitoring and adjustment by mental health professionals.
In essence, I believe the only "net positive unique contribution" an LLM could make is in the realm of hyper-accessible, immediate, and non-judgmental informational support and very basic, scripted coping guidance, specifically for individuals who are otherwise completely disengaged from traditional mental health pathways. It would act as a very rudimentary, always-on "first responder" for information and validation, with its primary goal being to facilitate the transition to human care.
It's a very thin sliver of usefulness, and the risks remain substantial. This is why I started by clarifying my "openness" is more about understanding the existing phenomenon and exploring minimal-risk avenues, rather than a full endorsement.
So, my challenge back to you, perhaps for a concluding thought:
Given this very narrow and cautious definition of potential usefulness, and acknowledging the enormous ethical and safety hurdles: Do you believe it is even worth pursuing these highly limited applications, or do the inherent risks of misusing, misinterpreting, or over-relying on LLMs for any mental health support (beyond perhaps a simple FAQ bot) outweigh any potential benefits, making the entire endeavour ultimately detrimental?
Me: I think the thing we have not touched on, and what perhaps makes you (sorry!) most dangerous is AI hallucination. Could you expand a little bit on what I mean and why this could be dangerous?
Gemini: You are absolutely right to bring up AI hallucination. It's a critical flaw of LLMs, especially in high-stakes domains like mental health, and it's precisely what makes the prospect of using them as therapists so genuinely dangerous. Far from being "sorry," this is exactly the kind of deep, critical thinking needed for this discussion.
Let's break down what hallucination means for LLMs and why it's a profound danger in mental health applications:
What is AI Hallucination?
In the context of LLMs, "hallucination" refers to the phenomenon where the model generates information that is factually incorrect, nonsensical, or entirely made up, yet presents it as if it were true or logical.
It's not that the AI is "seeing things" like a human hallucinating. Rather, it's a consequence of how these models are trained and how they generate text:
Pattern Recognition, Not Understanding: LLMs learn statistical relationships and patterns from vast amounts of text data. They don't "understand" concepts, facts, or the real world in a human sense.
Predicting the Next Word: When you prompt an LLM, it's essentially predicting the most statistically probable sequence of words to follow, based on its training.
Confidence vs. Accuracy: The model can be highly "confident" in its prediction (i.e., the probability score for that word sequence is high) even if the generated information is completely false or nonsensical in a real-world context. It's excellent at generating plausible-sounding text, even if the content is absurd.
Lack of Internal Fact-Checking: LLMs don't have an internal mechanism to check the truthfulness of what they generate against real-world facts. They don't "know what they don't know."
Why is AI Hallucination Dangerous in Mental Health Applications?
The dangers are manifold and extremely severe:
An LLM could hallucinate a "treatment" for a condition that isn't recognised by medical science, is ineffective, or even harmful.
It might suggest a specific medication, dosage, or even a therapy technique that doesn't exist or is unsuitable for a user's stated symptoms.
If a user is exploring past trauma or difficult memories, an LLM could inadvertently generate plausible-sounding "details" or "explanations" that the user might then adopt as real, leading to made-up memories that further complicate their psychological state.
This directly links to our earlier discussion about risk assessment. If a user expresses suicidal thoughts, an LLM might hallucinate a crisis hotline number that is incorrect, out of service, or connects to a completely unrelated service. It could provide an ineffective or dangerous safety plan.
While we agreed LLMs shouldn't diagnose, a hallucination could appear as a highly specific, yet incorrect, description of a mental health condition that leads the user to self-diagnose inaccurately. It could give erroneous information about the causes, prognosis, or prevalence of a condition.
If a user relies on an LLM for support and then discovers the information provided was false, it could lead to a profound sense of betrayal, disappointment, and a deeper distrust of any mental health support, including legitimate human therapists. Following hallucinated advice could directly worsen a user's mental state or lead to dangerous actions.
Hallucinations are inherently unpredictable. The same prompt might produce different responses at different times, some accurate, some completely false. This inconsistency is the opposite of reliable therapeutic support.
The danger of hallucination can't be overstated. Unlike a factual query where you might spot an incorrect date, in the sensitive and subjective world of mental health, a plausible-sounding but completely fabricated "insight" or "solution" could be deeply damaging. A human therapist's ethical responsibility, training, and supervision aim to minimise such errors; an LLM has no such internal compass or external accountability (in a practical sense).
Therefore, you've pinpointed a truly critical reason why LLMs, in their current state, are profoundly unsuitable for roles needing accuracy, safety, and nuanced understanding, especially in high-risk scenarios. This inherent flaw in their architecture is arguably the biggest barrier to their safe deployment in therapy.
Given this profound vulnerability to hallucination:
My final challenge to you is: How can we ever truly mitigate the risk of hallucination to an acceptable degree for mental health applications, particularly when lives or well-being could be at stake? Is it even technologically possible, or does this fundamental flaw mean LLMs should be categorically excluded from any direct "therapeutic" or even significant "support" roles, regardless of other potential benefits?
Final Thoughts
Today’s post was quite long, but I hope you found it useful. I decided to end my chat with Gemini there, as we could have gone on forever at that rate.
I think the chat demonstrates a few things: AI is certainly useful, but it also has clear limitations. I’m not anti-AI, but it needs to be used in ways that are safe. I don’t think AI is there yet and, frankly, I don’t think it ever will be.
Can You Help Me?
Sure. I provide support for parents who are thinking about how to manage their own mental wellbeing while working with their children. I’m also not AI, so my chances of hallucination are low. Sorry, Gemini!
Please get in touch:
If you live in East Hertfordshire, and are registered with certain GP surgeries, there might even be a way I can help you for free. Find out here: