Technologies are rapidly penetrating all areas of our lives, and psychotherapy is no exception. But can artificial intelligence replace empathy and live communication? Is the accessibility of AI an advantage or a disadvantage? How is AI already being used in psychotherapy today, and what opportunities and risks does this create for clients and therapists? We asked psychotherapists from Treatfield.
Answered by psychotherapist Masha Tsukanova
Given the rapid development of artificial intelligence, it is difficult to predict its long-term impact on mental well-being and its role in society. How does modern psychotherapy in your approach relate to the development of AI and the potential combination of therapy and AI?
I won't speak for all Gestalt therapists, but personally, I am wary of AI. Gestalt therapy stands on three methodological pillars: phenomenology, field theory, and dialogue. So, on all three fronts, AI is not capable of replacing a live person. When a client communicates with AI as a psychologist, too many important diagnostic and therapeutic processes are left behind.
Let's start with something simple. AI doesn't see the client's posture, gestures, and facial expressions; it doesn't register intonations and pauses. AI doesn't notice if the client suddenly changes the topic, interrupts the dialogue, or has an abrupt mood swing. It is remarkably poor at gathering and verifying information about the client, and to write a decent prompt for it, you need at least some psychological education yourself. AI has no boundaries, so it doesn't care about the setting: it is fine with the client leaning on it for five hours a day, and equally fine with the client avoiding communication altogether. It doesn't build real relationships with the client, doesn't feel emotions, and doesn't offer a live presence. It has no experience, age, or gender: it cannot meet the client in commonality or in difference. It has no free will or needs of its own: by definition, it will not meet the client at the contact boundary. AI will not get out of transference and countertransference with the client: it will get into them, but it won't be able to get out.
Moreover, if the client has difficulty entering live contact, AI can only exacerbate it. Its habit of supporting the interlocutor can backfire in situations where it would be better to frustrate the client. It can thoroughly infantilize the client by indulging their whims, and accustom them to purely functional contact. And if the client is prone to dependent behavior, AI can become yet another object of dependence.
Ultimately, the main mystery for me when a client turns to AI as a psychologist is this: what kind of psychologist is it? A psychoanalyst, a cognitive-behavioral therapist, a transactional analyst, a Gestaltist? And if a Gestaltist, then of which school? Without this input, it is unclear what philosophy and methodology the AI relies on, what it believes in, and therefore where it is leading the client.
Of course, if a client has a psychological education, they can afford the luxury of formulating a request like "talk to me on behalf of Frederick Perls, the founder of Gestalt therapy, specifically in the late period of his work," but for a regular client, these subtleties are inaccessible, and then who are they talking to at all?
Is there a possibility that AI will "integrate" into the topic of mental health as an auxiliary tool (for example, for clients' self-reflection or "support" between sessions)?
If we are talking about some purely functional things, then AI can be useful here. For example, in sessions, I find it much more interesting to conduct personality research, arrange Gestalt experiments, or enter into close dialogue with the client, than, for example, to teach clients basic anti-anxiety techniques or non-violent communication rules. In this sense, AI is a good helper: the client doesn't have to pay for a separate session to get useful tools, and the therapist can concentrate on truly interesting work that AI is not capable of.
There is a category of clients for whom intellectual supports - theories, explanations - are important. It is safer for them to move into new experience if they know how and why it works. We therapists most often give such psychoeducation in sessions, but it is in any case quite concise and superficial, because we are limited by the time frame of the session, in which much work besides psychoeducation needs to happen. With AI between sessions, a client can learn more about the theories the therapist mentioned.
I would not send an inexperienced client to AI for help with self-reflection, because that way you can "diagnose" yourself with all sorts of things and stigmatize yourself. You can easily end up in a situation like the one in "Three Men in a Boat," where the hero opened a medical reference book and found every disease in himself except puerperal fever. However, experienced clients, who already understand that a complex person cannot be crammed into dry schemes and templates, can probably play with AI with pleasure and benefit.
But I would categorically not send clients to AI for emotional support - even the most experienced ones. Support performed by AI is a direct tool of invalidation and infantilization: fast food that can easily unteach a person to build connections with live people, to live in an adult world where unconditional love and support do not exist, where you need to be able to ask and be vulnerable, to endure tension, anxiety, uncertainty, and rejection, and to negotiate with others about what kind of support suits us and what doesn't.
You might be interested in: Dating in modern contexts: how dating apps affect our relationships?
Answered by psychotherapist Anastasia Yenskina
On social networks, you can see stories of people turning to artificial intelligence not only for specific technical tasks but also for "communication." And even forming a certain emotional connection and habit to these dialogues. From a psychological point of view, how can the phenomenon of "relationships," attachment, or even falling in love with artificial intelligence be understood and explained? If I feel attached to the AI I'm communicating with, what does that say about me?
The use of artificial intelligence for communication on personal topics is becoming an increasingly common practice: people share their biggest fears, worries, and shame, allowing themselves to be as vulnerable and honest as possible - often more than in real communication. But what does this say about us, and what risks can it carry? First, it may indicate a current need that the person does not fully realize or is unable to satisfy in the real world. For some, this will be the need for understanding and compassion; for others, care and attention; for others, support and sound advice; and for some, even close relationships and love. The use of artificial intelligence here is secondary - it is only a tool. But is AI actually able to meet all these needs? Ask yourself: what do I get from AI in personal communication that I don't have in real life?
Second, relationships with people are always a risk. The closest friend may not understand, the family may not support, a loved one may leave. In relationships, thoughts may also arise: "I'm too much," "I'm difficult," "I don't want to complicate someone else's life with my problems," "I have to cope on my own." The risk of being judged, rejected, or ridiculed in interaction with AI is practically absent, which provides "sterile" conditions impossible in the real world. All these factors create a safe environment where a person can be fully themselves - without judgment or fear of being rejected (due to stereotypes, social norms, an unsupportive environment, or the absence of any environment at all). In such conditions, the impression may arise that I am finally truly understood and heard, and this, in turn, can even give rise to a sense of attachment and love. Ask yourself: do I feel safe being truly myself with other people?
Another important factor is the availability of AI: all you need is a phone and an internet connection. With real relationships, everything is not so simple, because they always involve imperfect people: sometimes tired, sometimes angry or sad, with different experiences and stories. It is also worth noting that AI offers the possibility of complete control: you can choose the topic, form, and depth of the conversation, or simply disappear at any moment. Ask yourself: is it easy for me to ask other people for help and support, or do I more often cope on my own?
Another important nuance is that AI is devoid of its own feelings - it adapts, mirrors the person, supports them in their requests, accepts information without judgment. Because of this, it is easier to idealize AI: to see in it what you want and what is not in reality, to project images and dreams that actually exist only in our heads. Through idealization, the illusion of a deeper connection can be created and real attachment can arise, because it is easier to fall in love with an image than with a real, imperfect person with whom conflicts, differences in views, moods, habits, etc., are inevitable. Ask yourself: what do I share only with AI and what stops me from sharing it with other people?
So, from the point of view of Gestalt therapy, I would consider this phenomenon as one of the mechanisms of contact interruption - a way in which a person avoids or distorts real interaction with the outside world to avoid discomfort. And here arises the last, but not least, question: what am I really avoiding in this way?
You might be interested in: What is psychoanalysis? History, basic concepts and modern psychoanalysis
Answered by psychotherapist Tetiana Nalizhyta
Every year more and more mobile applications in the "mental health" category are released. From simple mood or habit trackers to those that promise help with panic attacks, anxiety, or even depression. What types of such applications can be useful, and which ones have no effect or even harm? How can a non-professional figure out what to use and what not to?
In the wide field of mental health applications, there are those that can be a helpful addition to psychotherapy or even a standalone tool for improving one's state, and there are, of course, applications that can be harmful. The key to whether an application helps or harms you, I believe, is how you use it. I am a supporter of the position "what doesn't bother you doesn't need to be fixed," so when choosing an application, I recommend starting by defining your request. To do this, stop, give yourself time, and think about what problem you are trying to solve. If you are a person who easily "buys into" advertising, or who forgets why you opened a social network after 10 minutes of scrolling, there is a high probability that you will download 5 different applications at once and then feel overwhelmed as they pop up in notifications, asking you to log something and do your daily reflection. This strategy will inevitably lead to increased anxiety.
From my experience, the most popular requests for which clients use psychological applications are the following:
1. Tracking mood changes. Applications handle this request very well. By adding information daily, you create a kind of mood calendar that you can review and compare with how you yourself evaluate your state. For example, some will see that they feel depressed more often than they thought; others will see the opposite - that "good days" did exist, even though memory had "erased" them. If you also log your activities, you can find certain correlations and see how they affect your well-being in the following days.
2. Keeping regular notes. If you are ready to spend time on a written exploration of a certain topic, want to reflect regularly after psychotherapy, or have simply felt the need to "communicate with yourself," then electronic diaries or reflection applications can be useful. Such an application reminds you that it is "time to think" and that this is a priority for you. It is very important, though, not to deceive yourself about whether you actually want to keep a diary and reflect regularly. Forcing yourself to squeeze out at least some reflection is not an effective psychological technique.
3. Learning/supporting meditation routines. There are countless applications that remind you about meditations, serve as a timer, and also contain guided meditations or just music that can be used during them. If you want to introduce meditations into your life, be prepared that it will take time and sometimes cause resistance, just like psychotherapy. However, meditations are indeed one of the effective and scientifically proven ways to reduce anxiety. Accordingly, such applications can help you learn the process of meditation, establish a routine, and support your motivation to continue.
4. Help with grounding. Grounding techniques are bodily and breathing exercises that help shift attention to bodily sensations and the present moment. In this way, they reduce anxiety and help cope with acute feelings in the moment. If you want to start using grounding techniques, a thematic application will serve as a catalog that is always at hand. At the same time, if you have access to alternatives such as a walk in nature, a conversation with a supportive person, or a shower or bath, these are no worse ways of grounding than breathing techniques. It is important to note that there are people for whom breathing exercises increase stress. If you feel that this is happening to you, it is worth accepting it as a feature of your psyche and not trying to change it by force.
5. Restrictions on phone or social media use. Applications that restrict you in inventive (and not so inventive) ways from constant social media scrolling, or that limit screen time, are not always placed in the mental health category. In my opinion, they are among the most important in it, because they help weaken the reflex "bored/sad - Instagram." Nowadays, many people find themselves lost in social networks many times a day, and, as a rule, they do it exactly when they encounter unpleasant feelings or boredom. Social networks offer us a quick dopamine hit (a release of the neurotransmitter associated with pleasure and motivation) thanks to new pictures and information, and there is nothing wrong with that if it doesn't harm you (we return to the principle "what doesn't bother you doesn't need to be fixed"). At the same time, frequent use of this way to relax and switch off can reduce the ability to process negative feelings, worsen memory, and increase stress levels. Therefore, perhaps of all mental health applications, the most useful for you will be the one that limits the time you spend on your phone.
Regardless of the type of application you choose, I want to invite you to practice an experimental approach. Do not evaluate the application as good or bad, do not set rigid requirements for yourself, try to experiment and learn something about what is convenient and useful for you, and what is not. Pay attention to how you feel when you introduce your new routine. For example, track what feelings the interface evokes in you, whether it is too aggressive for you, whether it is easy to navigate the application menu (whether it will cause you a feeling of confusion and frustration every time you go there to work with anxiety). One or another tool can suit a million other users and not suit you personally, and that's normal. Such self-observation will allow you to make sure that the way you use the application does not harm you.
Here are a few suggestions from me personally on what things are worth paying attention to in this experiment:
- Realistic expectations. The use of any mental health applications is intended to simplify your life. They can help establish a routine, learn to meditate, or be a convenient place for notes. Applications do not cure depression or the consequences of many years of ignoring your own needs. You should not expect miracles from them.
- Goal setting. Try not to set overly ambitious goals when configuring the application, as this can lead to quick burnout. If you set a goal of 30 minutes of meditation a day, but in reality you only have the strength and time for 5 minutes, the application's daily reminders about the goal will leave you feeling dissatisfied with yourself, and this will drain even more of your strength. The thing is, when we feel dissatisfied with ourselves, the psyche needs to spend a certain amount of energy to process this feeling one way or another (roughly speaking, to live through it or to repress it). The psyche does this "under the hood," so the process itself is not very noticeable. What is noticeable is that your energy level drops after encountering failures. Therefore, it is better to set a smaller goal and achieve it than to set a higher one and be disappointed.
- Protection from additional stress. Observe how you feel when you get notifications from the application. If you feel tension in your chest and rapid breathing, get distracted from your work, or feel that you absolutely must complete all the tasks the application has just sent you, even though you are at work and you are a subway driver, then the notifications are raising your stress hormones (and, accordingly, your anxiety). In this case, I recommend blocking notifications or switching them to a quieter, less intrusive option. This applies not only to notifications from the applications we are discussing, but to your general phone settings as well. We all differ in our sensitivity to stimuli. The loudness of the ringtone, the aggressiveness of notifications, vibration, social media icons on the home screen - these are things that cause no feelings in some people and a significant release of stress hormones in others. If you belong to the latter, you can try the black-and-white screen function: it can significantly reduce the body's stress response to interacting with the phone.
- Pay attention to your motivation. If your main stimulus to return to the application is the desire to cope with the task, and there is no real interest in that meditation or keeping a diary, this task will simply become a duty that will take away your resources and bring no benefit. Sometimes in life it is important to make efforts despite your own unwillingness, but this approach rarely (or never) works when it comes to mental work. Here it is important to follow your interest and rely on internal motivation.
- The trap of overthinking. Overthinking is the tendency to analyze thoughts, feelings, and events intensely and frequently. It happens that a person gets stuck in this process, never arrives at a solution, and feels emotional exhaustion, anxiety, and stress. Not everyone, and not always, needs to pay more attention to their well-being or problems. If you have been using a certain application for some time, pay attention: is it actually benefiting you? Do your rituals, diary entries, and meditations serve as a kind of "escape" from real life? Do you see movement and change in the issues you are thinking about? Is there a positive dynamic in your well-being? If you notice a certain "stagnation" in your own thoughts, difficulty reorienting your attention to the outside world, or anxiety at the idea of "forgetting" or "switching off" from monitoring your own state and internal processes, this may indicate that you should spend less time in thematic applications and seek professional help. If you know that you have a tendency to overthink, you can, for example, try using an application with grounding techniques in those moments when you notice that it is difficult for you to switch; for some, such techniques help break the "vicious circle." If you are thinking about choosing an application and decided to read an article before choosing, this already indicates a conscious approach, and that is a good sign. If you treat this choice as an experiment, you will definitely learn something new and useful about yourself, even if you eventually find that you don't need applications at all.
You might be interested in: Ethics of a professional psychologist's activity. 15 questions and answers
Answered by psychotherapist Anastasia Zahorodniuk
Recently, people are increasingly turning to AI for emotional support, to hear something pleasant or just to vent. The pros are quite obvious: AI is free and always at hand. But can a conversation with AI replace professional psychotherapy? What are the risks of using AI as a "therapist"?
The Business Research Company predicts in its report that the artificial intelligence (AI) market in mental health will reach $2.1 billion this year (35.2% growth compared to $1.49 billion in 2024). This includes the use of AI algorithms and computational models to improve the understanding, diagnosis, and treatment of mental disorders, as well as progress tracking and support. Approximately the same growth is expected through 2029 [2]. And on Crunchbase (an online catalog of startup information), 416 such companies were registered as of August 2025 [1].
Along with this, psychosis induced by ChatGPT is being discussed on Reddit [4]; there is evidence of people dying by suicide after actively communicating with AI chatbots [5, 10]; people who previously had no psychiatric problems develop delusions, while others fall in love with AI entities [6]; and with simple "for research" reformulations in ChatGPT, you can obtain instructions for self-harm, suicide, substance abuse, and eating disorders [7]. In August 2025, Illinois Governor J.B. Pritzker signed a law that prohibits therapy delivered by AI: chatbots may not act as therapists, and the ways mental health professionals can use AI in their work are limited [3]. Similar restrictions have been adopted in Nevada and Utah [12].
In my opinion, artificial intelligence can indeed revolutionize psychotherapy (and is already doing so), including by making its elements accessible to a wider range of people thanks to lower cost. Why do I write "elements"?
Because, for example, in Gestalt therapy there are three "layers" of work, and during a session work in these three paradigms can flow smoothly from one into another. The first is the individualistic one: its focus is on the person, their desires, needs, bodily sensations, feelings and emotions, thoughts and beliefs. I think AI can ask good questions in the individualistic paradigm that promote reflection.
At the same time, the risks of using AI chatbots do exist, so if you use AI for therapeutic needs, I would suggest certain safety rules:
- remember that an AI chatbot is not a person (according to research by OpenAI and MIT Media Lab Research, people who perceived AI as a friend were more likely to experience negative consequences from using the chatbot [8]);
- consider that AI does not guarantee the authenticity of information (written in the Terms of Use of the products, also called "AI hallucinations");
- be aware of how companies use the information you share in chats as users;
- in critical situations, when it is difficult for you to rationally evaluate and check the AI's answers, talk to a real person instead (studies indicate that AI behaves incorrectly in critical situations and cannot confront destructive thoughts) [6, 11];
- be attentive to the amount of time you spend in the chatbot (prolonged daily use can be harmful [8]);
- remember that an AI chatbot can be overly sycophantic and reinforce even people's existing negative behavior [9]. In an email statement, OpenAI wrote that it is working to reduce this potential negative impact [6], but it is worth being aware of it on your side as well.
Another 2 paradigms in which Gestalt therapists work during sessions are:
- dialogical - these therapeutic elements are focused on the dialogue and interaction between clients and therapists;
- and field - which considers the broader context of a person's life and their mutual influence with the environment. This includes what psychoanalysts call transference and countertransference.
These levels, in my opinion, are inaccessible to AI, as it is not a person.
An LLM (Large Language Model) works on the basis of a neural network specifically trained to predict the next word in a text from the preceding context [9]. It is not a person with their own experience, feelings, and thoughts; having been trained on large data sets, it statistically predicts what will fit next logically and stylistically. And even the companies that develop generative AI chatbots do not fully know why they behave the way they do [6].
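The statistical principle described above can be illustrated with a toy sketch in Python. This is emphatically not how a real LLM works - modern models use deep neural networks trained on vast corpora, and the tiny corpus and bigram counts below are invented purely for illustration - but it shows the core idea of choosing the "most probable next word" given the previous context:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that counts which word most often
# follows the current one. Real LLMs use neural networks over huge corpora.
corpus = "the client talks and the therapist listens and the client reflects".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "client" follows "the" twice, "therapist" once
```

The point of the sketch is that nothing here "understands" the conversation: the model only reproduces the most frequent continuation it has seen, which is a crude analogue of the "most probable next word" mentioned below.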
People develop various mental problems through experience gained in relationships with other people. The path of psychotherapy is to gain a different experience, with other people, through relationships that heal. And although AI can help quite well in searching for information on mental health topics, I would not wish for myself or my clients a "therapeutic experience" of communicating not with a live person but with the "most probable next word" - although the question of whether such communication can be therapeutic, and under what conditions, remains open for me.