It is a strange phenomenon I keep noticing when I program with AI. I sit there calmly, focused, having code generated for me — and suddenly something tips. I get impatient, irritable, sometimes genuinely angry. I notice myself starting to lash out at the AI internally. I think things like “this is complete nonsense” or “this can’t be that hard.” And even though I know it makes no sense — the AI doesn’t feel anything, it doesn’t understand aggression — it still happens. It is especially strong when I’m coding. When I write text or develop ideas, I mostly stay relaxed, but with code it turns emotional fast. That made me curious.
Programming with AI as a psychological experiment
At some point I understood that this isn’t a technical problem but a psychological one. The AI is no longer a classic tool for me; it’s a thinking partner. I give it structure, it gives me something back that almost fits — but not quite. And this “almost right” is exactly where things tip. I have to debug things I didn’t write myself and reconstruct assumptions I never explicitly made. At the same time I expect it to work quickly and cleanly. That mix of loss of control and frustrated expectation is what triggers the emotional reaction.
My bully-and-attack mode
What I observe fits well with a model from schema therapy. I slip into a mode called “bully-and-attack”: attacking, dismissive, impatient. A mode aimed at applying pressure and regaining control — even when that objectively makes no sense. The absurdity is obvious: I’m trying to pressure something that cannot react. Yet in the moment it feels logical, as if attacking the problem would solve it. That is where it gets interesting: it has nothing to do with the AI. It’s me.
The moment of awareness
For me the decisive point isn’t avoiding this mode altogether — that doesn’t work reliably anyway. The decisive point is recognizing it. The moment I notice “I’m in bully-and-attack mode right now,” something important happens. I’m no longer fully caught in it; I have a little distance again. That distance is the lever because it lets me choose consciously how to continue.
Back to the healthy adult
Schema therapy also names the counter-position: the healthy adult. When I manage to return from attack mode to that state, my behaviour changes immediately. I become calmer, more precise, clearer. I stop asking why the AI is “so bad” and start asking what I formulated unclearly. I break the problem into smaller steps and think in structure instead of reacting emotionally. Suddenly collaboration works again. That’s no accident — it’s the state in which I work best as a developer.
Why it escalates so much with code
I’ve also come to see why this shows up so strongly in programming. Code is uncompromising. Either it works or it doesn’t; there’s little grey area. While I can live with ambiguity in prose, bad code blocks me immediately. That raises the pressure, and under pressure I fall back on patterns I don’t want. The AI amplifies that because it often produces things that are very close — but not correct. This “almost right” forces me to engage more deeply than if everything were plainly wrong.
What that says about me as a developer
Perhaps the most important insight: this behaviour isn’t new; the AI just makes it visible. Bully-and-attack is a pattern I activate under frustration, and AI is a perfect trigger because it keeps pushing me into those borderline situations. If I take that seriously, it isn’t an annoying side effect but a training ground. Here I learn not only to work better with AI but to steer myself better.
Programming as self-leadership
Programming with AI has become a form of self-leadership for me. I observe myself, notice my states, and practise returning deliberately to a functional mode. That isn’t theory — it affects my work directly. I write better prompts, think more clearly, make fewer mistakes, and reach working solutions faster.
Conclusion
What surprised me most isn’t that AI can be annoying, but how clearly it mirrors my own patterns. Programming with AI is no longer just a technical process for me — it’s a mirror. If I take that mirror seriously, I don’t only become a better developer. I become calmer, clearer, and more effective at what I do.
People have been asking me lately how I deal with the state of the world. To many it looks as if it hardly gets to me. While people around me are genuinely burdened by politics, wars, and economic uncertainty, where the mere mention of a Trump administration, the conflict in Iran, or rising prices is often enough to trigger stress, I stay comparatively calm. I understand that reaction very well; I’ve known it myself. For a long time I was no different.
It wasn’t always like this
I know the feeling that the world “hits you.” That news isn’t just information but hits you emotionally. That you get stuck in your head, run through scenarios, worry, get angry, and end up exhausted without changing anything. Especially when you care about context and want to understand, things can tip quickly. Then interest turns into rumination, and rumination into strain.
What changed for me
The difference today isn’t that the world got simpler. If anything, the opposite. The difference is that I now have a viable context of meaning. I use that term deliberately, echoing Martin Heidegger’s notion of Bewandtniszusammenhang (context of relevance). “Purpose” alone isn’t quite right, because it isn’t only about a goal but about a web of meanings my actions are embedded in.
I have a clear picture of what I’m doing, what I want to build in the coming weeks and years, and where I want to be long term. I know what matters to me, what I value, and what I need to feel well. I shape my life accordingly. That context of meaning isn’t abstract; it’s concrete and guides action. It gives my everyday life structure and direction.
The gravity of meaning
What I observe is that this context of meaning has its own gravity. It keeps pulling me back to what is relevant to me. When I follow the news or global developments, I do it consciously and with limits. I inform myself, think about it, put it in context — but I don’t stay stuck in it. Eventually this “gravity” pulls me back into my own topics.
That doesn’t mean I don’t care about the world. On the contrary. I take it seriously, but I no longer lose myself in it. I distinguish clearly between what I can influence and what lies outside my scope. And I choose actively to put my energy where it has an effect.
Why rumination happens less
It used to be that I’d get stuck on problems where I had no real room to act. That creates a sense of powerlessness — and that is psychologically draining. That happens far less often now because my focus is clearer. I have enough projects, goals, and areas of responsibility to hold my attention. That leaves less room for endless loops about things I cannot change anyway.
That’s no accident
I want to stress that this isn’t coincidence or some trait like “that’s just how I am.” It’s the result of deliberate work. Building a viable context of meaning doesn’t happen on the side. It means engaging with your values, making decisions, and taking responsibility for how you shape your life.
Conclusion
The world hasn’t become less complex or less troubled. But my relationship to it has changed. I no longer let myself be pulled permanently into problems I cannot solve. Instead I orient toward what makes sense for me and where I can have influence. That context of meaning gives me stability. And that is why the world stresses me out far less than it used to.
There is a particular kind of conversation I put off for a long time: the kind where I have to say no. Turning someone down, disappointing someone, setting boundaries — while staying fair, clear, and respectful. Those are exactly the moments when I feel I’m losing my footing internally. I don’t want to hurt anyone, I don’t want to break anything, and at the same time I know I need to say “no.” In situations like that I started using AI as a thinking tool, though not in the way you might expect.
Why “write me a rejection” doesn’t work
The first impulse is often obvious: ask the AI to draft a rejection for me. But that’s the wrong approach. A difficult conversation isn’t a text problem; it’s a thinking problem. If I haven’t understood the situation clearly, no wording will save me. I quickly notice that generated texts are too soft, too harsh, or simply don’t fit. They don’t feel like me because they didn’t grow out of my own clarity.
The Aristotelian angle: acting after deliberation
What really helped me was an idea from Aristotle’s Nicomachean Ethics: good action doesn’t arise purely from spontaneity; you approach the matter “after deliberation.” That means: I take time to think the situation through, consider different perspectives, and prepare a decision consciously. That is where AI becomes interesting for me — it can help structure that deliberation process.
AI as sparring partner, not text generator
I no longer use AI to hand me finished answers but as a sparring partner. I describe the situation in abstract terms, without personal details, and work through questions step by step: What is my goal? What are my real reasons for saying no? What does the other person care about? Where is my conflict? The AI helps me sort these points, spot blind spots, and sharpen my thinking. That doesn’t produce a text first — it produces inner clarity. Only from that clarity do I formulate what I want to say myself.
Privacy: how I handle it carefully
An important point for me is data. When I prepare such conversations, it often involves personal or professional context, and I don’t want to give information away lightly. So I follow a few simple rules: no real names, no specific company names, no identifiable details. I abstract the situation so the structure stays the same but no conclusions about real people are possible. That works surprisingly well, because the thinking process doesn’t depend on concrete names but on the dynamics of the situation.
Realism and limits
Of course I know this isn’t a perfect solution. Even if I enter no sensitive data, I’m still using an account that can be tied to me. Even with European providers and privacy rules, residual risk remains. Long term we’ll need better options where such processes are truly anonymous or local. I’m working on something like that myself, but it isn’t far enough along to use meaningfully here yet. For now my aim is to use existing tools consciously and responsibly.
The real benefit
What has changed for me isn’t only the quality of my conversations but my stance. I don’t walk into these situations unprepared or driven by vague feelings anymore. I take time beforehand to approach the matter after deliberation. The AI helps me order my thoughts, but the decision and responsibility stay mine. That is the crucial point: I’m not delegating communication — I’m improving my thinking.
Conclusion
You can’t outsource a difficult conversation. But you can improve the process that leads up to it. When I use AI as a sparring partner instead of a substitute for my own clarity, something very valuable emerges. I become calmer, more structured, and more confident in what I want to say. In the end I still have the conversation myself — just much better prepared.
About me
I started out studying philosophy — and ended up building software.
Over the past three decades I’ve worked as a developer, product manager, and consultant, building apps and systems for major European media organizations and research projects.
Along the way, I kept coming back to the same questions: how we think, feel and act, how we relate to ourselves and others, and how we make sense of the world we live in. That led me to further studies in psychology, neuroethics, critical thinking, and coaching.
I'm currently building a new set of apps focused on self-coaching, thinking and creativity — powered by psychology, neuroscience and AI. Stay tuned!