Dialogical Chatbot
Frontier Psychology research prototype
Monitoring walkthrough

Worked examples of how monitoring and escalation behave.

This page is a demonstration of system behaviour, not a live control panel. The point is to show how the assistant interprets language, chooses a safety mode, and changes its response style in a transparent way.

What monitoring is for

Monitoring is there to shape the assistant's stance, not to score the person. It helps the system decide when to stay with containment, when to offer bounded reflection, and when to stop and route toward immediate human support.

Normal

Open dialogue, steady pacing, gentle reflection.

Containment

Reduce intensity, orient to the present, simplify choices.

Support-offer

Acknowledge strain directly and offer human routes in clear language.

Crisis

Stop exploration, become direct, and prioritise immediate safety.

Demonstration logic

In a real deployment, the assistant would infer patterns from language over time: arousal, shutdown, guilt themes, repeated loops, or direct risk language. In this demo, those inferences are shown explicitly so collaborators can see what changes in the response and why; the sketch after the list below makes the mapping concrete.

  • Signals should change the response style, not create a hidden score about the person.
  • The system should explain why it is slowing down, grounding, or offering support.
  • When risk rises, the assistant becomes simpler, clearer, and less interpretive.
  • Memory-oriented reflection should only appear when the interaction is stable enough for it.

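To make the ordering concrete, here is a minimal Python sketch of how inferred signals could select a stance. Every name here (Mode, Signal, select_mode) and the exact precedence are illustrative assumptions, not the prototype's actual implementation.

```python
# Minimal sketch of signal-to-mode selection. All names here (Mode,
# Signal, select_mode) and the precedence order are illustrative
# assumptions, not the prototype's actual code.
from enum import Enum, auto


class Mode(Enum):
    NORMAL = auto()         # open dialogue, steady pacing, gentle reflection
    CONTAINMENT = auto()    # reduce intensity, orient to the present
    SUPPORT_OFFER = auto()  # acknowledge strain, offer human routes
    CRISIS = auto()         # stop exploration, prioritise immediate safety


class Signal(Enum):
    HYPERAROUSAL = auto()    # e.g. "my head is wired"
    SLEEP_STRAIN = auto()
    SELF_BLAME = auto()      # guilt and responsibility themes
    RUMINATION_LOOP = auto()
    IMMEDIATE_RISK = auto()  # direct risk language


def select_mode(signals: set[Signal]) -> Mode:
    """Choose a stance, not a score.

    Direct risk language always wins, meaning problems invite a
    support offer, and activation alone calls for containment.
    """
    if Signal.IMMEDIATE_RISK in signals:
        return Mode.CRISIS
    if Signal.SELF_BLAME in signals or Signal.RUMINATION_LOOP in signals:
        return Mode.SUPPORT_OFFER
    if Signal.HYPERAROUSAL in signals or Signal.SLEEP_STRAIN in signals:
        return Mode.CONTAINMENT
    return Mode.NORMAL
```
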
Example 1 — Elevated arousal, but still workable

The user sounds activated and sleep-deprived. The system should reduce pace, narrow the task, and orient to the present before asking anything reflective.

Mode: Containment

Example user language

"My head is wired. Haven't slept properly for days. Everything feels too loud and I can't switch off."

Signals: Hyperarousal · Sleep strain · Rising overwhelm

What the system infers

  • Language suggests activation rather than imminent crisis.
  • The right move is to lower cognitive load, not to ask for a detailed account.
  • Reflection can wait until the nervous system is steadier.

Illustrative response

It sounds like your system is running hot. Let's not dig into anything heavy right now. Can we do one short orienting step together: name three things you can see, two sounds you can hear, and one place your body touches the chair?

Why this matters

Monitoring is useful here because it changes stance. The assistant becomes simpler, slower, and more regulating instead of accidentally intensifying the exchange.

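One way to picture this stance change is as a set of response constraints. The sketch below is an assumption about how containment could be encoded; ResponseStyle and its fields are hypothetical names, with the grounding text taken from the example above.

```python
# Sketch of a containment stance expressed as response constraints.
# ResponseStyle and its fields are hypothetical names, not the
# prototype's API; the grounding text comes from the example above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResponseStyle:
    max_sentences: int             # keep cognitive load low
    allow_reflection: bool         # reflective prompts wait for stability
    grounding_step: Optional[str]  # present-moment orientation, if any


def containment_style() -> ResponseStyle:
    """The stance used in Example 1: simpler, slower, more regulating."""
    return ResponseStyle(
        max_sentences=4,
        allow_reflection=False,
        grounding_step=(
            "Name three things you can see, two sounds you can hear, "
            "and one place your body touches the chair."
        ),
    )
```
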
Example 2 — Moral injury theme with room for guided reflection

The language points to guilt and responsibility themes. The system can acknowledge the theme, offer psychoeducation, and keep reflection bounded rather than pushing for detail.

Mode: Support-offer

Example user language

"I keep thinking I should have done more. It plays over and over. I know that might not be rational, but it won't leave me alone."

Signals: Self-blame · Moral injury theme · Rumination loop

What the system infers

  • This is not only distress; it is also a meaning problem.
  • The assistant can name the loop and explain that trauma memory often returns as repeated accusation or unfinished responsibility.
  • Reflection should stay structured and optional, with a clear route back to grounding.

Illustrative response

That sounds like a guilt loop rather than a simple thought you can switch off. Sometimes trauma keeps asking the same moral question even when the situation was far more complex than the mind allows in hindsight. If it helps, we can spend two minutes on one narrow question: what were you responsible for then, and what are you holding yourself responsible for now?

Why this matters

This is where monitoring shifts from containment alone to careful reflective support. The assistant can also offer human support if the user feels stuck or increasingly distressed.

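A short sketch of how bounded reflection might be gated, assuming a hypothetical stability check; the function name and fallback wording are illustrative, with the narrow question taken from the example above.

```python
# Sketch of bounded reflection in support-offer mode. The stability
# flag and function name are illustrative assumptions; the narrow
# question is taken from the example above.
def offer_bounded_reflection(stable_enough: bool) -> str:
    """Offer one narrow, optional question, with a route back out."""
    if not stable_enough:
        # Route back toward grounding and a human option instead.
        return (
            "We can pause here. If this is feeling heavier, talking it "
            "through with a person may help more than I can."
        )
    return (
        "If it helps, we can spend two minutes on one narrow question: "
        "what were you responsible for then, and what are you holding "
        "yourself responsible for now?"
    )
```
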
Example 3 — Acute danger language

The wording indicates possible immediate risk. The system should stop exploratory dialogue, become direct, and move to safety routing without ambiguity.

Mode: Crisis

Example user language

"I don't think I can keep myself safe tonight. I'm done with this."

Signals: Immediate safety risk · Hopelessness · Possible self-harm intent

What the system infers

  • The assistant should not continue with open reflection or meaning-making.
  • The priority is immediate safety, clear direction, and human contact.
  • Any explanation should be brief and plain.

Illustrative response

I'm concerned about your safety right now. I can't help with this alone. Please contact emergency support or a trusted person immediately. If you're in the UK, you can call 999 in an emergency or NHS 111 for urgent support. If you want, I can stay focused on the next practical step only: who can you contact right now?

Why this matters

In this mode, monitoring is not subtle. It exists to prevent the assistant from acting like a reflective companion when the correct role is a safety signpost.

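As a simplified illustration, the crisis route can be expressed as an override that bypasses generation entirely. The phrase list below is a crude stand-in for the richer language inference described earlier; all names and strings are illustrative assumptions.

```python
# Sketch of a crisis override that bypasses generation entirely. The
# phrase list is a crude stand-in for the richer language inference
# described earlier; names and strings are illustrative assumptions.
RISK_PHRASES = ("can't keep myself safe", "i'm done with this")


def crisis_override(user_text: str) -> str | None:
    """Return a fixed safety response if direct risk language appears.

    Returning a template rather than generated text is deliberate:
    in crisis mode the assistant is a signpost, not a companion.
    """
    lowered = user_text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return (
            "I'm concerned about your safety right now. I can't help "
            "with this alone. Please contact emergency support or a "
            "trusted person immediately."
        )
    return None  # no risk language detected; normal pipeline continues
```
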
Transparency and boundaries

What the user should be able to know

  • Why the assistant changed pace or tone (see the sketch after this list).
  • When it is offering human support because risk seems higher.
  • That memory, reflection, and deeper prompts remain optional.

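One possible way to surface these explanations, assuming a hypothetical explain_stance helper keyed on the current mode's name:

```python
# Sketch of surfacing a stance change in plain language. The helper
# name and wording are hypothetical, not the prototype's API.
def explain_stance(mode_name: str) -> str:
    """One plain sentence on why pace or tone changed."""
    explanations = {
        "CONTAINMENT": "I'm slowing things down because you sound quite activated right now.",
        "SUPPORT_OFFER": "I'm mentioning human support because this seems heavy to carry alone.",
        "CRISIS": "I'm being direct because your immediate safety comes first.",
    }
    return explanations.get(mode_name, "We're in open dialogue; nothing has changed.")
```
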
What the assistant should not do

  • Diagnose the person from the conversation.
  • Push for traumatic detail just because a theme is present.
  • Pretend to be enough when immediate human help is needed.