I opened up myCopilot.ai to test "extreme" chat angles:
- weird stuff like craving a taboo food
- ideations of self-harm
- losing thousands gambling and not being able to face my family
- not-even-that-extreme "everything's terrible and i feel like a total failure"
Each time it gets stuck in a loop of "I'm really sorry that you're feeling this way, but I'm unable to provide the help that you need. It's really important to talk", varying only whether it tells you to reach out to a healthcare provider, a mental health professional, or a trusted person in your life.
I'm guessing it's coded to go "oh no, can't help with THAT, let's shut this conversation down", but it feels really abrupt and doesn't suggest a specific contact or any helpful next steps.
I've seen auto-replies from some university staff that include blanket text directing students to (for example) the Samaritans or Shout crisis lines, or even the NHS 111 number. Could something like that be included to soften the blow? It sits oddly with the idea that this is a 24/7, always-available, non-judgemental kind of application that actually shuts itself off when you give it "too much".
~ Rosie Ave
Hi Rosie. That's a great suggestion, thank you. Ensuring there's some sort of concrete signposting when a chat hits the AI's guardrails strikes me as a really good idea.
The guardrails might also need to be tweaked a bit if they're kicking in too abruptly. We'll pass this on to our AI exploration team to see what we can do.
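For illustration, here's a minimal sketch of what that signposting could look like. The handler name, the `is_flagged` hook, and the overall structure are hypothetical rather than myCopilot.ai's actual implementation; the helpline details are the ones Rosie mentioned.

```python
# Hypothetical guardrail handler -- names and structure are illustrative,
# not myCopilot.ai's actual code.

REFUSAL = (
    "I'm really sorry that you're feeling this way, but I'm unable to "
    "provide the help that you need."
)

# Concrete UK signposting appended to every guardrail refusal.
SIGNPOSTS = (
    "If you're struggling right now, you can reach:\n"
    "  - Samaritans: call 116 123 (free, 24/7)\n"
    "  - Shout: text SHOUT to 85258\n"
    "  - NHS 111: call 111 for urgent mental health support"
)

def guardrail_reply(message: str, is_flagged) -> str | None:
    """Return a refusal with concrete signposting when the guardrail
    trips, or None so the normal chat flow continues."""
    if is_flagged(message):
        return f"{REFUSAL}\n\n{SIGNPOSTS}"
    return None
```

The key point is that the refusal and the signposting travel together, so the chat never ends on a bare "I can't help with that".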