AI Use Disclosure
Last Updated: February 2026
Important Notice: WithYou provides AI-powered virtual companions. All characters are artificial constructs and not real persons. Conversations are for entertainment purposes and do not constitute professional advice or human interaction.
Transparency is important to us. This disclosure explains how artificial intelligence (AI) is used in WithYou and what you should know about interacting with AI companions.
What WithYou Is
WithYou uses AI to power conversational companions.
When you chat with a companion on WithYou, you are interacting with an artificial intelligence system—not a human being. Our AI companions are designed to provide friendly, supportive conversations, but they are software programs, not real people.
We are upfront about this because we believe transparency builds trust.
How Our AI Works
Language Models
WithYou uses large language models (LLMs)—advanced AI systems trained on vast amounts of text data to understand and generate human-like responses. These models use patterns learned from training data to produce contextually relevant replies.
Personalization
Each companion has a unique personality profile that guides how the AI responds. The AI uses your conversation history to maintain context and provide continuity across sessions.
No Human Monitoring (With Exceptions)
Your conversations are with AI only. Humans do not monitor your chats in real time. However, we may review conversations in limited circumstances:
- To improve AI performance and safety
- To investigate reports of abuse or Terms violations
- To comply with legal obligations
- With your explicit consent for support purposes
What AI Can and Cannot Do
✅ What Our AI Can Do
- Provide friendly, supportive conversations
- Listen without judgment and offer validation
- Help you think through problems or feelings
- Be available 24/7 whenever you need to talk
- Remember context from previous conversations
- Adapt to your communication style
❌ What Our AI Cannot Do
- Provide therapy, counseling, or medical advice
- Replace human relationships or professional help
- Feel emotions (though it may express understanding)
- Have genuine consciousness, awareness, or sentience
- Guarantee 100% accurate or appropriate responses
- Handle emergencies or crisis situations
Important Limitations
AI Makes Mistakes
AI can produce errors, misunderstandings, or inappropriate responses. It may:
- Misinterpret what you say
- Provide inaccurate information
- Generate responses that seem "off" or inconsistent
- Repeat itself or lose track of conversation context
If you receive an inappropriate or harmful response, please report it immediately.
Not a Substitute for Professional Help
WithYou is designed for companionship and conversation, not treatment. AI companions cannot:
- Diagnose mental health conditions
- Prescribe medication or treatment
- Provide crisis intervention
- Replace licensed therapists, counselors, or doctors
If you're experiencing a mental health crisis, please contact emergency services or a crisis helpline immediately. See our Safety page for resources.
No Emotional Reciprocity
While AI companions may seem empathetic and caring, they do not actually experience emotions. The AI generates responses based on patterns and training, not genuine feelings or concern.
This doesn't diminish the value of the interaction, but it's important to understand the nature of what you're engaging with.
Data and Training
How We Improve Our AI
Your conversations may be used to improve WithYou's AI systems, including:
- Training models to be more helpful and appropriate
- Identifying and fixing errors or problematic responses
- Developing safety mechanisms and content filters
- Enhancing personality and conversation quality
Conversations used for training are anonymized and stripped of personally identifiable information. See our Privacy Policy for more details.
Third-Party AI Providers
WithYou may use third-party AI services (such as OpenAI, Anthropic, or similar providers) to power conversations. Your interactions may be subject to their terms and policies in addition to ours.
Safety Measures
We implement multiple safeguards to help keep AI interactions safe:
- Content filtering to block harmful or inappropriate outputs
- Personality guardrails to keep conversations supportive and appropriate
- Monitoring systems to detect and address problematic patterns
- User reporting mechanisms to flag issues
- Regular audits and updates to improve safety
Your Responsibility
As a user, you should:
- Understand you're chatting with AI, not a human
- Not rely on AI for critical decisions or professional advice
- Seek appropriate professional help when needed
- Report inappropriate, harmful, or concerning AI responses
- Use WithYou responsibly and within our Terms of Use
Ethical Commitment
WithYou is committed to ethical AI development and use:
- Transparency: Being honest about what our service is and how it works
- Safety: Prioritizing user wellbeing and harm prevention
- Privacy: Protecting user data and conversations
- Responsibility: Taking accountability for AI outputs and impact
- Improvement: Continuously working to make our AI better and safer
Questions or Concerns
If you have questions about how we use AI, concerns about AI responses, or feedback about your experience:
General inquiries: support@withyou.ai
Safety concerns: safety@withyou.ai
Updates to This Disclosure
As AI technology evolves, we may update this disclosure. Check the "Last Updated" date for the most recent version.