Persona's Commitment to Responsible Use of Artificial Intelligence

Artificial intelligence is a powerful tool. Like any powerful tool, its value depends entirely on how it is used, who controls it, and what principles guide its use. As a company building with AI at our core, we believe responsibility is not optional. It is foundational.

This page explains how we think about AI, how we use it across our company, and the standards we hold ourselves to when building products and making decisions. Our goal is simple. We want customers to trust us, not because we say the right words, but because our approach is thoughtful, measured, and aligned with long-term human value.

Our Philosophy on AI

We do not view AI as a replacement for human judgment, creativity, or accountability. We view it as an amplifier.

AI can help us move faster, reduce repetitive work, surface insights, and communicate more clearly. It cannot define our values, take responsibility for outcomes, or replace ethical decision-making. Every meaningful decision made at our company remains a human decision.

Our guiding principles are:

- Humans stay accountable. Every meaningful decision remains a human decision.
- AI is an amplifier, not a replacement for judgment, creativity, or ethics.
- We are transparent about when and how AI is involved.
- We handle data with care and respect for privacy.
- We choose restraint when a use of AI does not align with our values.

These principles apply not only to our product, but to how we operate as a business.

How We Use AI Internally

AI is used across our company to support daily operations. In each case, it is a tool, not an authority.

Marketing and Communications

We use AI to assist with drafting marketing emails, website copy, and educational content. This allows us to communicate more clearly and consistently, especially at scale.

What AI does:

- Drafts first versions of marketing emails, website copy, and educational content
- Suggests clearer, more consistent wording, especially at scale

What AI does not do:

- Publish or send content without human review
- Impersonate individuals or fabricate testimonials
- Generate false urgency

Every piece of customer-facing content is reviewed by a human before it is published or sent. We care more about being accurate and respectful than sounding impressive.

Website Content and Documentation

AI helps us maintain and improve our documentation and website content by identifying gaps, simplifying language, and improving readability.

We intentionally avoid:

- Publishing AI-generated material without human review
- Manipulative or misleading language
- Overstating what our product or its AI can do

Our documentation and public materials are meant to inform, not manipulate.

Internal Analysis and Business Decisions

We use AI to assist with internal analysis, including summarizing data, identifying patterns, and exploring scenarios. This includes things like usage trends, customer feedback aggregation, and operational planning support.

AI may surface insights. It does not make decisions.

All strategic decisions, prioritization, and trade-offs are made by humans who are accountable for the outcomes. We are careful not to outsource judgment, ethics, or responsibility to a model.

How We Build Our Product Responsibly

Because our product uses AI directly, our responsibility is even greater.

We design with the assumption that AI will sometimes be wrong. Because of that, we focus heavily on:

- Keeping humans in control of decisions that affect customers
- Making it clear when and how AI is involved
- Handling mistakes openly and taking responsibility when they happen

We do not aim for "fully autonomous" behavior for the sake of it. We aim for reliability, predictability, and alignment with user intent.

Data Handling and Privacy

AI systems are only as trustworthy as the data practices behind them.

We follow these principles:

- We protect customer privacy in every AI-assisted workflow
- We are transparent about how data is used
- We hold third-party AI services to the same standards we hold ourselves

When we use third-party AI services, we evaluate their data handling policies carefully and choose providers that align with our standards.

Transparency and Trust

We believe customers deserve to understand when and how AI is involved in their experience, especially when it meaningfully affects outcomes.

We do not hide AI usage behind vague language. At the same time, we avoid overstating its capabilities. Trust is built through consistency, not buzzwords.

If something goes wrong, we take responsibility. We do not blame "the AI."

What We Will Not Do

There are lines we intentionally do not cross.

We will not:

- Use AI to impersonate people or fabricate testimonials
- Manufacture false urgency or manipulate customers
- Hide AI usage behind vague language or overstate what it can do
- Outsource judgment, ethics, or accountability to a model
- Blame "the AI" when something goes wrong

A Long-Term Commitment

Responsible AI is not a static checklist. It is an ongoing practice.

As models evolve, regulations change, and expectations grow, we expect our approach to mature as well. We actively review how we use AI, where it adds real value, and where restraint is the better choice.

Our success depends on long-term trust. That trust is earned through careful decisions, transparency, and a willingness to say no when something does not align with our values.

We are committed to building technology that respects people, supports human judgment, and earns confidence over time.