AI Usage Policy

How PingAura uses AI models, what we do and do not allow, and how we handle your content.

Last updated: 17 April 2026

This policy describes how PingAura uses AI models across the Services, how your content is handled, and what we expect from you in return. It sits alongside our Terms of Service and Privacy Policy.

1. Where we use AI

AI models power features across PingAura, including:

  • Visibility runs across AI answer engines.
  • Site and page AI-readiness audits and scoring.
  • Article generation, briefs, and content editing.
  • Prompt research and topic clustering.
  • The AI Coworker chat, including tool calls that read from your connected data sources.
  • Summaries, explanations, and recommendations across the dashboard.

2. Model providers we rely on

We route each request to the model best suited for the task, drawing on providers including OpenAI, Anthropic, Google (Gemini, AI Overview, AI Mode), Perplexity, DeepSeek, and xAI. We use each provider's business or enterprise tier, which contractually prohibits the provider from using your content to train general-purpose models. Providers may retain prompts and outputs for a short period for safety and abuse monitoring, as set out in their own policies.

3. How we handle your content

  • You own your Inputs (what you submit) and your Outputs (what the Services generate for you).
  • We do not use your Inputs or Outputs to train foundation models, and, under the tiers we use, neither do our providers.
  • We may use aggregated, de-identified metrics — for example, how often a feature is used or how a model class performs — to improve our own product.
  • Prompts, context, and results are logged to operate and debug the Services. Access to those logs is restricted to a small number of engineers under our access controls, and they are retained for the periods listed in our Privacy Policy.

4. Accuracy, hallucinations, and human review

AI models can generate confident answers that are incorrect, out of date, or biased. Visibility scores, audit findings, generated articles, and Coworker responses are decision-support outputs, not statements of fact. Treat Outputs as a starting point and verify anything that matters before acting on it.

Do not rely solely on Outputs for decisions in regulated or high-stakes contexts, including legal, financial, medical, employment, or safety decisions. A qualified human should review and approve those decisions.

5. What AI does not do

PingAura's AI features will not:
  • Publish, send, or share content outside PingAura without your explicit action.
  • Write to production systems, push code, change billing, or take destructive actions on your connected integrations without your confirmation.
  • Access data you have not connected. The AI Coworker can only call the tools and read the data sources enabled for your account.

6. What you must not do

You may not use PingAura's AI features to:

  • Generate content that sexually exploits or endangers minors, or that depicts non-consensual sexual activity.
  • Plan, promote, or carry out violence, terrorism, or the development of weapons, including chemical, biological, radiological, nuclear (CBRN), and cyber weapons.
  • Produce malware, phishing, or other content designed to compromise computer or network systems.
  • Run disinformation campaigns, impersonate real people, fabricate evidence, or manipulate elections.
  • Generate defamatory content, targeted harassment, or content that incites discrimination on protected grounds.
  • Produce content that infringes copyright, trade marks, or other third-party rights.
  • Attempt to bypass a model provider's safety systems, content filters, or usage policies.

The acceptable-use policies of our underlying model providers also apply to your use of PingAura. We may investigate suspected misuse and take any action we reasonably believe is appropriate, including limiting, suspending, or terminating access.

7. Transparency

Outputs in the dashboard and the AI Coworker are clearly labelled as AI-generated. Where a feature blends deterministic rules with model output, we indicate which parts are AI-assisted. We maintain traces of model usage (including prompts, tools called, and token counts) to debug issues and to investigate reports of misuse.

8. Reporting issues

If you believe a PingAura Output is harmful, infringing, or significantly inaccurate, please email [email protected] with a link to the Output and a short description. We investigate reports promptly and take corrective action where needed.

9. Changes to this policy

We may update this policy as our product and the AI landscape evolve. Material changes will be reflected in the date at the top of this page and, for existing customers, announced by email or in-product notice.