AI Usage Policy

How PingAura uses generative AI with human oversight, transparency, and accountability.

Last updated: January 2026

1. Overview

PingAura uses generative AI to speed up ideation, outlining, drafting, and structured data preparation. Humans remain accountable for accuracy, context, compliance, and approvals. We follow a "transparency-first" approach so customers know how AI is used in our workflows.

2. Transparency & Disclosure

  • We disclose when AI materially shapes research, outlines, or draft copy. Human review is always required before publication.
  • We describe how AI was used and which systems were involved when that information is relevant to the content or experience.
  • Significant updates are logged with dates; we keep revision notes for material changes.

3. What AI Does (and Does Not) Do

AI assists with: research acceleration, outlining, draft refinement, schema generation, QA prompts, and idea exploration.

AI does not: publish autonomously, approve interviews, set prices, modify production code, or bypass human editorial and compliance review.

4. Human Review & Governance

  • Every AI-assisted output is reviewed and approved by a human.
  • Sensitive topics (e.g., legal, financial, medical) require review by a subject-matter expert and, when needed, legal sign-off.
  • We log who reviewed content, what changes were made, and the approval timestamp (an illustrative sketch of such a record follows this list).
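
To make the review trail concrete, here is a minimal sketch of what a single review-log record could look like, written in TypeScript. The structure and field names are hypothetical illustrations only; they do not describe PingAura's actual systems or schema.

    // Hypothetical shape of a review-log record; field names are illustrative only.
    interface ReviewLogEntry {
      contentId: string;      // identifier of the AI-assisted draft
      reviewer: string;       // human who reviewed and approved the output
      changesSummary: string; // what the reviewer changed before approval
      approvedAt: string;     // approval timestamp (ISO 8601)
    }

    // Example record with fabricated values, for illustration:
    const entry: ReviewLogEntry = {
      contentId: "draft-example-001",
      reviewer: "Editorial Lead",
      changesSummary: "Verified statistics against the primary source and removed an unsupported claim.",
      approvedAt: "2026-01-15T10:30:00Z",
    };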

5. Evidence & Source Handling

  • Assertions require primary sources, datasets, or reproducible methodology. AI-generated text is treated as a lead, not a fact.
  • Data, citations, and claims are verified by humans before use.

6. Privacy & Data Protection

  • We avoid sending personal or sensitive data to AI systems unless it is necessary and permitted; when we do, we apply data minimization and security safeguards.
  • We use providers with appropriate data protections and adhere to our Privacy Policy.

7. Quality, Bias, and Safety Controls

  • We check AI outputs for factual accuracy, bias, safety, and suitability for the intended audience.
  • We reject outputs that include harmful, discriminatory, or non-compliant content.

8. Versioning

Material changes to this policy are dated and summarized. We keep a record of revisions for transparency.

9. Contact

If you have questions or feedback about our AI practices, please contact us through our support channels. We review inquiries promptly and update this policy as our practices evolve.