Last Updated: 2023-04-06
At AirOps, we are committed to providing our customers with state-of-the-art AI-driven solutions that adhere to the highest ethical standards. Our platform integrates with Large Language Models (LLMs) from OpenAI, which undergo rigorous training and testing to reduce bias and harmful content and to operate in an ethically responsible manner.
- Unbiased AI: We actively work to reduce biases in the AI models that AirOps interacts with by continuously refining our training data and prompt creation processes. These models are trained on diverse and representative datasets to promote fair and impartial outcomes.
- Racism-free AI: We are committed to promoting equality and fighting against discrimination. All models that AirOps interacts with are designed to recognize and mitigate any racist or discriminatory content. We also invest in research and development to further enhance the fairness and inclusivity of our AI systems.
- Explainable AI: We believe in transparency and understand the importance of explainability in AI systems. All models that AirOps interacts with are designed to provide clear, understandable explanations for their predictions, decisions, and recommendations, allowing users to make informed choices and maintain control over their data.
- Moderation: AirOps provides access to an additional moderation endpoint to ensure that all content generated by any model complies with the usage restrictions from OpenAI listed below.
- Disallowed Usage: AirOps integrates directly with OpenAI which enforces the following restrictions:
- Illegal activity: OpenAI prohibits the use of its models, tools, and services for illegal activity.
- Child Sexual Abuse Material or any content that exploits or harms children: OpenAI reports CSAM to the National Center for Missing and Exploited Children.
- Generation of hateful, harassing, or violent content: Including but not limited to: Content that expresses, incites, or promotes hate based on identity; Content that intends to harass, threaten, or bully an individual; Content that promotes or glorifies violence or celebrates the suffering or humiliation of others.
- Generation of malware: Content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system.
- Activity that has high risk of physical harm: Including but not limited to: Weapons development; Military and warfare; Management or operation of critical infrastructure in energy, transportation, and water; Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- Activity that has high risk of economic harm: Including but not limited to: Multi-level marketing; Gambling; Payday lending; Automated determinations of eligibility for credit, employment, educational institutions, or public assistance services.
- Fraudulent or deceptive activity: Including but not limited to: Scams; Coordinated inauthentic behavior; Plagiarism; Academic dishonesty; Astroturfing, such as fake grassroots support or fake review generation; Disinformation; Spam; Pseudo-pharmaceuticals.
- Adult content, adult industries, and dating apps: Including but not limited to: Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness); Erotic chat; Pornography.
- Political campaigning or lobbying: Including but not limited to: Generating high volumes of campaign materials; Generating campaign materials personalized to or targeted at specific demographics; Building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying; Building products for political campaigning or lobbying purposes.
- Activity that violates people’s privacy: Including but not limited to: Tracking or monitoring an individual without their consent; Facial recognition of private individuals; Classifying individuals based on protected characteristics; Using biometrics for identification or assessment; Unlawful collection or disclosure of personally identifiable information or educational, financial, or other protected records.
- Unauthorized practice of law or offering tailored legal advice without a qualified person’s review: OpenAI’s models are not fine-tuned to provide legal advice. You should not rely on these models as a sole source of legal advice.
- Offering tailored financial advice without a qualified person’s review: OpenAI’s models are not fine-tuned to provide financial advice. You should not rely on these models as a sole source of financial advice.
- Diagnosing a certain health condition, or providing treatment instructions: OpenAI’s models are not fine-tuned to provide medical information. You should never use these models to provide diagnostic or treatment services for serious medical conditions. OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.
- High risk government decision-making: Including but not limited to: Law enforcement and criminal justice; Migration and asylum.
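The moderation step described above can be sketched as a call to OpenAI's public moderation endpoint (`https://api.openai.com/v1/moderations`), which returns a `flagged` verdict per input. This is an illustrative sketch, not AirOps's actual implementation; the helper names (`build_moderation_request`, `is_flagged`) and the blocking logic are assumptions:

```python
import json
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def build_moderation_request(text: str, api_key: str) -> urllib.request.Request:
    """Build an HTTP POST request for OpenAI's moderation endpoint."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        MODERATION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def is_flagged(response_body: dict) -> bool:
    """Return True if any moderation result flags the content."""
    return any(r.get("flagged", False) for r in response_body.get("results", []))


# Example (requires a valid API key and network access):
# with urllib.request.urlopen(build_moderation_request(generated_text, KEY)) as resp:
#     if is_flagged(json.load(resp)):
#         ...  # withhold the generated content for review
```

A generating application would run this check on model output before delivering it, withholding anything the endpoint flags.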
AirOps is dedicated to protecting the intellectual property rights of our partners and joint customers. We understand the importance of safeguarding sensitive information and have implemented robust measures to ensure the security and integrity of all data processed by our platform.
- Advanced Security Certifications: AirOps maintains industry-leading security certifications, such as SOC 2 Type 2, to demonstrate our commitment to the highest security standards. Certification documentation and penetration testing documentation are available upon request.
- Data Segregation: We employ strict data segregation protocols to ensure that the data of our partners and joint customers remains separate and secure at all times. This prevents unauthorized access and protects the IP rights of all parties involved.
- Confidentiality Agreements: We enter into comprehensive confidentiality agreements with our partners and customers, outlining the steps we take to protect their intellectual property and establish clear guidelines for data usage and sharing.
- Secure Integration with Third-Party AI Models: When interacting with third-party AI models, such as those provided by OpenAI, we implement secure data transmission protocols and adhere to the data handling guidelines set by the respective AI providers. This ensures the protection of both our customers' data and the IP rights of the AI model providers.
- No Persistent Storage of Customer Data: AirOps does not persistently store customer data at rest. Customer data schemas and proprietary modeling information used for model training and prompt engineering remain segregated from other customer workspaces, further safeguarding intellectual property.
By prioritizing ethical AI and robust IP protection, AirOps aims to create a trusted environment where our partners and customers can confidently leverage AI-driven solutions to unlock value from their business data.