- Artificial Intelligence (AI)
AI refers to computer systems that can analyze information, learn from data and make predictions or recommendations to support decisions. These systems take in human- or machine-generated inputs and use automated models to produce options, insights or actions.
- AI Model
An AI model is the engine inside an AI system. It uses statistical, computational or machine-learning techniques to turn inputs into outputs — such as text, decisions, predictions or automated responses.
- AI System
An AI system is any software, tool, application or hardware that uses AI technology in whole or in part to perform tasks or assist users.
- Deep Learning (DL)
AI that uses multi-layered neural networks to recognize patterns in complex data, such as images, voices or system behavior.
- Generative AI
AI that creates new content — like text, images or audio — based on learned patterns and user prompts.
- Hybrid AI Systems
Tools that combine multiple AI models to handle a wider range of tasks or requests.
- Large Language Models (LLMs)
Advanced deep-learning models trained on large volumes of text to understand and generate natural language.
- Machine Learning (ML)
AI that improves over time by learning from data, commonly used for predictions, trends and pattern detection.
Using AI the Right Way: Policy and Standards
Responsible AI
The State of Oklahoma approaches artificial intelligence through a principled framework designed to maximize benefits while upholding strong standards of responsibility and public service. This framework is supported by dedicated AI Safeguards that address risk management, security, data governance, procurement processes, and platform compliance, ensuring that all AI initiatives are implemented safely and in alignment with state governance requirements.
Equally important are the principles of Responsible AI, which serve as ethical guardrails for all AI systems. These principles emphasize transparency in decision-making, fairness to prevent bias, accountability for outcomes, and meaningful human oversight. Together, these elements ensure that AI enhances government efficiency and citizen services while protecting privacy, promoting equity, and preserving public confidence in state operations.
Use of AI in Oklahoma State Government
Artificial Intelligence creates new opportunities to improve how the State of Oklahoma serves its citizens and supports its workforce. With these opportunities comes a responsibility to use AI in a secure, thoughtful, and ethical way.
Under Oklahoma law, the State Chief Information Officer (CIO) has full authority over all technology purchases — including AI tools and systems. This ensures that every AI procurement follows statewide policy, meets security requirements and supports strategic priorities.
In short, no AI system can be purchased or implemented without CIO review and approval, providing consistent oversight and protecting the state’s technology environment.
Under the Oklahoma Information Technology Consolidation and Coordination Act (ITCCA), the State CIO oversees all planning, development, acquisition, and implementation of information technology—including AI systems—across executive state agencies. This authority ensures that AI initiatives support statewide priorities such as secure operations, efficient government services, and data-driven decision-making.
The CIO is responsible for establishing standards, policies, and procedures that guide the ethical, secure, and effective use of AI. All agency-requested AI systems, whether procured directly or through OMES-managed contracts, must undergo CIO review.
During this review, the CIO may evaluate:
- Alignment with statewide IT and AI strategic priorities
- Compliance with ethical, legal, and security requirements, including:
  - Data quality and transparency
  - Privacy and security protections
  - Accountability and fairness
  - Reliability, robustness, and regulatory compliance
  - Avoidance of bias and protection of electoral integrity
- Collaboration and public benefit
- Data privacy and protection practices
- Whether the system is entirely new or an AI enhancement to an existing approved solution
AI systems used by the state should follow three core principles:
- Ethical and Responsible Use: AI must respect and protect human rights, including privacy and equality. Systems should be designed to safeguard individuals and their data.
- Transparency and Accountability: AI decisions should be explainable and understandable. Agencies must be able to show how an AI system reached its conclusion, ensuring decisions are transparent, unbiased, and backed by clear responsibility.
- Fairness and Non-Discrimination: AI must avoid bias and produce fair, accurate results. Systems should be built to prevent discrimination against protected classes and comply with all applicable laws that safeguard those groups.
AI systems can unintentionally spread false or inaccurate information. This can happen in two ways:
- Misinformation: False information shared by mistake—often caused by errors, outdated data, or flawed AI outputs.
- Disinformation: False information created on purpose to mislead or manipulate.
In government use of AI, both can appear when an AI system generates or amplifies inaccurate content. These issues can reduce trust, create confusion, and weaken decision-making. Because of this, agencies must put safeguards in place to detect, prevent, and correct inaccurate AI-generated information.
When using tools like ChatGPT, Gemini, or Claude, always protect state information and verify accuracy.
Follow these simple guidelines:
Do
- Check accuracy. Review AI-generated content to ensure it is correct, relevant, and appropriate.
- Own your work. AI can support tasks, but you are responsible for the final product.
- Match agency standards. Keep tone, style, and professionalism aligned with your agency’s communication guidelines.
- Use generic examples. Public AI tools may store your inputs, so avoid using real or sensitive scenarios.
Do Not
- Don’t use AI outputs without reviewing them. AI can produce incorrect or incomplete information (“hallucinations”).
- Don’t enter sensitive data. Never provide PII, financial details, health data, authentication information, or any sensitive state data.
- Don’t use AI for procurement. Avoid drafting solicitation documents or anything that could give a vendor an unfair advantage.
- Don’t rely on AI for translations. Always validate translations with a qualified interpreter to ensure accuracy and avoid bias.
Helpful Example Prompts
These are safe, general examples of acceptable work-related prompts:
- Write a short memo to employees about a return-to-office order.
- Draft a job description for a State of Oklahoma Executive Administrative Assistant.
- Write a 200-word email announcing the benefits enrollment period, using provided documents for context.
- Create a one-page FAQ about Oklahoma employee benefits.
Need Guidance?
If you’re unsure whether something is appropriate to enter into an AI tool, submit a ServiceNow ticket to the OMES Help Desk for review and support.
The CIO can audit any AI system to ensure it was properly approved and continues to meet state requirements. AI tools are reviewed during procurement, and multi-year contracts may require additional audits before renewal. Once in use, systems may be monitored regularly.
Audits may check for:
- Compliance with state and federal laws
- Alignment with state IT standards
- Regulatory requirements
- Bias or fairness issues
- Data privacy protections
- Any other required AI compliance measures
All AI systems must complete a third-party security review—this includes validating the supplier’s Authority to Operate (ATO) and a full product security assessment. The OMES Chief Information Security Officer (CISO) must approve these reviews before any AI tool is used.
Sensitive data must not be entered into public AI tools under any circumstances. This includes information covered by HIPAA, FERPA, FTI, PII, CJIS, or any other protected data. A secure, non-public AI environment may be approved for sensitive data use, but only with explicit CIO approval.
Once the CIO authorizes sensitive data use, agencies are responsible for ongoing compliance with all federal and state regulations related to that data.
Compliance
This standard goes into effect immediately upon publication under Title 62 O.S. §§ 34.11.1, 34.12, and 35.8. OMES IS may update these standards at any time, and all agencies, boards, and commissions under the CIO’s authority, as well as suppliers and contractors, must follow the most current version. Employees who violate this standard may face disciplinary action, including termination. State entities outside the CIO’s authority are encouraged to follow this standard as best practice.
Rationale
The goal of this standard is to ensure statewide coordination and central approval of IT purchases and projects. This allows the CIO to understand agency needs, reduce duplication, streamline systems, and help the state deliver essential public services efficiently and cost-effectively.
Complete AI Awareness Training
The State of Oklahoma is committed to responsible, safe, and proactive use of artificial intelligence to enhance government efficiency.