Artificial Intelligence (AI) Data Security Guidelines

CU Boulder Guidelines

Implementation of AI is subject to all applicable CU Boulder policies, standards, and guidelines. If you have any questions about how to assess the implications of AI for your use case, please email security@colorado.edu.

As the uses and ramifications of AI as a technology tool continue to evolve and become better understood, so too will its effects on data security in higher education. This page is intended to guide our community as you consider using AI on the CU Boulder campus. Below are some guidelines for limiting data security risk while using generative AI tools:

  • Use AI tools available and recommended for use on the CU Boulder campus found on the Office of Vice Chancellor for IT and Chief Information Officer website. If you have any questions about access to these tools, please email oithelp@colorado.edu.
  • AI tool use is only approved for data classified as Public (see CU's Data Classification Standard for more information). The standard CU Boulder IT security contractual provisions may not be in place by default. If you intend to share Confidential or Highly Confidential information with an AI-based service (or any service for that matter), the service needs to be reviewed for compliance with CU’s data security requirements via ICT Review. Information shared with Generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties.
  • Review AI tool output to verify its integrity: output should be accurate, complete, and free of bias.
  • AI-generated computer code should NOT be used for institutional IT systems and services unless a human has reviewed it for accuracy, security (ensuring no vulnerabilities are introduced), and efficiency.
  • Faculty, staff, and students using these tools must read and understand the AI tool vendor’s policies and become familiar with the limitations of use and how the vendor plans to store, process, or handle CU’s data. For example, if using ChatGPT, you will need to read and comply with OpenAI’s Policies because there are some limitations regarding how the product may be used.
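To illustrate why human review of AI-generated code matters, the hypothetical sketch below shows a flaw that code generators commonly produce: building a SQL query by interpolating user input into the query string, which permits SQL injection. The function names and the in-memory database are illustrative only, not part of any CU Boulder system.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern sometimes produced by AI code generators: user input is
    # interpolated directly into the SQL string, so crafted input can
    # change the query's logic (SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Reviewed version: a parameterized query keeps the input as data,
    # never as executable SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration with a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
unsafe_rows = find_user_unsafe(conn, malicious)  # matches every row
safe_rows = find_user_safe(conn, malicious)      # matches nothing
```

A reviewer checking the first function would flag the string interpolation and require the parameterized form; this is exactly the kind of vulnerability a human security review of AI-generated code is meant to catch.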

Additional Resources

Any new technology carries risk. If you decide to explore or adopt AI tools, you will need to consider the potential positive and negative impacts on the campus, your department, and your personal life. A useful framework for determining whether an AI tool is reliable and trustworthy is the NIST AI Risk Management Framework. Below is a high-level overview of the framework's trustworthiness characteristics.

  • Valid & Reliable: Trustworthy AI produces accurate results within expected timeframes.
  • Safe: Trustworthy AI produces results that conform to safety expectations for the environment in which the AI is used (e.g., healthcare, transportation, etc.).
  • Fair & Bias is Managed: Bias can manifest in many ways; standards and expectations for bias minimization should be defined prior to using AI.
  • Secure & Resilient: Security is judged according to the standard triad of confidentiality, integrity, and availability. Resilience is the degree to which the AI can withstand and recover from attack.
  • Transparent & Accountable: Transparency refers to the ability to understand information about the AI system itself, as well as to recognize when one is working with AI-generated (rather than human-generated) information. Accountability is the shared responsibility of the creators/vendors of the AI and of those who have chosen to implement AI for a particular purpose.
  • Explainable & Interpretable: These terms relate to the ability to explain how an output was generated and to understand the meaning of that output.
  • Privacy Enhanced: This refers to privacy from both a legal and an ethical standpoint. It may overlap with some of the previously listed attributes.

*Please Note: Appendix B of the NIST framework discusses risks unique to AI. Review these risks to understand how AI risk differs from other, more familiar technological risks.

To assist you in considering and implementing these guidelines, reading material on AI concerns can be found on the following webpages: