Leslie’s (2019) guide includes a primer on AI ethics that provides readers with the conceptual resources and practical tools to steward the responsible design and implementation of AI projects.
AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.
The field of AI ethics has largely emerged as a response to the range of individual and societal harms that the misuse, abuse, poor design, or negative unintended consequences of AI systems may cause. For more information, see Potential Issues of Using Generative AI.
These values, principles, and techniques are intended both to motivate morally acceptable practices and to prescribe the basic duties and obligations necessary to produce ethical, fair, and safe AI applications.
Source: Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
The Artificial Intelligence (AI) Guide provides an introduction to this evolving field for faculty, fellows, residents, postdocs, students, and staff. Because this technology is advancing rapidly, some information in the Guide may become outdated.
For information on Artificial Intelligence (AI) Data Security and Privacy, see Artificial Intelligence (AI) Data Security and Privacy - Information Resources (utsouthwestern.net) (VPN or on-campus access only). NOTE: this Guide supplements but does not supersede information provided by UT Southwestern or University of Texas policies and guidelines.