AI Organizations Release Best Practices for Deploying Large Language Models

Best Practices for Large Language Models 2022

“Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence,” wrote Blaise Agüera y Arcas, Vice President and Fellow at Google Research, in a recent article.

Referring to LLMs as “foundation models,” The Economist outlined in a recent story how LLMs are “turbo-charging AI progress” and exhibit “abilities their creators did not foresee.”

According to Shobita Parthasarathy, Professor of Public Policy and Director of the Science, Technology, and Public Policy Program at the University of Michigan, LLMs have the ability to recognize, summarize, translate, predict, and generate human languages on the basis of very large text-based datasets, and are likely to “provide the most convincing computer-generated imitation of human language yet.”

More specifically, LLMs can — with limited to no supervision — write convincing essays, create charts and websites from text descriptions, hold up their end of conversation as customer-service chatbots or video-game characters, and generate computer code, among other tasks.

These models, however, also have shortcomings; they can generate “racist, sexist, and bigoted text, as well as superficially plausible content that, upon further inspection, is factually inaccurate, undesirable, or unpredictable,” pointed out Alex Tamkin, PhD student in Computer Science at Stanford, and Deep Ganguli, member of Technical Staff at Anthropic.

In their 2021 article, How Large Language Models Will Transform Science, Society, and AI, the aforementioned authors emphasized the need to develop norms and principles for deploying LLMs. “Those currently on the cutting edge […] have a unique ability and responsibility to set norms and guidelines that others may follow,” they said.

Best Practices

A year later and Cohere, OpenAI, and AI21 Labs publish a preliminary set of best practices for the responsible development and deployment of LLMs from the perspective of model developers.

According to these guidelines, LLM providers should do the following:

  • Publish usage guidelines and terms of use of LLMs in a way that prohibits material harm to individuals, communities, and society such as through spam, fraud, or astroturfing. Usage guidelines should also specify domains where LLM use requires extra scrutiny and prohibit inappropriate high-risk use-cases, such as classifying people based on protected characteristics.
  • Build systems and infrastructure to enforce usage guidelines — this may include rate limits, content filtering, application approval prior to production access, monitoring for anomalous activity, and other mitigations.
  • Proactively mitigate harmful model behavior through (i) comprehensive model evaluation, (ii) minimizing potential sources of bias in training corpora, and (iii) techniques to minimize unsafe behavior (e.g., learning from human feedback).
  • Document known weaknesses and vulnerabilities, such as bias or ability to produce insecure code, because, in some cases, no degree of preventive action can completely eliminate the potential for unintended harm. Documentation should also include model and use-case-specific safety best practices.
  • Build teams with diverse backgrounds and solicit broad input. Diverse perspectives are needed to characterize and address how language models will operate in the diversity of the real world where, if unchecked, they may reinforce biases or fail to work for some groups.
  • Publicly disclose lessons learned regarding LLM safety and misuse to enable widespread adoption and help with cross-industry iteration on best practices.
  • Treat all labor in the language model supply chain with respect; for example, LLM providers should have high standards for the working conditions of those reviewing model outputs in-house and hold vendors to well-specified standards.
SlatorCon Zurich 2023 | CHF 960

SlatorCon Zurich 2023 | CHF 960

An exclusive in-person event limited to 120 language industry leaders examining the impact of rapid technology advancements.

Buy Tickets

Register Now

“The future of human–machine interaction is full of possibility and promise, but any powerful technology needs careful deployment,” said the authors. These best practices will help LLMs providers “mitigate the risks of this technology in order to achieve its full promise to augment human capabilities,” they added.

Advancing Public Discussion

The compiling of best practices represents a first step toward building a community that can address the global challenges posed by AI progress. “We’re sharing these principles in hopes that other LLM providers may learn from and adopt them, and to advance public discussion on LLM development and deployment,” explained the authors.

Having recognized the importance of engaging more voices from academia, industry, and civil society to develop more detailed principles and community norms, the authors encourage other LLM providers or anyone working on mitigating LLM risk to get in touch with them.

Finally, given that the commercial uses of LLMs and accompanying safety considerations are new and evolving, the authors promised to continually update the compilation in collaboration with the broader AI community.

Editor’s Note: The featured image was created using the prompt “best practices in deploying large language models” via an AI model on HuggingFace called DALL·E mini. DALL·E mini is an open-source implementation of Open AI’s 12-billion parameter version of GPT-3 trained to generate images from text descriptions.