Description
LLMs can learn new tasks on the fly from information provided in a prompt (e.g., examples or instructions), without any update to their weights. However, how this in-context learning actually works is not well understood. This makes it hard to predict how an LLM might behave in new situations, or whether it could bypass safety measures.
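To make the phenomenon concrete, here is a minimal sketch of in-context learning via few-shot prompting. The task (country to capital), the prompt format, and the `build_few_shot_prompt` helper are illustrative assumptions, not part of any specific API; the point is only that the demonstrations in the prompt, not a weight update, define the task.

```python
# Minimal sketch of in-context learning via few-shot prompting.
# The demonstrations teach the task (country -> capital) entirely
# through the prompt; the model's weights are never updated.

def build_few_shot_prompt(examples, query):
    """Assemble demonstrations plus a new query into a single prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("France", "Paris"),
    ("Japan", "Tokyo"),
    ("Kenya", "Nairobi"),
]
prompt = build_few_shot_prompt(examples, "Brazil")
print(prompt)
# A model completing this prompt typically answers "Brasilia",
# having inferred the task from the three demonstrations alone.
```

What is poorly understood is the mechanism by which the model infers the task from such demonstrations, which is why its behavior on novel prompts is hard to predict.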