Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning

Published in NeurIPS, 2024
