In-Context Retrieval Augmented Language Models. TACL.
Ori Ram*, Yoav Levine*, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham.
Originally released on Feb’23.
Generating Benchmarks for Factuality Evaluation of Language Models. Preprint.
Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham.
Originally released on Jul’23.
Standing on the Shoulders of Giant Frozen Language Models. Preprint.
Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham.
Originally released on Apr’22.
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design. Spotlight paper, ICLR 2022.
Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, and Amnon Shashua.
Originally released on Oct’21.
Which Transformer Architecture Fits my Data? A Vocabulary Bottleneck in Self-Attention. ICML 2021.
Noam Wies, Yoav Levine, Daniel Jannai, and Amnon Shashua.
Originally released on May’21.
PMI-Masking: Principled Masking of Correlated Spans. Spotlight paper, ICLR 2021.
Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, and Yoav Shoham.
Originally released on Oct’20.