
Qin and Eisner 2021 – Learning How to Ask: Querying LMs with Mixtures of Soft Prompts

Qin2021

1. background

Language models like BERT memorize a lot of facts about the world. You can extract this knowledge by giving them cloze-style prompts, e.g., "__ is the first president of the U.S."
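For concreteness, here is a minimal sketch of this kind of cloze-style query using the HuggingFace transformers fill-mask pipeline (my example, not from the paper):

    # Cloze-style factual query against a masked LM (illustrative only).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # The blank in the cloze prompt becomes BERT's [MASK] token; the model
    # ranks single-token fillers for it.
    for pred in fill_mask("[MASK] is the first president of the U.S."):
        print(f"{pred['token_str']:>12}  {pred['score']:.3f}")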

2. key idea

The prompts we give when we query language models aren't necessarily optimized for extracting the information we want. What if the prompts were instead optimized for the express purpose of getting the facts we want as the answer?

3. methods and baselines

Given a relation, e.g. "x born-in-location y", learn an optimal prompt for that relation, then query the model with it. The learned prompts are "soft": sequences of continuous embedding vectors tuned by gradient descent rather than actual words. (Really they learn a small set of prompts per relation and mix their predictions with learned weights; a rough sketch follows.)
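A rough PyTorch sketch of that setup, my own simplification rather than the authors' code; the prompt layout, sizes, and the toy training example are invented for illustration:

    # Sketch: K trainable "soft prompts" (free embedding sequences) query a
    # frozen masked LM; their answer distributions are mixed with learned weights.
    import torch
    import torch.nn.functional as F
    from transformers import BertForMaskedLM, BertTokenizerFast

    tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
    mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
    mlm.eval()
    for p in mlm.parameters():        # the LM itself stays frozen
        p.requires_grad_(False)

    emb = mlm.get_input_embeddings()  # token-id -> embedding lookup
    K, PROMPT_LEN, DIM = 4, 5, emb.embedding_dim

    # The soft prompts and their (unnormalized) mixture weights are the only
    # trainable parameters.
    soft_prompts = torch.nn.Parameter(torch.randn(K, PROMPT_LEN, DIM) * 0.02)
    mix_logits = torch.nn.Parameter(torch.zeros(K))
    opt = torch.optim.Adam([soft_prompts, mix_logits], lr=1e-3)

    def answer_distribution(subject: str) -> torch.Tensor:
        """Mixture-of-prompts distribution over the [MASK] filler for one subject."""
        subj_ids = tok(subject, add_special_tokens=False,
                       return_tensors="pt").input_ids[0]
        cls_e = emb(torch.tensor([tok.cls_token_id]))
        sep_e = emb(torch.tensor([tok.sep_token_id]))
        mask_e = emb(torch.tensor([tok.mask_token_id]))
        subj_e = emb(subj_ids)
        dists = []
        for k in range(K):
            # Input layout (invented): [CLS] subject <soft prompt k> [MASK] [SEP]
            inputs = torch.cat([cls_e, subj_e, soft_prompts[k],
                                mask_e, sep_e]).unsqueeze(0)
            logits = mlm(inputs_embeds=inputs).logits
            mask_pos = inputs.shape[1] - 2
            dists.append(F.softmax(logits[0, mask_pos], dim=-1))
        weights = F.softmax(mix_logits, dim=0)   # mixture over the K prompts
        return (weights.unsqueeze(1) * torch.stack(dists)).sum(0)

    # One training step on a toy (subject, object) pair for "born-in-location".
    gold_id = tok.convert_tokens_to_ids("hawaii")
    loss = -torch.log(answer_distribution("barack obama")[gold_id] + 1e-9)
    opt.zero_grad()
    loss.backward()
    opt.step()

At query time you would feed a new subject through answer_distribution and read off the top-scoring token as the predicted object.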

Compare with the plain old text prompt, e.g. "x was born in __", with the subject filled in and the blank masked.

4. results

The learned soft prompts retrieve the correct answer more often than the plain text-prompt baselines.

5. bibliography

Qin, Guanghui and Jason Eisner. 2021. "Learning How to Ask: Querying LMs with Mixtures of Soft Prompts." Proceedings of NAACL-HLT 2021.

Created: 2024-07-15 Mon 01:28