Summarising ideas (usually methods) from some of the NLP papers at ICML.


Can’t comment much on trends, as there are mostly only one or two papers in each NLP subcategory. One common thread, though, is authors building on their own prior work: whenever I wondered why a paper used some less well-known model or method, the answer could almost always be found in the author list. Apparently this happens a lot at ICML in general.

Relying on the GPT-3 prompt paradigm

Fancy Inference

Probably Novel

Empirical ++

Architecture

Interpretability



High-Level Non-NLP Impressions

Neural Architecture: Papers here seem to be at the stage of controllable, multi-task mixture-of-experts models. People are trying to combine meta-learning, multi-task learning, and subspace learning.

Approximate Inference: There seemed to be few variational inference advances, and very few Bayesian NN papers at the conference. The variational inference papers were mostly tailored to specific architectures, and people seem to be working on sampling methods again.

NN Theory: Empirical NN papers without theory drew bigger crowds at poster sessions than the two-layer NN theory-proof papers. Simply more digestible/convincing/practical?!

Robustness, fairness, differential privacy: Very popular — big in the main conference and also in the workshops.


Acknowledgements

Conference attendance was generously supported by my advisor Kevin Duh. Also a shout-out to my ICML twin David Mueller, who has more ML-focused coverage -> here!