Papers-Notes

  1. Demystifying CLIP Data
  2. Learning Transferable Visual Models From Natural Language Supervision
  3. LoRA: Low-Rank Adaptation of Large Language Models
  4. FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
  5. Mathematics of Deep Learning
  6. Wasserstein GAN
  7. The Why and How of Nonnegative Matrix Factorization
  8. Densely Connected Convolutional Networks
  9. Learning Generative Models with Sinkhorn Divergences
  10. Improving GANs Using Optimal Transport
  11. Mask R-CNN
  12. Fully Convolutional Networks for Semantic Segmentation
  13. Improving Sequence-To-Sequence Learning Via Optimal Transport
  14. Memory-Efficient Implementation of DenseNets
  15. Attention Is All You Need
  16. Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
  17. Optimal Transport for Domain Adaptation
  18. Large Scale Optimal Transport and Mapping Estimation
  19. Auto-Encoding Variational Bayes
  20. Label Efficient Learning of Transferable Representations across Domains and Tasks
  21. Stacked What-Where Auto-Encoders
  22. Unsupervised Data Augmentation for Consistency Training
  23. Towards Federated Learning at Scale: System Design
  24. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  25. Notification Volume Control and Optimization System at Pinterest
  26. Class-Balanced Loss Based on Effective Number of Samples
  27. Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts

Demystifying CLIP Data

References


Learning Transferable Visual Models From Natural Language Supervision

References


LoRA: Low-Rank Adaptation of Large Language Models

References


FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance

\[\max_{s} \;\; \mathbb{E}_{(q,a) \in (Q \times A)} \left[ r(a, \hat{a}(s,q)) \right] \quad \text{subject to} \quad \mathbb{E}_{(q,a) \in (Q \times A)} \left[ c(s,q) \right] \leq b\]
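
In words: choose a query-answering strategy \(s\) that maximizes the expected reward \(r\) of the generated answer \(\hat{a}(s,q)\) against the reference answer \(a\), while keeping the expected cost \(c(s,q)\) within budget \(b\). Below is a minimal sketch of the LLM-cascade strategy the paper studies, assuming hypothetical `generate` and `score` functions; it is illustrative, not the paper's implementation.

```python
# LLM cascade: query models from cheapest to most expensive and stop as
# soon as a (hypothetical) quality scorer accepts the answer, trading
# reward against cost as in the budgeted objective above.

def cascade_answer(query, models, score, threshold=0.8):
    """models: list of (name, cost_per_call, generate_fn), cheapest first."""
    total_cost = 0.0
    answer = None
    for name, cost, generate in models:
        answer = generate(query)
        total_cost += cost
        if score(query, answer) >= threshold:  # accept; skip pricier models
            break
    return answer, total_cost
```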

References


Mathematics of Deep Learning

References


Wasserstein GAN

References


The Why and How of Nonnegative Matrix Factorization

\[x_j \approx \sum_{k=1}^{r} w_k h_j(k) \;\;\text{for some weights}\;\; h_j \in \mathbb{R}^{r}\]
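
A small numpy sketch of one classic way to fit such a factorization, the multiplicative updates of Lee and Seung for \(\min \|X - WH\|_F^2\) with \(W, H \geq 0\); the survey covers this and several other algorithms, and the names and iteration count here are illustrative.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10):
    """Multiplicative-update NMF: X (m x n) ~ W (m x r) @ H (r x n)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update coefficients H
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update basis W
    return W, H
```

Each data column \(x_j\) is then approximated by the nonnegative combination \(W h_j\), i.e. `W @ H[:, j]`.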

References


Densely Connected Convolutional Networks

References


Learning Generative Models with Sinkhorn Divergences

\[d^{\lambda}_M(r,c) = \min_{P \in U(r,c)} \; \sum_{i,j} P_{ij} M_{ij} - \frac{1}{\lambda} h(P)\]
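
This entropy-regularized problem can be solved with the Sinkhorn-Knopp fixed-point iteration, which alternately rescales the kernel \(K = e^{-\lambda M}\) until the plan has marginals \(r\) and \(c\); the paper trains generative models by differentiating through such iterations. A minimal numpy sketch with illustrative names:

```python
import numpy as np

def sinkhorn(r, c, M, lam=10.0, n_iter=100):
    """Entropy-regularized OT cost between histograms r (m,) and c (n,), cost M (m, n)."""
    K = np.exp(-lam * M)              # elementwise kernel e^{-lambda * M}
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)             # match column marginals
        u = r / (K @ v)               # match row marginals
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return np.sum(P * M)              # transport cost <P, M>
```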

References


Improving GANs Using Optimal Transport

\[\mathcal{D}^2_{MED}(p, g) = 2\mathbb{E}[\mathbb{W}_c(\mathbf{X}, \mathbf{Y})] - \mathbb{E}[\mathbb{W}_c(\mathbf{X}, \mathbf{X'})] - \mathbb{E}[\mathbb{W}_c(\mathbf{Y}, \mathbf{Y'})]\]

where \(\mathbf{X}, \mathbf{X'}\) are independently sampled mini-batches from the distribution \(p\), and \(\mathbf{Y}, \mathbf{Y'}\) are independent mini-batches from \(g\).
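
A single-batch estimator of this quantity is straightforward given any transport cost \(\mathbb{W}_c\) between mini-batches (for example, an entropy-regularized Sinkhorn distance as in the note above); a hedged sketch, not the authors' code:

```python
def minibatch_energy_distance(W_c, X, X2, Y, Y2):
    """One-sample estimate: X, X2 are independent batches from p; Y, Y2 from g."""
    return 2 * W_c(X, Y) - W_c(X, X2) - W_c(Y, Y2)
```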

References


Mask R-CNN

\[L = L_{cls} + L_{box} + L_{mask}\]

References


Fully Convolutional Networks for Semantic Segmentation

References


Improving Sequence-To-Sequence Learning Via Optimal Transport

\[\mathcal{L} = \mathcal{L}_{MLE} + \gamma_1 \ \mathcal{L}_{copy} + \gamma_2 \ \mathcal{L}_{seq}\]

References


Memory-Efficient Implementation of DenseNets

References


Attention Is All You Need

References


Analyzing and Improving Representations with the Soft Nearest Neighbor Loss

References


Optimal Transport for Domain Adaptation

References


Large Scale Optimal Transport and Mapping Estimation

References


Auto-Encoding Variational Bayes

References


Label Efficient Learning of Transferable Representations across Domains and Tasks

References


Stacked What-Where Auto-Encoders

References


Unsupervised Data Augmentation for Consistency Training

\[\min_{\theta} \; \mathcal{J}_{UDA}(\theta) = \mathbb{E}_{x \in U} \, \mathbb{E}_{\hat{x} \sim q(\hat{x} \mid x)} \left[ \mathcal{D} \left( p_{\tilde{\theta}}(y \mid x) \;\|\; p_{\theta}(y \mid \hat{x}) \right) \right]\]

where \(q(\hat{x} \mid x)\) is a data augmentation transformation and \(\tilde{\theta}\) is a fixed copy of the current parameters \(\theta\), so gradients only flow through the prediction on the augmented example.
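
A schematic numpy version of this consistency term for a single unlabeled example, assuming we already have the class distributions predicted on \(x\) (held fixed) and on its augmented copy \(\hat{x}\); names are illustrative.

```python
import numpy as np

def uda_consistency(p_fixed, p_aug, eps=1e-12):
    """KL(p_fixed || p_aug): p_fixed predicted on x (gradients stopped), p_aug on x_hat."""
    p_fixed = np.clip(p_fixed, eps, 1.0)
    p_aug = np.clip(p_aug, eps, 1.0)
    return float(np.sum(p_fixed * (np.log(p_fixed) - np.log(p_aug))))
```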

References


Towards Federated Learning at Scale: System Design

References


BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

References


Notification Volume Control and Optimization System at Pinterest

References


Class-Balanced Loss Based on Effective Number of Samples

\[E_n = \frac{1 - \beta^n}{1 - \beta}, \quad \text{where} \;\; \beta = \frac{N - 1}{N}\]

\[\text{CB}(p, y) = \frac{1}{E_{n_y}} \mathcal{L}(p,y) = \frac{1 - \beta}{1 - \beta^{n_y}} \mathcal{L}(p,y)\]
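
A short numpy sketch of the per-class weights this implies; \(\beta = 0.9999\) is one of the values explored in the paper, and normalizing the weights to sum to the number of classes follows the paper's convention.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.9999):
    """Weight (1 - beta) / (1 - beta^{n_y}) = 1 / E_{n_y} per class, normalized to sum to C."""
    n = np.asarray(samples_per_class, dtype=float)
    weights = (1.0 - beta) / (1.0 - np.power(beta, n))
    return weights * len(n) / weights.sum()
```

These weights then rescale any base loss \(\mathcal{L}(p, y)\), such as softmax cross-entropy or focal loss.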

References


Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts

\[y_k = h^k (f^k(x)), \quad \text{where} \quad f^k(x) = \sum^n_{i=1} g^k(x)_i \, f_i(x)\]
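
A toy numpy forward pass matching this equation, with \(n\) shared experts \(f_i\), one softmax gate \(g^k\) per task, and a linear tower \(h^k\); all shapes, activations, and parameter names are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mmoe_forward(x, expert_Ws, gate_Ws, tower_Ws):
    """expert_Ws: n matrices (h, d); gate_Ws (n, d) and tower_Ws (o, h): one per task k."""
    experts = [np.tanh(W @ x) for W in expert_Ws]               # f_i(x)
    outputs = []
    for Wg, Wt in zip(gate_Ws, tower_Ws):
        g = softmax(Wg @ x)                                     # g^k(x), sums to 1
        f_k = sum(g[i] * e_i for i, e_i in enumerate(experts))  # f^k(x)
        outputs.append(Wt @ f_k)                                # y_k = h^k(f^k(x))
    return outputs
```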

References