# Domain Transformer: Predicting Samples of Unseen, Future Domains

This approach predicts samples with labels(!) from unseen, future domains. Other approaches (e.g., from unsupervised domain adaptation) can only assign labels to samples from (unseen) domains; they do not generate the samples themselves. The method learns a transformer that transforms samples between target domains.
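The core idea can be illustrated with a toy sketch (this is not the paper's implementation): learn a network that maps samples from one domain to the next, then compose that mapping to predict samples of a future, unseen domain while their labels travel with them. The drifting-Gaussian domains, the fixed per-step `shift`, and the assumption of paired samples `(x_t, x_{t+1})` are all simplifications for illustration.

```python
# Illustrative sketch, NOT the paper's code: domains are 2-D Gaussians
# drifting by a fixed shift per time step; paired samples across
# consecutive domains are assumed available for supervision.
import torch
import torch.nn as nn

torch.manual_seed(0)
shift = torch.tensor([1.0, 0.5])      # per-step domain drift (assumed)
base = torch.randn(256, 2)            # labeled samples of domain 0
dom0, dom1 = base, base + shift       # paired samples of domains 0 and 1

# Small MLP playing the role of the domain transformer.
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(dom0), dom1)  # learn dom0 -> dom1
    loss.backward()
    opt.step()

# Compose the learned transformation twice to predict the unseen domain 2.
# Labels carry over: transforming a sample does not change its label.
with torch.no_grad():
    pred_dom2 = net(net(dom0))
true_dom2 = base + 2 * shift
err = (pred_dom2 - true_dom2).norm(dim=1).mean().item()
print(f"mean prediction error on unseen domain: {err:.3f}")
```

The composition step is what distinguishes this from plain domain adaptation: the network is applied beyond the domains it was trained on, producing labeled samples for a domain it has never seen.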

Paper: Johannes Schneider, "Domain Transformer: Predicting Samples of Unseen, Future Domains", IJCNN 2022.

PDF: https://arxiv.org/abs/2106.06057

License: Use it however you like, but please cite the paper :-)

Usage:

The source code is written in PyTorch. Computation takes a while. Run `runExp.py`, e.g., `python runExp.py`.