In this work, we proposed GOLOMB, a GOaL-Oriented Multi-task BERT-based dialogue state tracker inspired by architectures for reading-comprehension question answering systems. The model "queries" the dialogue history with descriptions of slots and services, as well as with the possible values of slots. This design allows the model to transfer slot values across domains in multi-domain dialogues and gives it the capability to scale to unseen slot types. Our model achieved a joint goal accuracy of 53.97% on the SGD dataset, outperforming the baseline model.