Marcos Treviso
Instituto de Telecomunicações, Lisbon, Portugal
I am currently a post-doctoral researcher within the UTTER Project in Lisbon. My research interests lie in the explainability and efficiency of NLP models.
I successfully defended my Ph.D. thesis in July 2023 with Distinction and Honour, the highest honours awarded by my university. I was advised by Prof. André Martins at IST / University of Lisbon. My Ph.D. was funded by the ERC DeepSPIN Project.
My thesis focused on the role of sparsity for interpretability in NLP, in particular for the task of Machine Translation Quality Estimation. During my Ph.D., I also contributed to work on Efficient Transformers and Continuous Attention Mechanisms, co-authoring several papers in top-tier venues such as NeurIPS and ACL.
Before that, I interned at Unbabel, where I helped develop OpenKiwi, an MT Quality Estimation tool that received the Best Demo Paper Award at ACL 2019 and was the backbone of our winning submission to the WMT19 QE shared task.
I obtained my M.Sc. in Computer Science and Computational Mathematics at the University of São Paulo (USP) under the supervision of Prof. Sandra M. Aluísio. My research involved using convolutional and recurrent neural networks combined with CRFs to detect sentence boundaries and speech disfluencies in speech transcripts.