
2025-05-30 09:55:00
This https://arxiv.org/abs/2503.04779 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csPL_…
Semantics-Aware Human Motion Generation from Audio Instructions
Zi-An Wang, Shihao Zou, Shiyao Yu, Mingyuan Zhang, Chao Dong
https://arxiv.org/abs/2505.23465
This https://arxiv.org/abs/2501.13114 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
Recursive Difference Categories and Topos-Theoretic Universality
Andreu Ballus Santacana
https://arxiv.org/abs/2505.22931
Last week, we continued our #ISE2025 lecture on distributional semantics with the introduction of neural language models (NLMs) and compared them to traditional statistical n-gram models.
Benefits of NLMs over statistical n-gram models (see the toy sketch after this list):
- Capturing Long-Range Dependencies
- Computational and Statistical Tractability
- Improved Generalisation
- Higher Accuracy
@…
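To make the contrast concrete, here is a minimal sketch (toy corpus and helper names assumed, not from the lecture): a count-based bigram model assigns zero probability to any word pair unseen in training, which is exactly the sparsity and generalisation problem that NLMs mitigate with dense representations.

from collections import defaultdict, Counter

# Toy corpus (assumed for illustration).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigram_counts[w1][w2] += 1

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate of P(w2 | w1) from raw counts.
    total = sum(bigram_counts[w1].values())
    return bigram_counts[w1][w2] / total if total else 0.0

print(bigram_prob("the", "cat"))   # seen bigram -> 0.25
print(bigram_prob("the", "sofa"))  # unseen bigram -> 0.0 without smoothing

Every count table like this grows exponentially with n and cannot share statistics between related words ("cat"/"dog"); an NLM's dense vectors address both problems at once.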
This https://arxiv.org/abs/2502.18200 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_ees…
An instance of FreeCHR with refined operational semantics
Sascha Rechenberger, Thom Frühwirth
https://arxiv.org/abs/2505.22155
The complexity of deciding characteristic formulae modulo nested simulation
Luca Aceto, Antonis Achilleos, Aggeliki Chalki, Anna Ingolfsdottir
https://arxiv.org/abs/2505.22277
Bridging the Gap Between Semantic and User Preference Spaces for Multi-modal Music Representation Learning
Xiaofeng Pan, Jing Chen, Haitong Zhang, Menglin Xing, Jiayi Wei, Xuefeng Mu, Zhongqian Xie
https://arxiv.org/abs/2505.23298
This https://arxiv.org/abs/2505.02548 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
In the #ISE2025 lecture today we introduced our students to the concept of distributional semantics as the foundation of modern large language models. Historically, Wittgenstein was one of the important figures in the philosophy of language, stating that "The meaning of a word is its use in the language."
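A minimal sketch of the distributional idea (toy sentences assumed, not from the lecture): represent each word by the counts of the words it co-occurs with, so that words used in similar contexts, here "cat" and "dog", end up with similar vectors.

from collections import Counter

sentences = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
]

def context_vector(target, window=2):
    # Count every word within `window` positions of each occurrence of `target`.
    vec = Counter()
    for s in sentences:
        toks = s.split()
        for i, tok in enumerate(toks):
            if tok == target:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        vec[toks[j]] += 1
    return vec

print(context_vector("cat"))  # shares context words ("the", "drinks") with "dog"
print(context_vector("dog"))

Modern embedding models replace these sparse count vectors with learned dense ones, but the underlying "meaning from use" assumption is the same.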
Next stop in our NLP timeline is 2013, the introduction of low-dimensional dense word vectors - so-called "word embeddings" - based on distributional semantics, e.g. word2vec by Mikolov et al. from Google, which enabled representation learning on text. A minimal training sketch follows below.
T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space.
…
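A minimal word2vec training sketch, assuming gensim (4.x) is installed; the toy corpus is illustrative, not the setup used by Mikolov et al. word2vec learns dense vectors such that words appearing in similar contexts land close together in the embedding space.

from gensim.models import Word2Vec

# Toy corpus (assumed for illustration): a list of tokenised sentences.
sentences = [
    "the cat drinks milk".split(),
    "the dog drinks water".split(),
    "the cat chases the dog".split(),
]

# sg=1 selects the skip-gram variant from the 2013 paper.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, seed=1)

print(model.wv["cat"].shape)          # a 50-dimensional dense embedding
print(model.wv.most_similar("cat"))   # nearest neighbours by cosine similarity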