Tootfinder

@lysander07@sigmoid.social
2025-05-09 08:41:35

Beginning in the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.
F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA
#NLP

Slide from Information Service Engineering 2025, Lecture 02, Natural Language Processing 01, A Brief History of NLP, NLP timeline. The timeline is located in the middle of the slide, running from top to bottom. The pointer on the timeline indicates the 1990s. On the left, the conditional probability of a word, given a preceding sequence of words, is shown as a formula. Below, an AI-generated portrait of William Shakespeare is displayed with 4 speech bubbles, representing artificially generated tex…
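
The formula on the slide is the conditional probability of a word given the words before it, which an n-gram model approximates from corpus counts. As a rough illustration only (the toy corpus below is made up, not taken from the slide or the lecture), a maximum-likelihood bigram model can be sketched like this:

```python
from collections import Counter

# Illustrative sketch, not the lecture's material: a maximum-likelihood bigram model,
# i.e. P(w_i | w_{i-1}) ~ count(w_{i-1}, w_i) / count(w_{i-1}),
# the n = 2 case of the conditional-probability formula shown on the slide.
corpus = "the cat sat on the mat the cat ate".split()  # toy stand-in for a large text collection

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev: str, word: str) -> float:
    """Relative-frequency estimate of P(word | prev)."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("the", "cat"))  # 2/3: "the" occurs three times, twice followed by "cat"
```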
@arXiv_qfinST_bot@mastoxiv.page
2025-06-10 09:54:32

The Hype Index: an NLP-driven Measure of Market News Attention
Zheng Cao, Wanchaloem Wunkaew, Helyette Geman
arxiv.org/abs/2506.06329

@lysander07@sigmoid.social
2025-05-11 13:16:51

The next stop in our NLP timeline is 2013 and the introduction of low-dimensional dense word vectors - so-called "word embeddings" - based on distributional semantics, e.g. word2vec by Mikolov et al. from Google, which enabled representation learning on text.
T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space.

Slide from the Information Service Engineering 2025 lecture, lecture 02, Natural Language Processing 01, NLP Timeline. The timeline is in the middle of the slide from top to bottom, indicating a marker at 2013. On the left, a diagram displays vectors for "man" and "woman" in a 2D plot. An arrow leads from the point for "man" to the point for "woman". Above it, there is also a point marked for "king", and the same difference vector is transferred from "man -> woman" to "king - ?…
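
The analogy on the slide (vector("king") - vector("man") + vector("woman") lands near vector("queen")) is easy to reproduce with pretrained word2vec embeddings. A minimal sketch, assuming gensim and its downloadable Google News vectors (roughly 1.6 GB) are available:

```python
# Illustrative sketch only: query pretrained word2vec embeddings for the
# "king - man + woman" analogy shown on the slide.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # KeyedVectors with 300-dimensional embeddings

# most_similar adds the "positive" vectors, subtracts the "negative" one,
# and returns the nearest words by cosine similarity.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# Typically ranks "queen" first.
```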
@arXiv_csCY_bot@mastoxiv.page
2025-06-03 07:21:21

Optimizing Storytelling, Improving Audience Retention, and Reducing Waste in the Entertainment Industry
Andrew Cornfeld, Ashley Miller, Mercedes Mora-Figueroa, Kurt Samuels, Anthony Palomba
arxiv.org/abs/2506.00076

@lysander07@sigmoid.social
2025-05-07 09:59:49

With the advent of ELIZA, Joseph Weizenbaum's psychotherapist chatbot, NLP took another major step: pattern-based substitution algorithms built on simple regular expressions.
Weizenbaum, Joseph (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM. 9(1): 36–45.

Slide from the Information Service Engineering 2025 lecture slide deck, lecture 02, Natural Language Processing 01, Excursion: A Brief History of NLP, NLP timeline
On the right side of the image, a historic text-terminal screenshot of the start of an ELIZA dialogue is depicted. The timeline in the middle of the picture (from top to bottom) indicates the year 1966. The text to the left of the timeline reads: ELIZA was an early natural language processing computer program created from 1964 to 1966 at the MIT A…
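
Weizenbaum's original implementation used keyword-and-decomposition rules in a script language, but the pattern-based substitution idea the toot describes can be sketched with modern regular expressions. The rules and reflections below are invented for illustration, not taken from the ELIZA script:

```python
import re

# ELIZA-style sketch: match the user's utterance against simple regular
# expressions and echo the captured fragment back in a canned reply,
# after swapping first- and second-person words.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap pronouns so the captured fragment reads naturally in the reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(respond("I need a vacation"))           # Why do you need a vacation?
print(respond("I am worried about my work"))  # How long have you been worried about your work?
```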
@arXiv_csGR_bot@mastoxiv.page
2025-06-03 07:24:23

Silence is Golden: Leveraging Adversarial Examples to Nullify Audio Control in LDM-based Talking-Head Generation
Yuan Gan, Jiaxu Miao, Yunze Wang, Yi Yang
arxiv.org/abs/2506.01591

@arXiv_csIR_bot@mastoxiv.page
2025-06-03 07:21:20

Query Drift Compensation: Enabling Compatibility in Continual Learning of Retrieval Embedding Models
Dipam Goswami, Liying Wang, Bartłomiej Twardowski, Joost van de Weijer
arxiv.org/abs/2506.00037

@arXiv_csDL_bot@mastoxiv.page
2025-06-05 07:17:26

Enhancing Automatic PT Tagging for MEDLINE Citations Using Transformer-Based Models
Victor H. Cid, James Mork
arxiv.org/abs/2506.03321

@arXiv_econGN_bot@mastoxiv.page
2025-06-03 16:34:48

The paper arxiv.org/abs/2504.15448 has been replaced.
initial toot: mastoxiv.page/@arXiv_eco…