Tootfinder

Opt-in global Mastodon full text search. Join the index!

@lysander07@sigmoid.social
2025-05-17 07:38:59

In our #ISE2025 lecture last Wednesday, we learned how n-gram language models, via the Markov assumption and maximum likelihood estimation, let us predict the probability of a word occurring given a specific context (i.e. the n preceding words in the sequence).
#NLP

Slide from the Information Service Engineering 2025 lecture, 03 Natural Language Processing 02, 2.9, Language Models:
Title: N-Gram Language Model
The probability of a sequence of words can be computed via conditional probability and the Bayes rule (including the chain rule for n words). Approximation is performed via the Markov assumption (dependence only on the last n words) and maximum likelihood estimation (approximating the probabilities of a sequence of words by counting and normalising …
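
As a quick illustration (not taken from the lecture materials): a minimal bigram sketch of the maximum likelihood estimate, P(w_i | w_{i-1}) = count(w_{i-1} w_i) / count(w_{i-1}), with a tiny made-up corpus.

# Minimal sketch (not from the lecture): a bigram model estimated by
# maximum likelihood, i.e. counting bigrams and normalising by the
# count of the context word.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev_word, word):
    """MLE estimate of P(word | prev_word); 0.0 if prev_word is unseen."""
    if unigram_counts[prev_word] == 0:
        return 0.0
    return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

print(bigram_prob("the", "cat"))  # 2/3: two of the three occurrences of "the" are followed by "cat"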
@JGraber@mastodon.social
2025-08-08 17:26:41

#Python Friday #290: Record Audio With PyAudio - #ai #nlp #audio
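
For readers who want to try this before reading the post, a minimal sketch of recording a few seconds of microphone audio with PyAudio and saving it via the standard-library wave module (rate, chunk size, duration and filename are arbitrary choices; the post itself may do it differently):

# Minimal sketch: record ~5 seconds of mono 16-bit audio at 44.1 kHz with
# PyAudio and write it to a WAV file.
import wave
import pyaudio

RATE, CHUNK, SECONDS = 44100, 1024, 5

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)
frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * SECONDS))]
stream.stop_stream()
stream.close()

with wave.open("recording.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(p.get_sample_size(pyaudio.paInt16))
    wf.setframerate(RATE)
    wf.writeframes(b"".join(frames))

p.terminate()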

@JGraber@mastodon.social
2025-07-04 13:12:18

#Python Friday #286: Advanced Text-to-Speech With Coqui TTS - #ai #nlp
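
A minimal sketch of synthesising speech with Coqui TTS; the pretrained model name below is just one of the English models Coqui ships, and the post may use a different one:

# Minimal sketch: synthesise a sentence to a WAV file with Coqui TTS.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello from Coqui TTS.", file_path="coqui_output.wav")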

@JGraber@mastodon.social
2025-05-30 09:24:29

#Python Friday #281: Language Detection in Python - #ai #nlp
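
The post title doesn't say which library is used; a minimal sketch with langdetect, one common choice for this task:

# Minimal sketch: detect the language of a string with langdetect.
# detect() returns an ISO 639-1 code; detect_langs() also returns probabilities.
from langdetect import detect, detect_langs

print(detect("Das ist ein kurzer deutscher Satz."))        # e.g. 'de'
print(detect_langs("This sentence is clearly English."))   # e.g. [en:0.99...]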

@JGraber@mastodon.social
2025-06-27 11:31:13

#Python Friday #285: Intermediate Text-to-Speech With Pyttsx3 #ai #nlp
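
A minimal sketch of offline text-to-speech with pyttsx3, which drives the platform's native speech engine (e.g. SAPI5 on Windows, NSSpeechSynthesizer on macOS); the speaking rate below is an arbitrary choice:

# Minimal sketch: speak a sentence with pyttsx3.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)   # words per minute
engine.say("Hello from pyttsx3.")
engine.runAndWait()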

@JGraber@mastodon.social
2025-06-20 06:46:08

#Python Friday #284: Basic Text-to-Speech With Google Translate #ai #nlp
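
A minimal sketch, assuming the post uses the gTTS package, which wraps Google Translate's text-to-speech endpoint (requires an internet connection):

# Minimal sketch: synthesise speech via Google Translate using gTTS
# and save it as an MP3 file.
from gtts import gTTS

gTTS("Hello from Google Translate text to speech.", lang="en").save("gtts_output.mp3")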