"En el caso de las izquierdas, hay algunas izquierdas que cuando se trata de Cuba miran para otro lado sin entender que Cuba tiene muchas funciones en la batalla cultural de las izquierdas del hemisferio occidental."
"In the case of the left, there are some sectors that, when it comes to Cuba, look the other way, without understanding that Cuba serves many functions in the cultural battle of the Western hemisphere's left."
Iramis R. Cárdenas:
As salty as I am about it, there's also another way to think about this. For anyone who still has connections to folks on the right (perhaps unlikely for anyone on this server, but I digress), the cult that has consumed them thrives on isolation and grievance.
The words "you were right" have the potential to cut through the programming and open up an opportunity for reconnection. The modern conspiratorial cult of the Right has been built partially around people who were told they were wrong or were crazy. In the vast majority of cases, they were wrong and even when they were right they completely misunderstood why, but we'll skip that for now. Liberals making fun of them (even the times when they definitely earned it) has pushed them further and further into their ideological hole.
The thing about those words, "you were right," in this context is that the way they offer reconnection also requires them to take one little step of betraying their ideology to accept them. So they must choose between maintaining allegiance to a pedophile or finally getting to feel superior after years of living in an illusion of persecution.
Under the ideology of the Right, admitting one is wrong is a weakness. It is admitting defeat. They have to "own the libs" by saying things they know aren't true in order to feel dominant. But these things are often so absurd that they end up being mocked, leaving them feeling even weaker and more pathetic, reinforcing their fear and alienation.
Offering what they're looking for can offer a way out, but only if they're willing to start to recognize the thing they've supported for what it is.
And they were right about some things. They were right that Bill Gates was a terrible person. I've had plenty of liberals defend him based on his philanthropy washing, but he's awful and always has been. The Epstein links make that blatant. They intuitively recognized him and didn't trust him, even if they were wildly off base about *how and why* he shouldn't be trusted... Even if their correct mistrust was leveraged into one of the most destructive conspiracy theories ever (vaccine denial and COVID vaccine avoidance).
They were right about Bill Clinton. He was always shady as fuck. Sure, the people who attacked him at the time turned out to be even more shady but that's not the point right now. He was connected to Epstein and that was always creepy as fuck.
And the Epstein thing was an open secret that liberals ignored for a long time. It was seen as some weird thing that right-wing nutjobs believed about the Clintons. But it was true. Not all of it, and there has always been an antisemitic element to the right-wing interpretation of the Epstein stuff, but his whole pedophile conspiracy was always kind of real.
The whole "Illuminati"/deep state thing is a vast oversimplification, an attempt to make comprehensible an incredibly complex set of interlocking and emergent behaviors. But Epstein did very much want to remake the world, to create a new world order, and he absolutely played a part in it.
The Right wing nutjobs talked about global authoritarianism, Blackhawks flying over American cities, masked men with guns disarming and executing legal gun owners in the streets. That's all happening right now.
The "FEMA concentration camps" are not actually that far off. ICE and FEMA are sister agencies, both under DHS. I'd be more than happy to call that one "close enough" in order to hear some MAGA admit that ICE is, in fact, building concentration camps.
There was always a huge millennialist element to these things. They tended to be connected to "the antichrist." It was absurd, especially for me as someone who no longer identifies as a Christian. But I'll even concede that to a degree. The "number of the Beast" is 666. That's just the sum of the Hebrew spelling of "Nero Caesar." Revelation focuses a lot on Nero coming back to life after his death, a violent death often linked to the mortal wound in Revelation 13:3:
> And I saw one of his heads as if it had been mortally wounded, and his deadly wound was healed. And all the world marveled and followed the beast.
The parallels between Trump and Nero are easy to draw, and Trump's ear wound feels pretty on-the-nose for this. I don't believe in "prophecy" in this way. I think that there are patterns, and useful patterns can become encoded in belief systems. But I will, again, happily call this one "close enough" for anyone on that side willing to also acknowledge it. I'm happy to meet on that common ground, because anyone who accepts it must recognize that their duty is to fight against it.
A lot of these correct nuggets are embedded in a framework of religious extremism and antisemitism. The vast majority of the beliefs holding these together are wildly wrong and incredibly toxic. But giving people some room to feel validated, listened to, and understood can give them some room to admit the things they got wrong.
Cult de-programming starts with an opening. People have to talk through their own thoughts, hear their own inconsistencies. Guiding questions can help them untangle these things for themselves. And it all starts by having enough room to feel safe, to not feel cornered, to not feel stupid. Admitting mistakes means being vulnerable, and the MAGA cult is built on fear. It's built on exploiting vulnerability and locking it away.
De-programming takes a long time. It's not easy. It takes patience. But every person who comes out does so with a powerful perspective, a deep understanding, that can be turned back against it. The best people at getting people out of cults are former members. Some of the most dedicated antifa are former fascists who understood their mistakes and dedicate their lives to fixing them.
German court convicts alleged mastermind behind global investment scam network https://therecord.media/german-court-convicts-alleged-mastermind-scam-network
🔁 Good tests are the first feedback loop — #AIAgents iterate until the tests go green
🥞 Any stack is your stack — conceptual understanding transfers even without hands-on framework experience
🤖 Agents are not just for coding — UX, infra, ops, analytics, ad campaigns, even accounting workflows
🧠 The context bottleneck is in your head — not in the model
🏗️ Build over Buy …
Crosslisted article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[1/2]:
- Bridge-RAG: An Abstract Bridge Tree Based Retrieval Augmented Generation Algorithm With Cuckoo Fi...
Li, Liu, Zong, Tao, Dai, Ren, Liu, Jiang, Yang
https://arxiv.org/abs/2603.26668 https://mastoxiv.page/@arXiv_csIR_bot/116322781593134028
- SRAG: RAG with Structured Data Improves Vector Retrieval
Shalin Shah, Srikanth Ryali, Ramasubbu Venkatesh
https://arxiv.org/abs/2603.26670 https://mastoxiv.page/@arXiv_csIR_bot/116322784870180864
- LITTA: Late-Interaction and Test-Time Alignment for Visually-Grounded Multimodal Retrieval
Seonok Kim
https://arxiv.org/abs/2603.26683 https://mastoxiv.page/@arXiv_csIR_bot/116322841916406330
- Agentic AI for Human Resources: LLM-Driven Candidate Assessment
Yuksel, Anees, Elneima, Hewavitharana, Al-Badrashiny, Sawaf
https://arxiv.org/abs/2603.26710 https://mastoxiv.page/@arXiv_csIR_bot/116322937601675587
- SEAR: Schema-Based Evaluation and Routing for LLM Gateways
Zecheng Zhang, Han Zheng, Yue Xu
https://arxiv.org/abs/2603.26728 https://mastoxiv.page/@arXiv_csDB_bot/116322627580095245
- SleepVLM: Explainable and Rule-Grounded Sleep Staging via a Vision-Language Model
Guifeng Deng, Pan Wang, Jiquan Wang, Shuying Rao, Junyi Xie, Wanjun Guo, Tao Li, Haiteng Jiang
https://arxiv.org/abs/2603.26738 https://mastoxiv.page/@arXiv_csCV_bot/116322739676378309
- Aesthetic Assessment of Chinese Handwritings Based on Vision Language Models
Chen Zheng, Yuxuan Lai, Haoyang Lu, Wentao Ma, Jitao Yang, Jian Wang
https://arxiv.org/abs/2603.26768 https://mastoxiv.page/@arXiv_csCV_bot/116323078149576728
- Learning to Select Visual In-Context Demonstrations
Eugene Lee, Yu-Chi Lin, Jiajie Diao
https://arxiv.org/abs/2603.26775 https://mastoxiv.page/@arXiv_csLG_bot/116322648878995047
- CRISP: Characterizing Relative Impact of Scholarly Publications
Hannah Collison, Benjamin Van Durme, Daniel Khashabi
https://arxiv.org/abs/2603.26791 https://mastoxiv.page/@arXiv_csDL_bot/116322621679820997
- GroupRAG: Cognitively Inspired Group-Aware Retrieval and Reasoning via Knowledge-Driven Problem S...
Xinyi Duan, Yuanrong Tang, Jiangtao Gong
https://arxiv.org/abs/2603.26807 https://mastoxiv.page/@arXiv_csIR_bot/116322959557860848
- In your own words: computationally identifying interpretable themes in free-text survey data
Jenny S Wang, Aliya Saperstein, Emma Pierson
https://arxiv.org/abs/2603.26930 https://mastoxiv.page/@arXiv_csCY_bot/116322780637316287
- Multilingual Stutter Event Detection for English, German, and Mandarin Speech
Felix Haas, Sebastian P. Bayerl
https://arxiv.org/abs/2603.26939 https://mastoxiv.page/@arXiv_csSD_bot/116322704289189130
- FormalProofBench: Can Models Write Graduate Level Math Proofs That Are Formally Verified?
Ravi, Ying, Nesterov, Krishnan, Uskuplu, Xia, Aswedige, Nashold
https://arxiv.org/abs/2603.26996 https://mastoxiv.page/@arXiv_csAI_bot/116322625941412681
- PHONOS: PHOnetic Neutralization for Online Streaming Applications
Waris Quamer, Mu-Ruei Tseng, Ghady Nasrallah, Ricardo Gutierrez-Osuna
https://arxiv.org/abs/2603.27001 https://mastoxiv.page/@arXiv_eessAS_bot/116322763598554193
- ChartNet: A Million-Scale, High-Quality Multimodal Dataset for Robust Chart Understanding
Jovana Kondic, et al.
https://arxiv.org/abs/2603.27064 https://mastoxiv.page/@arXiv_csCV_bot/116323214468792735
- daVinci-LLM: Towards the Science of Pretraining
Qin, Liu, Mi, Xie, Huang, Si, Lu, Feng, Wu, Liu, Luo, Hou, Guo, Qiao, Liu
https://arxiv.org/abs/2603.27164 https://mastoxiv.page/@arXiv_csAI_bot/116322653467105951
- LightMover: Generative Light Movement with Color and Intensity Controls
Zhou, Wang, Kim, Shu, Yu, Hold-Geoffroy, Chaturvedi, Wu, Lin, Cohen
https://arxiv.org/abs/2603.27209 https://mastoxiv.page/@arXiv_csCV_bot/116323263295656104
- Self-evolving AI agents for protein discovery and directed evolution
Tan, Zhang, Li, Yu, Zhong, Zhou, Dong, Hong
https://arxiv.org/abs/2603.27303 https://mastoxiv.page/@arXiv_csAI_bot/116322838641595927
- Inference-Time Structural Reasoning for Compositional Vision-Language Understanding
Amartya Bhattacharya
https://arxiv.org/abs/2603.27349 https://mastoxiv.page/@arXiv_csCV_bot/116323280006044500
- LLM Readiness Harness: Evaluation, Observability, and CI Gates for LLM/RAG Applications
Alexandre Cristovão Maiorano
https://arxiv.org/abs/2603.27355 https://mastoxiv.page/@arXiv_csAI_bot/116322987708962414
- Heterogeneous Debate Engine: Identity-Grounded Cognitive Architecture for Resilient LLM-Based Eth...
Jakub Masłowski, Jarosław A. Chudziak
https://arxiv.org/abs/2603.27404 https://mastoxiv.page/@arXiv_csAI_bot/116322999177460352
toXiv_bot_toot
Good Morning #Canada
It's time for our first post in the #CanadaRivers series with #25 in our countdown. The Churchill River, in Atlantic Canada, flows for 856 km into Lake Melville and on to the Atlantic Ocean. It drains a watershed that covers 79,800 km² and has an average flow of 1,580 cubic metres per second.
The power development at Churchill Falls has backed up the river and created the enormous Smallwood Reservoir. Farther upstream, a hydroelectric plant at the outfall from the Menihek Lakes provides power for the former iron-mining town of Schefferville, Québec. With a heavy flow and a large drop from the Labrador Plateau, the river has probably the greatest hydroelectric potential of any in North America. The Churchill Falls Generating Station deserves its own post, as it is a massive 5,428 MW underground hydro power plant.
Don't get used to calling it the Churchill River, as there are recent campaigns to return to its traditional Innu name, Mishta-shipu.
#CanadaIsAwesome #Geography
https://www.cbc.ca/news/canada/newfoundland-labrador/churchill-river-innu-name-change-mishta-shipu-1.6444142
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/5]:
- Can Small Language Models Handle Context-Summarized Multi-Turn Customer-Service QA? A Synthetic D...
Lakshan Cooray, Deshan Sumanathilaka, Pattigadapa Venkatesh Raju
https://arxiv.org/abs/2602.00665 https://mastoxiv.page/@arXiv_csCL_bot/116006686092324902
- SEAD: Self-Evolving Agent for Multi-Turn Service Dialogue
Dai, Gao, Zhang, Wang, Luo, Wang, Wang, Wu, Wang
https://arxiv.org/abs/2602.03548
- OmniRAG-Agent: Agentic Omnimodal Reasoning for Low-Resource Long Audio-Video Question Answering
Yifan Zhu, Xinyu Mu, Tao Feng, Zhonghong Ou, Yuning Gong, Haoran Luo
https://arxiv.org/abs/2602.03707
- GreekMMLU: A Native-Sourced Multitask Benchmark for Evaluating Language Models in Greek
Zhang, Konomi, Xypolopoulos, Divriotis, Skianis, Nikolentzos, Stamou, Shang, Vazirgiannis
https://arxiv.org/abs/2602.05150
- Using LLMs for Knowledge Component-level Correctness Labeling in Open-ended Coding Problems
Zhangqi Duan, Arnav Kankaria, Dhruv Kartik, Andrew Lan
https://arxiv.org/abs/2602.17542 https://mastoxiv.page/@arXiv_csCL_bot/116102514058414603
- MetaState: Persistent Working Memory Enhances Reasoning in Discrete Diffusion Language Models
Kejing Xia, Mingzhe Li, Lixuan Wei, Zhenbang Du, Xiangchi Yuan, Dachuan Shi, Qirui Jin, Wenke Lee
https://arxiv.org/abs/2603.01331 https://mastoxiv.page/@arXiv_csCL_bot/116165314672421581
- A Browser-based Open Source Assistant for Multimodal Content Verification
Milner, Foster, Karmakharm, Razuvayevskaya, Roberts, Porcellini, Teyssou, Bontcheva
https://arxiv.org/abs/2603.02842 https://mastoxiv.page/@arXiv_csCL_bot/116170368271004704
- Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR
Sharma, Shrestha, Poudel, Tiwari, Shrestha, Ghimire, Bal
https://arxiv.org/abs/2603.07554 https://mastoxiv.page/@arXiv_csCL_bot/116204797995674104
- Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions
Mingyang Song, Mao Zheng
https://arxiv.org/abs/2603.09938 https://mastoxiv.page/@arXiv_csCL_bot/116210189810004206
- AgentDrift: Unsafe Recommendation Drift Under Tool Corruption Hidden by Ranking Metrics in LLM Ag...
Zekun Wu, Adriano Koshiyama, Sahan Bulathwela, Maria Perez-Ortiz
https://arxiv.org/abs/2603.12564 https://mastoxiv.page/@arXiv_csCL_bot/116237800898328349
- GhanaNLP Parallel Corpora: Comprehensive Multilingual Resources for Low-Resource Ghanaian Languages
Gyamfi, Azunre, Moore, Budu, Asare, Owusu, Asiamah
https://arxiv.org/abs/2603.13793 https://mastoxiv.page/@arXiv_csCL_bot/116243544688031749
- sebis at ArchEHR-QA 2026: How Much Can You Do Locally? Evaluating Grounded EHR QA on a Single Not...
Ibrahim Ebrar Yurt, Fabian Karl, Tejaswi Choppa, Florian Matthes
https://arxiv.org/abs/2603.13962 https://mastoxiv.page/@arXiv_csCL_bot/116243646346563497
- ExPosST: Explicit Positioning with Adaptive Masking for LLM-Based Simultaneous Machine Translation
Yuzhe Shang, Pengzhi Gao, Yazheng Yang, Jiayao Ma, Wei Liu, Jian Luan, Jinsong Su
https://arxiv.org/abs/2603.14903 https://mastoxiv.page/@arXiv_csCL_bot/116243711232778054
- BanglaSocialBench: A Benchmark for Evaluating Sociopragmatic and Cultural Alignment of LLMs in Ba...
Tanvir Ahmed Sijan, S. M Golam Rifat, Pankaj Chowdhury Partha, Md. Tanjeed Islam, Md. Musfique Anwar
https://arxiv.org/abs/2603.15949 https://mastoxiv.page/@arXiv_csCL_bot/116249122231759766
- EngGPT2: Sovereign, Efficient and Open Intelligence
G. Ciarfaglia, et al.
https://arxiv.org/abs/2603.16430 https://mastoxiv.page/@arXiv_csCL_bot/116249228411487178
- HypeLoRA: Hyper-Network-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning
Bartosz Trojan, Filip Gębala
https://arxiv.org/abs/2603.19278 https://mastoxiv.page/@arXiv_csCL_bot/116277612915482857
- Automatic Analysis of Collaboration Through Human Conversational Data Resources: A Review
Yi Yu, Maria Boritchev, Chloé Clavel
https://arxiv.org/abs/2603.19292 https://mastoxiv.page/@arXiv_csCL_bot/116277620779254916
- Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Langu...
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg, Tuhin Chakrabarty
https://arxiv.org/abs/2603.20957 https://mastoxiv.page/@arXiv_csCL_bot/116283538317671552
- KG-Hopper: Empowering Compact Open LLMs with Knowledge Graph Reasoning via Reinforcement Learning
Shuai Wang, Yinan Yu
https://arxiv.org/abs/2603.21440 https://mastoxiv.page/@arXiv_csCL_bot/116283595007808076
Daleks, in the future, are teaming up with the heads of the other galaxies to overtake the Solar system and destruct time, and the Doctor's only got Steven (a pilot from the 24th century), Katarina (a slave girl from ancient Troy), and a local soldier to help.
The guardian of our Solar system has betrayed us to the Daleks! He's secretly mined 50 years' worth of Taranium from Uranus to power the core of the Dalek Time Destructor.
The Daleks say "Execute" when they have found someone guilty of negligence, vs just when they are a pest to be exterminated.
The Doctor nips in, in disguise, to investigate the council, steals the Taranium and the president's ship, then gets the team stranded on the Solar system's prison planet.
The prisoners try and raid the ship but the Doctor has set a trap and electrocutes the invaders, just in time for them to fix the ship and escape.
Only one prisoner has stowed away on board.
[Then there's an episode still missing, in which apparently Katarina wrestles the prisoner into the airlock and they are both spaced. The Doctor and Steven return to Earth to warn about the Daleks.]
They arrive on Earth (future Earth, remember, but all the computers have giant tape drives and knobs) as an experiment on mice is in progress.
I guess the experiment was to try and make mice turn into negative images screaming in slow-motion and then bounce up and down as they are transmitted through space many light years away. And the Doctor, Steven, and some security guard chasing them get sent along too. With the Daleks following on in their ships.
The Daleks exterminate the mice 😔
There are 8 ft tall invisible creatures on this planet, so the mice were gonna be in trouble anyway. The Doctor beats them off with sticks before being apprehended by Daleks.
[Then there are four still-missing episodes in which the Doctor and Steven steal a Dalek ship, trick the Daleks with a fake Taranium core, meet the Monk who attempts revenge, and celebrate Xmas on a silent film set. All with Daleks giving chase.]
The security guard and the Monk are still with them in the next archived episode, when they are in an Egyptian tomb for some reason and the companions, including the Monk, are captured.
The Doctor faces the Daleks to negotiate his companions' return.
At the hostage exchange the Doctor hands over the core as the ancient Egyptians attack the Daleks. It's a slaughter of course. All the Egyptians die, but they made a good distraction and the Doctor skips off.
He's nicked the directional compass from the Monk's Tardis, so the Monk goes to who knows what random place now.
The Doctor aims to try and materialize the Tardis at the point where the Daleks are likely to use that Taranium, to take over the galaxy and destruct time, but it seems like the Tardis fails.
[And then there's another two still-missing episodes in which the security guard ages to death in a time-mishap, and an entire planet is wiped of all life to thwart the Daleks. The Doctor and Steven lament the senseless deaths of the three companions they cared about.]
Crikey. I guess they used to bounce around in time and space more during a story when it was twelve 20-minute episodes. That prison planet was there only to be landed upon, have the Doctor electrocute some people, and then leave with a stowaway. The 8 ft tall invisible creatures are in like 2 scenes.
Incredible body counts. Just absolute carnage compared to most New Who.
The background of mega-death while the protagonists lament the death of only their own reminds me of the way the contemporary news will focus on one marooned soldier over the deaths of hundreds. Humanize only their own.
The Monk is a good candidate for a return. He's got this great Frankie Howerd-like mischievous campness. He exited this story with a randomizer on his Tardis, vowing revenge.
#watching #tv #doctorWho #TheDaleksMasterPlan