Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@hex@kolektiva.social
2026-02-28 10:20:01

As salty as I am about it, there's also another way to think about this. For anyone who still has connections to folks on the right (perhaps unlikely for anyone on this server, but I digress), the cult that has consumed them thrives on isolation and grievance.
The words "you were right" have the potential to cut through the programming and open up an opportunity for reconnection. The modern conspiratorial cult of the Right has been built partially around people who were told they were wrong or were crazy. In the vast majority of cases, they were wrong and even when they were right they completely misunderstood why, but we'll skip that for now. Liberals making fun of them (even the times when they definitely earned it) has pushed them further and further into their ideological hole.
The thing about those words, "you were right," in this context is that the way they offer reconnection also requires them to take one little step of betraying their ideology to accept them. So they must choose between maintaining allegiance to a pedophile or finally getting to feel superior after years of living in an illusion of persecution.
Under the ideology of the Right, admitting one is wrong is a weakness. It is admitting defeat. They have to "own the libs" by saying things they know aren't true in order to feel dominant. But those things are often so absurd that they get mocked for them, leaving them feeling even weaker and more pathetic and reinforcing their fear and alienation.
Offering what they're looking for can offer a way out, but only if they're willing to start to recognize the thing they've supported for what it is.
And they were right about some things. They were right that Bill Gates was a terrible person. I've had plenty of liberals defend him based on his philanthropy washing, but he's awful and always has been. The Epstein links make that blatant. They intuitively recognized him and didn't trust him, even if they were wildly off base about *how and why* he shouldn't be trusted... Even if their correct mistrust was leveraged into one of the most destructive conspiracy theories ever (vaccine denial and COVID vaccine avoidance).
They were right about Bill Clinton. He was always shady as fuck. Sure, the people who attacked him at the time turned out to be even more shady but that's not the point right now. He was connected to Epstein and that was always creepy as fuck.
And the Epstein thing was an open secret that liberals ignored for a long time. It was seen as some weird thing that right wing nutjobs believed about the Clintons. But it was true. Not all of it, and there has always been an antisemitic element to the right wing interpretation of the Epstein stuff, but the whole pedophile conspiracy was always kind of real.
The whole "Illuminati"/deep state thing is a vast oversimplification, an attempt to make comprehensible an incredibly complex set of interlocking and emergent behaviors. But Epstein did very much want to remake the world, to create a new world order, and he absolutely played a part in it.
The Right wing nutjobs talked about global authoritarianism, Blackhawks flying over American cities, masked men with guns disarming and executing legal gun owners in the streets. That's all happening right now.
The "FEMA concentration camps" are not actually that far off. ICE and FEMA are sister agencies, both under DHS. I'd be more than happy to call that one "close enough" in order to hear some MAGA admit that ICE is, in fact, building concentration camps.
There was always a huge millennialist element to these things. They tended to be connected to "the antichrist." It was absurd, especially for me as someone who no longer identifies as a Christian. But I'll even acquiesce to that to a degree. The "number of the Beast" is 666. That's just the gematria sum of the Hebrew spelling of "Nero Caesar" (Neron Qesar). Revelation focuses a lot on Nero coming back to life after his death, a death that involved a head wound, thus the line from Revelation 13:3:
> And I saw one of his heads as if it had been mortally wounded, and his deadly wound was healed. And all the world marveled and followed the beast.
The parallels between Trump and Nero are easy to draw, and Trump's ear wound feels pretty on-the-nose for this. I don't believe in "prophecy" in this way. I think that there are patterns, and useful patterns can become encoded in belief systems. But I will, again, happily call this one "close enough" for anyone on that side willing to also acknowledge it. I'm happy to meet on that common ground, because anyone who accepts it must recognize that their duty is to fight against it.
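For anyone who wants to check the 666 arithmetic for themselves, here is a minimal sketch. The letter values below are the standard Hebrew gematria assignments for the letters of נרון קסר ("Neron Qesar", i.e. "Nero Caesar"); the variable names are just illustrative.

```python
# Standard Hebrew gematria values for the letters appearing in
# "Neron Qesar" (נרון קסר), the Hebrew spelling of "Nero Caesar".
# Final-form letters (like ן) keep the base value of their letter.
GEMATRIA = {
    "נ": 50,   # nun
    "ר": 200,  # resh
    "ו": 6,    # vav
    "ן": 50,   # final nun
    "ק": 100,  # qof
    "ס": 60,   # samekh
}

name = "נרון קסר"
# Sum the letter values, skipping the space between the two words.
total = sum(GEMATRIA[ch] for ch in name if ch.strip())
print(total)  # 666
```

The sum works out to 50 + 200 + 6 + 50 + 100 + 60 + 200 = 666, which is the identification most modern scholars point to.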
A lot of these correct nuggets are embedded in a framework of religious extremism and antisemitism. The vast majority of the beliefs holding these together are wildly wrong and incredibly toxic. But giving people some room to feel validated, listened to, and understood can give them some room to admit the things that were wrong.
Cult de-programming starts with an opening. People have to talk through their own thoughts, hear their own inconsistencies. Guiding questions can help them untangle these things for themselves. And it all starts by having enough room to feel safe, to not feel cornered, to not feel stupid. Admitting mistakes means being vulnerable, and the MAGA cult is built on fear. It's built on exploiting vulnerability and locking it away.
De-programming takes a long time. It's not easy. It takes patience. But every person who comes out does so with a powerful perspective, a deep understanding, that can be turned back against it. The best people at getting people out of cults are former members. Some of the most dedicated antifa are former fascists who understood their mistakes and dedicate their lives to fixing them.

@Techmeme@techhub.social
2026-01-26 22:10:47

A group of YouTubers with a combined 6.2M subscribers adds Snap to a class action lawsuit, alleging the company trained its AI systems on their video content (Sarah Perez/TechCrunch)
techcrunch.com/2026/01/26/yout

@pavelasamsonov@mastodon.social
2026-02-26 16:35:04

Seeking a shortcut to becoming exceptional is the best guarantee of remaining mid forever.
“The organizations believe that through a combination of online learning and AI tutoring, average performers can become exceptional in a compressed amount of time.”

@Mediagazer@mstdn.social
2026-01-27 09:10:39

A look at Plucky Wire, a website for local newsrooms to find and share stories with each other for republication; it is now being used by over 200 publishers (Sarah Scire/Nieman Lab)
niemanlab.org/2026/01/a-scrapp

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:08

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
arxiv.org/abs/2602.16954 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
arxiv.org/abs/2602.17050 mastoxiv.page/@arXiv_csLG_bot/
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
arxiv.org/abs/2602.17550 mastoxiv.page/@arXiv_csLG_bot/
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
arxiv.org/abs/2602.17554 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
arxiv.org/abs/2602.17646 mastoxiv.page/@arXiv_csLG_bot/
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
arxiv.org/abs/2602.19654 mastoxiv.page/@arXiv_csLG_bot/
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
arxiv.org/abs/2405.10385 mastoxiv.page/@arXiv_csCL_bot/
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
arxiv.org/abs/2406.10281 mastoxiv.page/@arXiv_csCR_bot/
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
arxiv.org/abs/2407.00575 mastoxiv.page/@arXiv_csMA_bot/
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
arxiv.org/abs/2407.12506 mastoxiv.page/@arXiv_quantph_b
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
arxiv.org/abs/2410.16106 mastoxiv.page/@arXiv_statML_bo
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
arxiv.org/abs/2412.01283 mastoxiv.page/@arXiv_mathRT_bo
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
arxiv.org/abs/2502.17457 mastoxiv.page/@arXiv_eessSP_bo
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
arxiv.org/abs/2503.04071 mastoxiv.page/@arXiv_statML_bo
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
arxiv.org/abs/2503.06437 mastoxiv.page/@arXiv_csCV_bot/
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
arxiv.org/abs/2504.18310 mastoxiv.page/@arXiv_econGN_bo
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
arxiv.org/abs/2505.06646 mastoxiv.page/@arXiv_eessIV_bo
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
arxiv.org/abs/2505.08125 mastoxiv.page/@arXiv_statML_bo
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
arxiv.org/abs/2505.17645 mastoxiv.page/@arXiv_csCV_bot/
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
arxiv.org/abs/2505.22554 mastoxiv.page/@arXiv_statML_bo
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
arxiv.org/abs/2506.01666 mastoxiv.page/@arXiv_quantph_b
toXiv_bot_toot

Four weeks into a war that was going to take four days, and that has so far cost the US about $30-40bn and Israel $300m a day, Washington is further away from a diplomatic agreement with Iran than it was in May 2025. Not only has the war failed to persuade Iran to agree to dismantle its nuclear programme in the comprehensive and irreversible way the US demanded in a 15-point paper that it tabled on 23 May last year, but Washington is now having to negotiate to reopen…

@arXiv_mathDG_bot@mastoxiv.page
2026-02-27 14:33:34

Replaced article(s) found for math.DG. arxiv.org/list/math.DG/new
[1/1]:
- On the modified $J$-equation
Ryosuke Takahashi
arxiv.org/abs/2207.04953
- Surfaces with flat normal connection in 4-dimensional space forms
Naoya Ando, Ryusei Hatanaka
arxiv.org/abs/2501.15780
- Regularized $\zeta_{\Delta}(1)$ for Polyhedra
Alexey Yu. Kokotov, Dmitrii V. Korikov
arxiv.org/abs/2502.03351 mastoxiv.page/@arXiv_mathDG_bo
- General Chen-Ricci inequalities for Riemannian submersions and Riemannian maps
Ravindra Singh, Kiran Meena, Kapish Chand Meena
arxiv.org/abs/2509.15281 mastoxiv.page/@arXiv_mathDG_bo
- Some configuration results for area-minimizing cones
Yongsheng Zhang
arxiv.org/abs/2510.17240 mastoxiv.page/@arXiv_mathDG_bo
- Real Bers embedding on the line: Fisher-Rao linearization, Schwarzian curvature, and scattering c...
Hy Lam
arxiv.org/abs/2602.07373 mastoxiv.page/@arXiv_mathDG_bo
- Explicit Hamiltonian representations of meromorphic connections and duality from different perspe...
Mohamad Alameddine, Olivier Marchal
arxiv.org/abs/2406.19187 mastoxiv.page/@arXiv_mathph_bo
- An alternative solvability criterion for the Dirichlet problem for the minimal surface equation a...
Ari J. Aiolfi, Giovanni da Silva Nunes, Jaime Ripoll, Lisandra Sauer, Rodrigo Soares
arxiv.org/abs/2508.09806 mastoxiv.page/@arXiv_mathAP_bo
- Gromov's Compactness Theorem for the Intrinsic Timed-Hausdorff Distance
Mauricio Che, Raquel Perales, Christina Sormani
arxiv.org/abs/2510.13069 mastoxiv.page/@arXiv_mathMG_bo
- Nearly optimal spectral gaps for random Belyi surfaces
Yang Shen, Yunhui Wu
arxiv.org/abs/2511.02517 mastoxiv.page/@arXiv_mathSP_bo
toXiv_bot_toot

@Techmeme@techhub.social
2026-01-27 16:25:54

The Allen Institute for AI launches SERA, open-source coding agents including 32B- and 8B-parameter models designed to adapt to private codebases (Kyt Dotson/SiliconANGLE)
siliconangle.com/2026/01/27/ai

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:47

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
arxiv.org/abs/2505.19698 mastoxiv.page/@arXiv_csLG_bot/
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
arxiv.org/abs/2506.08660 mastoxiv.page/@arXiv_csLG_bot/
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
arxiv.org/abs/2506.10167 mastoxiv.page/@arXiv_csLG_bot/
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
arxiv.org/abs/2506.10914 mastoxiv.page/@arXiv_csLG_bot/
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2506.18481 mastoxiv.page/@arXiv_csLG_bot/
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
arxiv.org/abs/2506.21220 mastoxiv.page/@arXiv_csLG_bot/
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
arxiv.org/abs/2507.04448 mastoxiv.page/@arXiv_csLG_bot/
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
arxiv.org/abs/2508.01115 mastoxiv.page/@arXiv_csLG_bot/
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
arxiv.org/abs/2508.13904 mastoxiv.page/@arXiv_csLG_bot/
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
arxiv.org/abs/2508.16815 mastoxiv.page/@arXiv_csLG_bot/
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
arxiv.org/abs/2508.21785 mastoxiv.page/@arXiv_csLG_bot/
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
arxiv.org/abs/2509.15796 mastoxiv.page/@arXiv_csLG_bot/
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
arxiv.org/abs/2509.19975 mastoxiv.page/@arXiv_csLG_bot/
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
arxiv.org/abs/2509.21895 mastoxiv.page/@arXiv_csLG_bot/
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
arxiv.org/abs/2509.22566 mastoxiv.page/@arXiv_csLG_bot/
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
arxiv.org/abs/2509.23115 mastoxiv.page/@arXiv_csLG_bot/
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
arxiv.org/abs/2509.25424 mastoxiv.page/@arXiv_csLG_bot/
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
arxiv.org/abs/2509.26626 mastoxiv.page/@arXiv_csLG_bot/
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
arxiv.org/abs/2510.12402 mastoxiv.page/@arXiv_csLG_bot/
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
arxiv.org/abs/2510.15425 mastoxiv.page/@arXiv_csLG_bot/
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
arxiv.org/abs/2510.18114 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
arxiv.org/abs/2510.22293 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot
