Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@trochee@dair-community.social
2026-01-29 03:11:05
Content warning: NYT Connections puzzle spoilers, plus AI grumbling

Did somebody overrun their token bill, or do you think the NYT operates an on-prem LLM that went offline?
It's 10p Eastern, they should have some results here in this (the only?) semi-benign application of the free-association machine

Screen cap of NYT puzzle results for connections

Today's Top Mistakes

Data for each mistake includes similar guesses. I haven't come up with A.I.-generated descriptions of what players may have been thinking yet.

MISTAKE | PCT. OF PLAYERS WITH THIS MISTAKE | DESCRIPTION
RIB, MOCK, RAG, NEEDLE | 20% | I didn't create a description for this guess.
RAG, SHAM, BUCKET, SOAP | 18% | I didn't create a description for these guesses.
MOTOR, TIRE, NEEDLE, TONEARM | 14% | I didn't create a description for these guesses.
…
@hikingdude@mastodon.social
2025-12-30 14:27:53

Some more photos from my location-scouting trip to the lake shore.
Obviously it was quite foggy and cold!
On location I was already pretty sure that these would be monochromes, but I struggled to really see motifs at first. Luckily I went there just to take photos, not to go hiking.
Sometimes I just stood there for a while, looking around to ... see what's around.
#photography

In the foreground, there is a calm body of water, possibly a lake, reflecting the muted light of the overcast sky. The water's surface is smooth, adding to the tranquil ambiance.

The background is filled with tall, leafless trees, their branches reaching into the foggy sky. The trees are partially covered in frost, adding texture and a wintry feel to the scene. The fog obscures the details of the trees in the distance, creating a sense of depth and mystery.

In the far background, you can fain…
This black-and-white photograph captures a serene and mystical winter landscape. The scene is dominated by a dense fog that blankets the area, creating a soft, dreamy atmosphere.

In the foreground, there is a calm body of water, possibly a lake or river, partially covered by the fog. The water's surface is smooth, reflecting the muted light of the overcast sky.

The background features a row of trees, their branches covered in a light layer of frost or snow. The trees stand tall and still, the…
@hex@kolektiva.social
2026-02-28 10:20:01

As salty as I am about it, there's also another way to think about this. For anyone who still has connections to folks on the right (which is perhaps unlikely for anyone on this server, but I digress), the cult that has consumed them thrives on isolation and grievance.
The words "you were right" have the potential to cut through the programming and open up an opportunity for reconnection. The modern conspiratorial cult of the Right has been built partially around people who were told they were wrong or were crazy. In the vast majority of cases, they were wrong and even when they were right they completely misunderstood why, but we'll skip that for now. Liberals making fun of them (even the times when they definitely earned it) has pushed them further and further into their ideological hole.
The thing about those words, "you were right," in this context is that the way they offer reconnection also requires them to take one little step of betraying their ideology to accept them. So they must choose between maintaining allegiance to a pedophile or finally getting to feel superior after years of living in an illusion of persecution.
Under the ideology of the Right, admitting one is wrong is a weakness. It is admitting defeat. They have to "own the libs" by saying things, things that they know aren't true, in order to feel dominant. But these things are often so absurd that they end up being made fun of, which leaves them feeling even more weak and pathetic, reinforcing their fear and alienation.
Offering what they're looking for can offer a way out, but only if they're willing to start to recognize the thing they've supported for what it is.
And they were right about some things. They were right that Bill Gates was a terrible person. I've had plenty of liberals defend him based on his philanthropy washing, but he's awful and always has been. The Epstein links make that blatant. They intuitively recognized him and didn't trust him, even if they were wildly off base about *how and why* he shouldn't be trusted... Even if their correct mistrust was leveraged into one of the most destructive conspiracy theories ever (vaccine denial and COVID vaccine avoidance).
They were right about Bill Clinton. He was always shady as fuck. Sure, the people who attacked him at the time turned out to be even more shady but that's not the point right now. He was connected to Epstein and that was always creepy as fuck.
And the Epstein thing was an open secret that liberals ignored for a long time. It was seen as some weird thing that right wing nutjobs believed about the Clintons. But it was true. Not all of it, and there has always been an antisemitic element to the right wing interpretation of the Epstein stuff, but his whole pedophile conspiracy was always kind of real.
The whole "Illuminati"/deep state thing is a vast oversimplification, an attempt to make comprehensible an incredibly complex set of interlocking and emergent behaviors. But Epstein did very much want to remake the world, to create a new world order, and he absolutely played a part in it.
The Right wing nutjobs talked about global authoritarianism, Blackhawks flying over American cities, masked men with guns disarming and executing legal gun owners in the streets. That's all happening right now.
The "FEMA concentration camps" are not actually that far off. ICE and FEMA are sister agencies, both under DHS. I'd be more than happy to call that one "close enough" in order to hear some MAGA admit that ICE is, in fact, building concentration camps.
There was always a huge millennialist element to these things. They tended to be connected to "the antichrist." It was absurd, especially for me as someone who no longer identifies as a Christian. But I'll even acquiesce to that to a degree. The "number of the Beast" is 666. That's just the sum of the Hebrew spelling of "Nero." Revelation focuses a lot on Nero coming back to life after his death, a death that involved a head wound, thus the line from Revelation 13:3:
> And I saw one of his heads as if it had been mortally wounded, and his deadly wound was healed. And all the world marveled and followed the beast.
The parallels between Trump and Nero are easy to draw, and Trump's ear wound feels pretty on-the-nose for this. I don't believe in "prophecy" in this way. I think that there are patterns, and useful patterns can become encoded in belief systems. But I will, again, happily call this one "close enough" for anyone on that side willing to also acknowledge it. I'm happy to meet on that common ground, because anyone who accepts it must recognize that their duty is to fight against it.
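As an aside, the 666/Nero arithmetic referenced above is short enough to check by hand. A minimal sketch, assuming the commonly cited Hebrew transliteration "Neron Kesar" (נרון קסר) and standard gematria letter values; the post itself only says "Nero," so the exact spelling used here is an assumption:

```python
# Gematria check for the 666 = "Neron Kesar" reading of Revelation 13:18.
# Letter values are the standard Hebrew numerals; the spelling is the commonly
# cited transliteration, not something asserted in the post above.
letter_values = {"nun": 50, "resh": 200, "vav": 6, "qof": 100, "samekh": 60}

neron = ["nun", "resh", "vav", "nun"]   # נרון -> 50 + 200 + 6 + 50 = 306
kesar = ["qof", "samekh", "resh"]       # קסר  -> 100 + 60 + 200 = 360

print(sum(letter_values[letter] for letter in neron + kesar))  # 666
```

Dropping the final nun (reading "Nero" rather than "Neron") gives 616, the variant number found in some early manuscripts.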
A lot of these correct nuggets are embedded in a framework of religious extremism and antisemitism. The vast majority of the beliefs holding these together are wildly wrong and incredibly toxic. But giving them some room to feel validated, listened to, and understood can give them some room to admit the things that were wrong.
Cult de-programming starts with an opening. People have to talk through their own thoughts, hear their own inconsistencies. Guiding questions can help them untangle these things for themselves. And it all starts by having enough room to feel safe, to not feel cornered, to not feel stupid. Admitting mistakes means being vulnerable, and the MAGA cult is built on fear. It's built on exploiting vulnerability and locking it away.
De-programming takes a long time. It's not easy. It takes patience. But every person who comes out does so with a powerful perspective, a deep understanding, that can be turned back against it. The best people at getting people out of cults are former members. Some of the most dedicated antifa are former fascists who understood their mistakes and dedicate their lives to fixing them.

I remember watching the 2022 documentary "Aşk, Mark ve Ölüm" (Love, Deutschmarks and Death) by Cem Kaya with awe and shame at how much I had not known. The film looks at the difficult lives of Turkish workers in Germany through the music they produced. One widely popular song, sung by Yüksel Özkasap, included the lines: “Germany, everything about you is a lie. You have stolen my life and taken my love.”
The language textbook "Türk…

@cosmos4u@scicomm.xyz
2025-12-30 01:50:34

The #Chandrayaan-3 propulsion module has been discovered again, at first mistaken for an asteroid on an exotic orbit but swiftly ID'd: groups.io/g/mpml/topic/1169850

@simoncox@seocommunity.social
2026-03-29 08:19:26

Worked on a small hobby site yesterday that I had not really touched since refactoring it onto #11ty in 2023 and realised I had made a ton of mistakes and had not finished parts of it off! Still not finished but built better using 3 .json files with data in instead of 60 .md files. And also stopped the build function on Netlify - just deploys the files now and in only 6 seconds from a sync to G…
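For anyone unfamiliar with that Eleventy pattern: moving per-item content out of dozens of markdown files and into a few global data files under `_data/` lets a single template render the whole set. Below is a minimal sketch of the consolidation step, assuming the old .md files use simple flat `key: value` front matter; the paths and field names are hypothetical, not taken from the author's site:

```python
#!/usr/bin/env python3
"""Collapse a folder of .md files with flat front matter into one JSON data file.

Illustration only: assumes front matter is simple "key: value" pairs between
"---" markers, and keeps the remaining text under a "body" key. Paths are
hypothetical placeholders.
"""
import json
from pathlib import Path

SRC_DIR = Path("content/items")        # hypothetical folder of ~60 .md files
OUT_FILE = Path("_data/items.json")    # hypothetical Eleventy global data file

def parse_front_matter(text: str) -> dict:
    """Parse flat 'key: value' front matter; return the fields plus the body."""
    fields = {}
    body = text
    if text.startswith("---"):
        _, fm, body = text.split("---", 2)
        for line in fm.strip().splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
    fields["body"] = body.strip()
    return fields

items = [parse_front_matter(p.read_text(encoding="utf-8"))
         for p in sorted(SRC_DIR.glob("*.md"))]

OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
OUT_FILE.write_text(json.dumps(items, indent=2, ensure_ascii=False), encoding="utf-8")
print(f"Wrote {len(items)} records to {OUT_FILE}")
```

Once the records live in `_data/items.json`, Eleventy exposes them to templates as `items`, so one paginated template can stand in for the sixty separate markdown inputs.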

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:18

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/6]:
- Watermarking Degrades Alignment in Language Models: Analysis and Mitigation
Apurv Verma, NhatHai Phan, Shubhendu Trivedi
arxiv.org/abs/2506.04462 mastoxiv.page/@arXiv_csCL_bot/
- Sensory-Motor Control with Large Language Models via Iterative Policy Refinement
Jônata Tyska Carvalho, Stefano Nolfi
arxiv.org/abs/2506.04867 mastoxiv.page/@arXiv_csAI_bot/
- ICE-ID: A Novel Historical Census Dataset for Longitudinal Identity Resolution
de Carvalho, Popov, Kaatee, Correia, Thórisson, Li, Björnsson, Sigurðarson, Dibangoye
arxiv.org/abs/2506.13792 mastoxiv.page/@arXiv_csAI_bot/
- Feedback-driven recurrent quantum neural network universality
Lukas Gonon, Rodrigo Martínez-Peña, Juan-Pablo Ortega
arxiv.org/abs/2506.16332 mastoxiv.page/@arXiv_quantph_b
- Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs
Cook, Sapora, Ahmadian, Khan, Rocktaschel, Foerster, Ruis
arxiv.org/abs/2506.18777 mastoxiv.page/@arXiv_csAI_bot/
- Stochastic Quantum Spiking Neural Networks with Quantum Memory and Local Learning
Jiechen Chen, Bipin Rajendran, Osvaldo Simeone
arxiv.org/abs/2506.21324 mastoxiv.page/@arXiv_csNE_bot/
- Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
Pierre Boudart (SIERRA), Pierre Gaillard (Thoth), Alessandro Rudi (PSL, DI-ENS, Inria)
arxiv.org/abs/2507.05306 mastoxiv.page/@arXiv_statML_bo
- Characterizing State Space Model and Hybrid Language Model Performance with Long Context
Saptarshi Mitra, Rachid Karami, Haocheng Xu, Sitao Huang, Hyoukjun Kwon
arxiv.org/abs/2507.12442 mastoxiv.page/@arXiv_csAR_bot/
- Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Da...
Ayush Roy, Samin Enam, Jun Xia, Won Hwa Kim, Vishnu Suresh Lokhande
arxiv.org/abs/2507.19575 mastoxiv.page/@arXiv_csCV_bot/
- TASER: Table Agents for Schema-guided Extraction and Recommendation
Nicole Cho, Kirsty Fielding, William Watson, Sumitra Ganesh, Manuela Veloso
arxiv.org/abs/2508.13404 mastoxiv.page/@arXiv_csAI_bot/
- Morphology-Aware Peptide Discovery via Masked Conditional Generative Modeling
Nuno Costa, Julija Zavadlav
arxiv.org/abs/2509.02060 mastoxiv.page/@arXiv_qbioBM_bo
- PCPO: Proportionate Credit Policy Optimization for Aligning Image Generation Models
Jeongjae Lee, Jong Chul Ye
arxiv.org/abs/2509.25774 mastoxiv.page/@arXiv_csCV_bot/
- Multi-hop Deep Joint Source-Channel Coding with Deep Hash Distillation for Semantically Aligned I...
Didrik Bergström, Deniz Gündüz, Onur Günlü
arxiv.org/abs/2510.06868 mastoxiv.page/@arXiv_csIT_bot/
- MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile...
Chengshu Li, et al.
arxiv.org/abs/2510.18316 mastoxiv.page/@arXiv_csRO_bot/
- A Spectral Framework for Graph Neural Operators: Convergence Guarantees and Tradeoffs
Roxanne Holden, Luana Ruiz
arxiv.org/abs/2510.20954 mastoxiv.page/@arXiv_statML_bo
- Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
Bazinska, Mathys, Casucci, Rojas-Carulla, Davies, Souly, Pfister
arxiv.org/abs/2510.22620 mastoxiv.page/@arXiv_csCR_bot/
- Uncertainty Calibration of Multi-Label Bird Sound Classifiers
Raphael Schwinger, Ben McEwen, Vincent S. Kather, René Heinrich, Lukas Rauch, Sven Tomforde
arxiv.org/abs/2511.08261 mastoxiv.page/@arXiv_csSD_bot/
- Two-dimensional RMSD projections for reaction path visualization and validation
Rohit Goswami (Institute IMX and Lab-COSMO, École polytechnique fédérale de Lausanne)
arxiv.org/abs/2512.07329 mastoxiv.page/@arXiv_physicsch
- Distribution-informed Online Conformal Prediction
Dongjian Hu, Junxi Wu, Shu-Tao Xia, Changliang Zou
arxiv.org/abs/2512.07770 mastoxiv.page/@arXiv_statML_bo
- Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss
Ang Lv, Jin Ma, Yiyuan Ma, Siyuan Qiao
arxiv.org/abs/2512.23447 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot

@scottmiller42@mstdn.social
2026-01-29 23:13:58

If a person says they didn't support Harris/Walz because of (progressive reasons), but now admits they didn't realize it would get this bad and that they should have supported Harris in 2024... I can at least respect that they are able to see it was a mistake and now regret their choice.
The progressive nitwits in January 2026 that can take a look at all this and still not regret their choice to snub Harris... I just can't understand it.

It took only a few minutes before everyone in the church knew that another person had been shot. I was sitting with Trygve Olsen, a big man in a wool hat and puffy vest, who lifted his phone to show me a text with the news. It was his 50th birthday, and one of the coldest days of the year. I asked him whether he was doing anything special to celebrate. “What should I be doing?” he replied. “Should I sit at home and open presents? This is where I’m supp…

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:08

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
arxiv.org/abs/2602.16954 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
arxiv.org/abs/2602.17050 mastoxiv.page/@arXiv_csLG_bot/
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
arxiv.org/abs/2602.17550 mastoxiv.page/@arXiv_csLG_bot/
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
arxiv.org/abs/2602.17554 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
arxiv.org/abs/2602.17646 mastoxiv.page/@arXiv_csLG_bot/
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
arxiv.org/abs/2602.19654 mastoxiv.page/@arXiv_csLG_bot/
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
arxiv.org/abs/2405.10385 mastoxiv.page/@arXiv_csCL_bot/
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
arxiv.org/abs/2406.10281 mastoxiv.page/@arXiv_csCR_bot/
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
arxiv.org/abs/2407.00575 mastoxiv.page/@arXiv_csMA_bot/
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
arxiv.org/abs/2407.12506 mastoxiv.page/@arXiv_quantph_b
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
arxiv.org/abs/2410.16106 mastoxiv.page/@arXiv_statML_bo
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
arxiv.org/abs/2412.01283 mastoxiv.page/@arXiv_mathRT_bo
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
arxiv.org/abs/2502.17457 mastoxiv.page/@arXiv_eessSP_bo
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
arxiv.org/abs/2503.04071 mastoxiv.page/@arXiv_statML_bo
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
arxiv.org/abs/2503.06437 mastoxiv.page/@arXiv_csCV_bot/
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
arxiv.org/abs/2504.18310 mastoxiv.page/@arXiv_econGN_bo
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
arxiv.org/abs/2505.06646 mastoxiv.page/@arXiv_eessIV_bo
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
arxiv.org/abs/2505.08125 mastoxiv.page/@arXiv_statML_bo
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
arxiv.org/abs/2505.17645 mastoxiv.page/@arXiv_csCV_bot/
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
arxiv.org/abs/2505.22554 mastoxiv.page/@arXiv_statML_bo
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
arxiv.org/abs/2506.01666 mastoxiv.page/@arXiv_quantph_b
toXiv_bot_toot