Tootfinder

Opt-in global Mastodon full text search. Join the index!

@gwire@mastodon.social
2026-03-06 13:11:02

News articles complaining about the state of the navy, during a time of conflict, are like news articles that wait until heavy snow to complain about the lack of investment in snow plows.
The UK's previous government took a bet that they wouldn't actually need a full navy before the 2030s - hence the "frigate gap".

@DrPlanktonguy@ecoevo.social
2026-02-07 14:40:03

Weekend #Plankton Factoid 🦠🦐
Some calanoid copepods, including Calanus finmarchicus, spend 6-8 months of the winter in a dormant state as juvenile copepodites deep below the thermocline, at depths of 500-1500 m, which reduces predation and energy use. This depth is determined by the interaction of the water's temperature and salinity with the copepod's content of waxy lipid esters. They have …

image/jpeg a very transparent, torpedo-shaped crustacean is seen with its antennae folded back along the underside of its body. A large clear lipid sac is visible extending dorsally through the body. Photo by M. Runge.
@ascendor@social.tchncs.de
2026-01-07 16:39:00

History is written by the victors. 🤮
whitehouse.gov/j6/

@memeorandum@universeodon.com
2026-02-27 00:56:01

Kristi Noem: Deep State Officials Bugged My Phone, Computer (Neil Munro/Breitbart)
breitbart.com/national-securit
memeorandum.com/260226/p119#a2

@detondev@social.linux.pizza
2026-02-03 15:12:23

deep state humiliation ritual where they make u put white neurospicy late 30s queerdo in ur bio

@Techmeme@techhub.social
2026-01-02 12:55:32

Tech analyst Dan Wang reflects on the Chinese Communist Party vs. Silicon Valley, AI and manufacturing, and how China and the US are building the future (Dan Wang)
danwang.co/2025-letter/

@hex@kolektiva.social
2026-02-28 10:20:01

As salty as I am about it, there's also another way to think about this. For anyone who still has connections to folks on the right (perhaps unlikely for anyone on this server, but I digress), the cult that has consumed them thrives on isolation and grievance.
The words "you were right" have the potential to cut through the programming and open up an opportunity for reconnection. The modern conspiratorial cult of the Right has been built partially around people who were told they were wrong or crazy. In the vast majority of cases they were wrong, and even when they were right they completely misunderstood why, but we'll skip that for now. Liberals making fun of them (even the times when they definitely earned it) has pushed them further and further into their ideological hole.
The thing about those words, "you were right," in this context is that the way they offer reconnection also requires them to take one little step of betraying their ideology to accept them. So they must choose between maintaining allegiance to a pedophile or finally getting to feel superior after years of living in an illusion of persecution.
Under the ideology of the Right, admitting one is wrong is a weakness. It is admitting defeat. They have to "own the libs" by saying things they know aren't true in order to feel dominant. But these things are often so absurd that they end up being made fun of, leaving them feeling even weaker and more pathetic, reinforcing their fear and alienation.
Offering what they're looking for can offer a way out, but only if they're willing to start to recognize the thing they've supported for what it is.
And they were right about some things. They were right that Bill Gates was a terrible person. I've had plenty of liberals defend him based on his philanthropy washing, but he's awful and always has been. The Epstein links make that blatant. They intuitively recognized him and didn't trust him, even if they were wildly off base about *how and why* he shouldn't be trusted... Even if their correct mistrust was leveraged into one of the most destructive conspiracy theories ever (vaccine denial and COVID vaccine avoidance).
They were right about Bill Clinton. He was always shady as fuck. Sure, the people who attacked him at the time turned out to be even more shady but that's not the point right now. He was connected to Epstein and that was always creepy as fuck.
And the Epstein thing was an open secret that liberals ignored for a long time. It was seen as some weird thing that right wing nutjobs believed about the Clintons. But it was true. Not all of it, and there has always been an antisemitic element to the right wing interpretation of the Epstein stuff, but the whole pedophile conspiracy was always kind of real.
The whole "Illuminati"/deep state thing is a vast oversimplification, an attempt to make comprehensible an incredibly complex set of interlocking and emergent behaviors. But Epstein did very much want to remake the world, to create a new world order, and he absolutely played a part in it.
The Right wing nutjobs talked about global authoritarianism, Blackhawks flying over American cities, masked men with guns disarming and executing legal gun owners in the streets. That's all happening right now.
The "FEMA concentration camps" are not actually that far off. ICE and FEMA are sister agencies, both under DHS. I'd be more than happy to call that one "close enough" in order to hear some MAGA admit that ICE is, in fact, building concentration camps.
There was always a huge millennialist element to these things. They tended to be connected to "the antichrist." It was absurd, especially for me as someone who no longer identifies as a Christian. But I'll even acquiesce to that to a degree. The "number of the Beast" is 666, which is just the sum of the letter values of the Hebrew spelling of "Nero Caesar." Revelation focuses a lot on Nero coming back to life after his death. His death involved a head wound, hence the line from Revelation 13:3:
> And I saw one of his heads as if it had been mortally wounded, and his deadly wound was healed. And all the world marveled and followed the beast.
The parallels between Trump and Nero are easy to draw, and Trump's ear wound feels pretty on-the-nose for this. I don't believe in "prophecy" in this way. I think that there are patterns, and useful patterns can become encoded in belief systems. But I will, again, happily call this one "close enough" for anyone on that side willing to also acknowledge it. I'm happy to meet on that common ground, because anyone who accepts it must recognize that their duty is to fight against it.
A lot of these correct nuggets are embedded in a framework of religious extremism and antisemitism. The vast majority of the beliefs holding these together are wildly wrong and incredibly toxic. But giving people some room to feel validated, listened to, and understood can give them some room to admit the things they got wrong.
Cult de-programming starts with an opening. People have to talk through their own thoughts, hear their own inconsistencies. Guiding questions can help them untangle these things for themselves. And it all starts by having enough room to feel safe, to not feel cornered, to not feel stupid. Admitting mistakes means being vulnerable, and the MAGA cult is built on fear. It's built on exploiting vulnerability and locking it away.
De-programming takes a long time. It's not easy. It takes patience. But every person who comes out does so with a powerful perspective, a deep understanding, that can be turned back against it. The best people at getting people out of cults are former members. Some of the most dedicated antifa are former fascists who understood their mistakes and dedicate their lives to fixing them.

@servelan@newsie.social
2025-12-27 00:39:13

"the Journal argued that the recent decision by the Jim Beam distillery to halt production at its Claremont, Kentucky facility for all of 2026 can be chalked up to Trump's tariffs."
Trump is actively causing 'harm' to this deep-red state's key industry: WSJ - Alternet.org
alternet.org/trump-harm-red-st

A Democrat won a state legislative special election in a district that President Trump carried by 17 percentage points, unnerving Republicans in Texas and beyond. Taylor Rehmet, a local union leader and first-time candidate, defeated the Republican, Leigh Wambsganss, by double digits, 57 to 43, in the historically conservative district.

@Techmeme@techhub.social
2026-02-14 22:36:04

India approves a $1.1B state-backed VC fund to finance high-risk areas like AI and advanced manufacturing, doubling down on an effort that debuted in 2016 (Jagmeet Singh/TechCrunch)
techcrunch.com/2026/02/14/indi

@michabbb@social.vivaldi.net
2026-02-17 23:56:46

- Customer vetting done in minutes, not weeks
- Synthesizes 50 data sources into one clear, actionable picture
- Delivers finished output: documents, decks, webpages, even full apps
📊 It's not just fast. Grep is state-of-the-art — top-ranked on the Deep Research Benchmark — so you get accuracy when it actually matters.
Whether you're a founder validating a market, an investor vetting a deal, or a strategist tracking competitors: serious work deserves serious …

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:18

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[5/6]:
- Watermarking Degrades Alignment in Language Models: Analysis and Mitigation
Apurv Verma, NhatHai Phan, Shubhendu Trivedi
arxiv.org/abs/2506.04462 mastoxiv.page/@arXiv_csCL_bot/
- Sensory-Motor Control with Large Language Models via Iterative Policy Refinement
Jônata Tyska Carvalho, Stefano Nolfi
arxiv.org/abs/2506.04867 mastoxiv.page/@arXiv_csAI_bot/
- ICE-ID: A Novel Historical Census Dataset for Longitudinal Identity Resolution
de Carvalho, Popov, Kaatee, Correia, Thórisson, Li, Björnsson, Sigurðarson, Dibangoye
arxiv.org/abs/2506.13792 mastoxiv.page/@arXiv_csAI_bot/
- Feedback-driven recurrent quantum neural network universality
Lukas Gonon, Rodrigo Martínez-Peña, Juan-Pablo Ortega
arxiv.org/abs/2506.16332 mastoxiv.page/@arXiv_quantph_b
- Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs
Cook, Sapora, Ahmadian, Khan, Rocktaschel, Foerster, Ruis
arxiv.org/abs/2506.18777 mastoxiv.page/@arXiv_csAI_bot/
- Stochastic Quantum Spiking Neural Networks with Quantum Memory and Local Learning
Jiechen Chen, Bipin Rajendran, Osvaldo Simeone
arxiv.org/abs/2506.21324 mastoxiv.page/@arXiv_csNE_bot/
- Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
Pierre Boudart (SIERRA), Pierre Gaillard (Thoth), Alessandro Rudi (PSL, DI-ENS, Inria)
arxiv.org/abs/2507.05306 mastoxiv.page/@arXiv_statML_bo
- Characterizing State Space Model and Hybrid Language Model Performance with Long Context
Saptarshi Mitra, Rachid Karami, Haocheng Xu, Sitao Huang, Hyoukjun Kwon
arxiv.org/abs/2507.12442 mastoxiv.page/@arXiv_csAR_bot/
- Is Exchangeability better than I.I.D to handle Data Distribution Shifts while Pooling Data for Da...
Ayush Roy, Samin Enam, Jun Xia, Won Hwa Kim, Vishnu Suresh Lokhande
arxiv.org/abs/2507.19575 mastoxiv.page/@arXiv_csCV_bot/
- TASER: Table Agents for Schema-guided Extraction and Recommendation
Nicole Cho, Kirsty Fielding, William Watson, Sumitra Ganesh, Manuela Veloso
arxiv.org/abs/2508.13404 mastoxiv.page/@arXiv_csAI_bot/
- Morphology-Aware Peptide Discovery via Masked Conditional Generative Modeling
Nuno Costa, Julija Zavadlav
arxiv.org/abs/2509.02060 mastoxiv.page/@arXiv_qbioBM_bo
- PCPO: Proportionate Credit Policy Optimization for Aligning Image Generation Models
Jeongjae Lee, Jong Chul Ye
arxiv.org/abs/2509.25774 mastoxiv.page/@arXiv_csCV_bot/
- Multi-hop Deep Joint Source-Channel Coding with Deep Hash Distillation for Semantically Aligned I...
Didrik Bergström, Deniz Gündüz, Onur Günlü
arxiv.org/abs/2510.06868 mastoxiv.page/@arXiv_csIT_bot/
- MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile...
Chengshu Li, et al.
arxiv.org/abs/2510.18316 mastoxiv.page/@arXiv_csRO_bot/
- A Spectral Framework for Graph Neural Operators: Convergence Guarantees and Tradeoffs
Roxanne Holden, Luana Ruiz
arxiv.org/abs/2510.20954 mastoxiv.page/@arXiv_statML_bo
- Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents
Bazinska, Mathys, Casucci, Rojas-Carulla, Davies, Souly, Pfister
arxiv.org/abs/2510.22620 mastoxiv.page/@arXiv_csCR_bot/
- Uncertainty Calibration of Multi-Label Bird Sound Classifiers
Raphael Schwinger, Ben McEwen, Vincent S. Kather, René Heinrich, Lukas Rauch, Sven Tomforde
arxiv.org/abs/2511.08261 mastoxiv.page/@arXiv_csSD_bot/
- Two-dimensional RMSD projections for reaction path visualization and validation
Rohit Goswami (Institute IMX and Lab-COSMO, École polytechnique fédérale de Lausanne)
arxiv.org/abs/2512.07329 mastoxiv.page/@arXiv_physicsch
- Distribution-informed Online Conformal Prediction
Dongjian Hu, Junxi Wu, Shu-Tao Xia, Changliang Zou
arxiv.org/abs/2512.07770 mastoxiv.page/@arXiv_statML_bo
- Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss
Ang Lv, Jin Ma, Yiyuan Ma, Siyuan Qiao
arxiv.org/abs/2512.23447 mastoxiv.page/@arXiv_csCL_bot/
toXiv_bot_toot

@lpryszcz@genomic.social
2026-01-14 19:43:58

assumptionblanket.blogspot.com

@detondev@social.linux.pizza
2026-02-19 21:30:15

Stop speaking on Venezuela like they got cold hands clasped together in new child rape clubs as Deep State Siri lists potential 0.0001% profit gains off their oil. It's not financially justifiable at the current oil price, or even a slightly higher one, to go get that oil. None of these execs have anything close to the brainpower it takes to deliberately orchestrate American financial dominance. Elon wouldn't be a trillionaire if there were still firm consequences beyond base physical revolt. It…

Clay Parikh does not want to talk about who he is working with inside the White House. However, in an extraordinary and extensive conversation with TPM, he had a lot to say about many other things, including his concerns that the “deep state” and a shadowy “cabal” are influencing our elections. “When they say ‘deep state,’ it’s deep and it’s everywhere,” Parikh said in a phone call late Tuesday evening. Parikh, who, according to court documents unsealed th…

@memeorandum@universeodon.com
2026-02-08 16:05:42

The Secret History of the Deep State (James Rosen/New York Times)
nytimes.com/2026/02/08/opinion
memeorandum.com/260208/p22#a26

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:41:01

PIME: Prototype-based Interpretable MCTS-Enhanced Brain Network Analysis for Disorder Diagnosis
Kunyu Zhang, Yanwu Yang, Jing Zhang, Xiangjie Shi, Shujian Yu
arxiv.org/abs/2602.21046 arxiv.org/pdf/2602.21046 arxiv.org/html/2602.21046
arXiv:2602.21046v1 Announce Type: new
Abstract: Recent deep learning methods for fMRI-based diagnosis have achieved promising accuracy by modeling functional connectivity networks. However, standard approaches often struggle with noisy interactions, and conventional post-hoc attribution methods may lack reliability, potentially highlighting dataset-specific artifacts. To address these challenges, we introduce PIME, an interpretable framework that bridges intrinsic interpretability with minimal-sufficient subgraph optimization by integrating prototype-based classification and consistency training with structural perturbations during learning. This encourages a structured latent space and enables Monte Carlo Tree Search (MCTS) under a prototype-consistent objective to extract compact minimal-sufficient explanatory subgraphs post-training. Experiments on three benchmark fMRI datasets demonstrate that PIME achieves state-of-the-art performance. Furthermore, by constraining the search space via learned prototypes, PIME identifies critical brain regions that are consistent with established neuroimaging findings. Stability analysis shows 90% reproducibility and consistent explanations across atlases.
toXiv_bot_toot
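The prototype-based classification that the PIME abstract describes can be illustrated with a minimal sketch; the negative-squared-distance logits and all names below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def prototype_logits(z, prototypes):
    """Score a latent embedding z against learned class prototypes.

    Similarity is the negative squared Euclidean distance, a common
    choice in prototype-based classifiers (an assumption here, not
    taken from the PIME paper).
    """
    d2 = ((prototypes - z) ** 2).sum(axis=1)
    return -d2

# Toy 2-D latent space with one prototype per class.
z = np.array([0.9, 0.1])
protos = np.array([[1.0, 0.0],   # class 0 prototype
                   [0.0, 1.0]])  # class 1 prototype
logits = prototype_logits(z, protos)
pred = int(np.argmax(logits))    # embedding lies closest to class 0
```

A consistency objective in this spirit would then compare such logits before and after structural perturbations of the input graph.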

@arXiv_qbioNC_bot@mastoxiv.page
2025-12-11 08:29:01

NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang
arxiv.org/abs/2512.09524 arxiv.org/pdf/2512.09524 arxiv.org/html/2512.09524
arXiv:2512.09524v1 Announce Type: new
Abstract: Neural decoding, a critical component of Brain-Computer Interface (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, the in-depth exploration of model architectures remains underexplored, despite its proven effectiveness in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with the basic architecture study, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from macro- to micro-level, achieving improvements in performance at each step. The exploration process and model validations take over 5,000 experiments spanning three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. Our code and scripts are available at github.com/Galaxy-Dawn/NeuroSk.
toXiv_bot_toot
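The CNN-2D view the NeuroSketch abstract favors treats a neural recording window as a (channels × time) image; a minimal sketch of one valid 2-D cross-correlation over toy data, with the sizes and averaging kernel as assumptions:

```python
import numpy as np

def conv2d_valid(x, k):
    # Minimal 2-D valid cross-correlation: slide kernel k over x and
    # sum the elementwise products at each position.
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + h, j:j + w] * k).sum()
    return out

x = np.arange(12.0).reshape(3, 4)  # 3 electrodes x 4 time steps (toy data)
k = np.ones((2, 2)) / 4            # 2x2 averaging kernel
feat = conv2d_valid(x, k)          # feature map of shape (2, 3)
```

The kernel spans both the spatial axis (electrodes) and the temporal axis at once, which is the property the abstract credits for CNN-2D's edge over 1-D alternatives.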

@detondev@social.linux.pizza
2025-12-10 14:39:26

Kimi Onoda, Japan's new Minister of State for Economic Security, is a 43-year-old, half-Irish, ex-game-industry PR femcel with an extensive history of defending her exclusive attraction to anime boys on twitter

I don't think it's twisted at all.

I'm a woman who likes men, and I'm not interested in 3D men.

That's all.
I apologize for rambling on. I just couldn't stay silent... I really wish I had more allies within the party...

From here on, this is completely my personal opinion, but fundamentally, people who truly love 2D wouldn't touch 3D at all. I myself have absolutely no interest in 3D and consider it out of bounds. Maybe that kind of feeling is something only those involved can understand.
"Hurry up and get married," "Have kids" I've been told this by voters since my 20s, but even at 40, I still sigh every time these words are thrown at me. At what age will I finally be free of this?

In the 3D world, I'm married to my country, and besides, I've said my private life is 2D-exclusive, haven't I!! I'll say it over and over: I'm 2D-exclusive!!
I've been saying this for a while now, but I don't consider 3D (real-life) people as romantic prospects. I'm dead serious, not joking. For me, the very act of someone seeing the "possibility of marriage" in me is inherently uncomfortable (quoted from a reply)-it's the same as if you were to suggest to a gay person that they marry someone of the opposite sex... If you can understand it that way, that would help. This isn't about sexual harassment or anything like that; it's a deep-seated discomf…

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:21

On Electric Vehicle Energy Demand Forecasting and the Effect of Federated Learning
Andreas Tritsarolis, Gil Sampaio, Nikos Pelekis, Yannis Theodoridis
arxiv.org/abs/2602.20782 arxiv.org/pdf/2602.20782 arxiv.org/html/2602.20782
arXiv:2602.20782v1 Announce Type: new
Abstract: The wide spread of new energy resources, smart devices, and demand side management strategies has motivated several analytics operations, from infrastructure load modeling to user behavior profiling. Energy Demand Forecasting (EDF) of Electric Vehicle Supply Equipments (EVSEs) is one of the most critical operations for ensuring efficient energy management and sustainability, since it enables utility providers to anticipate energy/power demand, optimize resource allocation, and implement proactive measures to improve grid reliability. However, accurate EDF is a challenging problem due to external factors, such as the varying user routines, weather conditions, driving behaviors, unknown state of charge, etc. Furthermore, as concerns and restrictions about privacy and sustainability have grown, training data has become increasingly fragmented, resulting in distributed datasets scattered across different data silos and/or edge devices, calling for federated learning solutions. In this paper, we investigate different well-established time series forecasting methodologies to address the EDF problem, from statistical methods (the ARIMA family) to traditional machine learning models (such as XGBoost) and deep neural networks (GRU and LSTM). We provide an overview of these methods through a performance comparison over four real-world EVSE datasets, evaluated under both centralized and federated learning paradigms, focusing on the trade-offs between forecasting fidelity, privacy preservation, and energy overheads. Our experimental results demonstrate, on the one hand, the superiority of gradient boosted trees (XGBoost) over statistical and NN-based models in both prediction accuracy and energy efficiency and, on the other hand, an insight that Federated Learning-enabled models balance these factors, offering a promising direction for decentralized energy demand forecasting.
toXiv_bot_toot
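The federated paradigm the abstract evaluates is typically realized with FedAvg, a sample-weighted average of per-site model parameters; a minimal sketch with toy parameter vectors and made-up site sample counts:

```python
import numpy as np

def fedavg(local_weights, num_samples):
    # FedAvg: average each site's parameter vector, weighted by how
    # many training samples that site contributed.
    w = np.asarray(num_samples, dtype=float)
    w /= w.sum()
    stacked = np.stack(local_weights)          # (sites, params)
    return (w[:, None] * stacked).sum(axis=0)

# Two charging sites with toy 2-parameter models; the second has 3x the data.
sites = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_w = fedavg(sites, [1, 3])
```

Each site's raw charging data stays local; only the parameters travel, which is the privacy trade-off the abstract weighs against forecasting fidelity.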

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of ``Deep Variography'', where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
toXiv_bot_toot
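The geostatistical bias the abstract describes can be sketched as an additive distance penalty on the attention logits; the additive form, the linear penalty, and the parameter names below are assumptions, not the paper's exact kernel:

```python
import numpy as np

def spatial_attention(q, k, v, dist, length_scale=1.0):
    # Scaled dot-product attention plus a stationary spatial prior:
    # logits are penalized by pairwise sensor distance, so attention
    # favors spatially proximal interactions.
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) - dist / length_scale
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# One query sensor attending over two sensors: one nearby, one far away.
q = np.zeros((1, 2))
k = np.zeros((2, 2))
v = np.array([[1.0], [0.0]])
dist = np.array([[0.0, 10.0]])
out = spatial_attention(q, k, v, dist)  # dominated by the nearby sensor
```

In the paper's framing the length scale would be a learnable variogram-style parameter recovered end-to-end by backpropagation, the "Deep Variography" effect.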

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 12:33:48

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/3]:
- Functional Continuous Decomposition
Teymur Aghayev
arxiv.org/abs/2602.20857 mastoxiv.page/@arXiv_eessSP_bo
- SpatiaLQA: A Benchmark for Evaluating Spatial Logical Reasoning in Vision-Language Models
Xie, Zhang, Shan, Zhu, Tang, Wei, Song, Wan, Song
arxiv.org/abs/2602.20901 mastoxiv.page/@arXiv_csCV_bot/
- Some Simple Economics of AGI
Christian Catalini, Xiang Hui, Jane Wu
arxiv.org/abs/2602.20946 mastoxiv.page/@arXiv_econGN_bo
- Multimodal MRI Report Findings Supervised Brain Lesion Segmentation with Substructures
Yubin Ge, Yongsong Huang, Xiaofeng Liu
arxiv.org/abs/2602.20994 mastoxiv.page/@arXiv_eessIV_bo
- MIP Candy: A Modular PyTorch Framework for Medical Image Processing
Tianhao Fu, Yucheng Chen
arxiv.org/abs/2602.21033 mastoxiv.page/@arXiv_csCV_bot/
- Empirically Calibrated Conditional Independence Tests
Milleno Pan, Antoine de Mathelin, Wesley Tansey
arxiv.org/abs/2602.21036 mastoxiv.page/@arXiv_statME_bo
- Is Multi-Distribution Learning as Easy as PAC Learning: Sharp Rates with Bounded Label Noise
Rafael Hanashiro, Abhishek Shetty, Patrick Jaillet
arxiv.org/abs/2602.21039 mastoxiv.page/@arXiv_statML_bo
- Position-Aware Sequential Attention for Accurate Next Item Recommendations
Timur Nabiev, Evgeny Frolov
arxiv.org/abs/2602.21052 mastoxiv.page/@arXiv_csIR_bot/
- Motivation is Something You Need
Mehdi Acheli, Walid Gaaloul
arxiv.org/abs/2602.21064 mastoxiv.page/@arXiv_csAI_bot/
- An Enhanced Projection Pursuit Tree Classifier with Visual Methods for Assessing Algorithmic Impr...
Natalia da Silva, Dianne Cook, Eun-Kyung Lee
arxiv.org/abs/2602.21130 mastoxiv.page/@arXiv_statML_bo
- Complexity of Classical Acceleration for $\ell_1$-Regularized PageRank
Kimon Fountoulakis, David Martínez-Rubio
arxiv.org/abs/2602.21138 mastoxiv.page/@arXiv_mathOC_bo
- LUMEN: Longitudinal Multi-Modal Radiology Model for Prognosis and Diagnosis
Jiang, Yang, Nath, Parida, Kulkarni, Xu, Xu, Anwar, Roth, Linguraru
arxiv.org/abs/2602.21142 mastoxiv.page/@arXiv_csCV_bot/
- A Benchmark for Deep Information Synthesis
Debjit Paul, et al.
arxiv.org/abs/2602.21143 mastoxiv.page/@arXiv_csAI_bot/
- Scaling State-Space Models on Multiple GPUs with Tensor Parallelism
Anurag Dutt, Nimit Shah, Hazem Masarani, Anshul Gandhi
arxiv.org/abs/2602.21144 mastoxiv.page/@arXiv_csDC_bot/
- Not Just How Much, But Where: Decomposing Epistemic Uncertainty into Per-Class Contributions
Mame Diarra Toure, David A. Stephens
arxiv.org/abs/2602.21160 mastoxiv.page/@arXiv_statML_bo
- Aletheia tackles FirstProof autonomously
Tony Feng, et al.
arxiv.org/abs/2602.21201 mastoxiv.page/@arXiv_csAI_bot/
- Squint: Fast Visual Reinforcement Learning for Sim-to-Real Robotics
Abdulaziz Almuzairee, Henrik I. Christensen
arxiv.org/abs/2602.21203 mastoxiv.page/@arXiv_csRO_bot/
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:35

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
arxiv.org/abs/2506.10892 mastoxiv.page/@arXiv_csLG_bot/
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
arxiv.org/abs/2506.20494 mastoxiv.page/@arXiv_csLG_bot/
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
arxiv.org/abs/2506.20584 mastoxiv.page/@arXiv_csLG_bot/
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
arxiv.org/abs/2506.22376 mastoxiv.page/@arXiv_csLG_bot/
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
arxiv.org/abs/2507.18242 mastoxiv.page/@arXiv_csLG_bot/
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
arxiv.org/abs/2508.17702 mastoxiv.page/@arXiv_csLG_bot/
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
arxiv.org/abs/2508.19009 mastoxiv.page/@arXiv_csLG_bot/
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
arxiv.org/abs/2508.19011 mastoxiv.page/@arXiv_csLG_bot/
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
arxiv.org/abs/2508.20705 mastoxiv.page/@arXiv_csLG_bot/
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
arxiv.org/abs/2509.25977 mastoxiv.page/@arXiv_csLG_bot/
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
arxiv.org/abs/2510.01384 mastoxiv.page/@arXiv_csLG_bot/
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
arxiv.org/abs/2510.09775 mastoxiv.page/@arXiv_csLG_bot/
- A Second-Order SpikingSSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
arxiv.org/abs/2510.14386 mastoxiv.page/@arXiv_csLG_bot/
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
arxiv.org/abs/2510.16882 mastoxiv.page/@arXiv_csLG_bot/
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
arxiv.org/abs/2510.23117 mastoxiv.page/@arXiv_csLG_bot/
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
arxiv.org/abs/2510.23501 mastoxiv.page/@arXiv_csLG_bot/
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
arxiv.org/abs/2511.00040 mastoxiv.page/@arXiv_csLG_bot/
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
arxiv.org/abs/2511.04469 mastoxiv.page/@arXiv_csLG_bot/
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
arxiv.org/abs/2511.09902 mastoxiv.page/@arXiv_csLG_bot/
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
arxiv.org/abs/2511.11571 mastoxiv.page/@arXiv_csLG_bot/
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
arxiv.org/abs/2511.12817 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot