Tootfinder

Opt-in global Mastodon full-text search. Join the index!

Clay Parikh does not want to talk about who he is working with inside the White House.
However, in an extraordinary and extensive conversation with TPM, he had a lot to say about many other things,
including his concerns that the “deep state”
and a shadowy “cabal” are influencing our elections. 
“When they say ‘deep state,’ it’s deep and it’s everywhere,”
Parikh said in a phone call late Tuesday evening. 
Parikh, who, according to court documents unsealed th…

@detondev@social.linux.pizza
2025-12-10 14:39:26

Kimi Onoda, Japan's new Minister of State for Economic Security, is a 43-year-old half-Irish ex-game-industry PR femcel with an extensive history of defending her exclusive attraction to anime boys on Twitter

I don't think it's twisted at all.

I'm a woman who likes men, and I'm not interested in 3D men.

That's all.
I apologize for rambling on. I just couldn't stay silent... I really wish I had more allies within the party...

From here on, this is completely my personal opinion, but fundamentally, people who truly love 2D wouldn't touch 3D at all. I myself have absolutely no interest in 3D and consider it out of bounds. Maybe that kind of feeling is something only those involved can understand.
"Hurry up and get married." "Have kids." I've been told this by voters since my 20s, but even at 40, I still sigh every time these words are thrown at me. At what age will I finally be free of this?

In the 3D world, I'm married to my country, and besides, I've said my private life is 2D-exclusive, haven't I!! I'll say it over and over: I'm 2D-exclusive!!
I've been saying this for a while now, but I don't consider 3D (real-life) people as romantic prospects. I'm dead serious, not joking. For me, the very act of someone seeing the "possibility of marriage" in me is inherently uncomfortable (quoted from a reply); it's the same as if you were to suggest to a gay person that they marry someone of the opposite sex... If you can understand it that way, that would help. This isn't about sexual harassment or anything like that; it's a deep-seated discomf…
@arXiv_qbioNC_bot@mastoxiv.page
2025-12-11 08:29:01

NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang
arxiv.org/abs/2512.09524 arxiv.org/pdf/2512.09524 arxiv.org/html/2512.09524
arXiv:2512.09524v1 Announce Type: new
Abstract: Neural decoding, a critical component of Brain-Computer Interfaces (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, model architectures themselves remain underexplored, despite their proven importance in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with a study of basic architectures, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from the macro- to the micro-level, achieving performance improvements at each step. The exploration process and model validations comprise over 5,000 experiments spanning three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. Our code and scripts are available at github.com/Galaxy-Dawn/NeuroSk.
toXiv_bot_toot
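The CNN-2D approach the abstract describes (a 2D convolution over a channels × time window of brain signals, followed by pooling and a linear readout) can be illustrated with a minimal numpy sketch. This is a toy example only, not the NeuroSketch architecture; every shape, filter size, and variable name here is invented for illustration.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2D convolution: x (H, W), kernels (n, kh, kw) -> (n, H', W')."""
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for k in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def decode(eeg, kernels, w, b):
    """Tiny CNN-2D decoder: conv -> ReLU -> global average pool -> linear logits."""
    feat = np.maximum(conv2d(eeg, kernels), 0.0)  # (n_filters, H', W')
    pooled = feat.mean(axis=(1, 2))               # global average pooling
    return pooled @ w + b                         # class logits

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 32))        # 8 channels x 32 time steps
kernels = rng.standard_normal((4, 3, 5))  # 4 filters spanning channels x time
w = rng.standard_normal((4, 2))           # readout for 2 decoding classes
b = np.zeros(2)
logits = decode(eeg, kernels, w, b)
print(logits.shape)  # (2,)
```

The 2D kernel spanning both the channel (spatial) and time (temporal) axes is what distinguishes this from a purely temporal 1D convolution, which matches the abstract's framing of CNN-2D effectiveness in temporal and spatial terms.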

@Techmeme@techhub.social
2025-12-01 09:30:45

Omnicom's DM9 returned three Cannes awards after an NC state senator and CNN Brazil found the company had used AI to manipulate their content and used it for ads (Emmanuel Felton/Washington Post)
washingtonpost.com/nation/2025

@memeorandum@universeodon.com
2026-02-08 16:05:42

The Secret History of the Deep State (James Rosen/New York Times)
nytimes.com/2026/02/08/opinion
memeorandum.com/260208/p22#a26

@Mediagazer@mstdn.social
2025-12-01 02:25:35

Omnicom's DM9 returned three Cannes awards after an NC state senator and CNN Brazil found the company had used AI to manipulate their content and used it for ads (Emmanuel Felton/Washington Post)
washingtonpost.com/nation/2025

@PaulWermer@sfba.social
2025-12-02 14:45:13

How soon before LLMs replace the laborers?
Age of the ‘scam state’: how an illicit, multibillion-dollar industry has taken root in south-east Asia
theguardian.com/technology/202

@DrPlanktonguy@ecoevo.social
2026-02-07 14:40:03

Weekend #Plankton Factoid 🦠🦐
Some calanoid copepods, including Calanus finmarchicus, spend 6-8 months of the winter in a dormancy state as juvenile copepodites, deep under the thermocline at depths of 500-1500 m, which reduces predation and energy use. This depth is determined by the interaction of water temperature and salinity with the copepod's content of waxy-ester lipids. They have …

image/jpeg A very transparent, torpedo-shaped crustacean is seen with its antennae folded underneath its body. A large clear lipid sac is visible extending dorsally through the body. Photo by M. Runge.

@ascendor@social.tchncs.de
2026-01-07 16:39:00

History is written by the victors. 🤮
whitehouse.gov/j6/

@gwire@mastodon.social
2025-12-06 20:39:40

If anything, one piece of evidence supporting Liz Truss's thesis about the decline of Britain, is the national indignity of a former Prime Minister appearing on Rumble to rant about the deep state wokes.
(She *was* Prime Minister; there are citations for this on Wikipedia.)

@servelan@newsie.social
2025-12-27 00:39:13

"the Journal argued that the recent decision by the Jim Beam distillery to halt production at its Clermont, Kentucky facility for all of 2026 can be chalked up to Trump's tariffs."
Trump is actively causing 'harm' to this deep-red state's key industry: WSJ - Alternet.org
alternet.org/trump-harm-red-st

@Techmeme@techhub.social
2026-01-02 12:55:32

Tech analyst Dan Wang reflects on the Chinese Communist Party vs. Silicon Valley, AI and manufacturing, and how China and the US are building the future (Dan Wang)
danwang.co/2025-letter/

@detondev@social.linux.pizza
2026-02-03 15:12:23

deep state humiliation ritual where they make u put white neurospicy late 30s queerdo in ur bio

@lpryszcz@genomic.social
2026-01-14 19:43:58

assumptionblanket.blogspot.com

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:50

Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
arxiv.org/abs/2512.17696 arxiv.org/pdf/2512.17696 arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
toXiv_bot_toot
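The core mechanism the abstract describes, biasing self-attention scores with a geostatistical covariance kernel over sensor distances, can be sketched in a few lines of numpy. This is an illustrative sketch under assumed details (identity Q/K/V projections, a fixed exponential kernel, an additive log-kernel bias), not the paper's implementation; the function name and parameters are invented for the example.

```python
import numpy as np

def spatial_attention(X, coords, length_scale=1.0):
    """Self-attention with an additive spatial bias: each score gets the
    log of an exponential covariance kernel over sensor distances, so
    nearby sensors attend to each other more strongly (a soft prior)."""
    d = X.shape[1]
    Q, K, V = X, X, X  # identity projections keep the sketch minimal
    scores = Q @ K.T / np.sqrt(d)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-dist / length_scale)    # exponential covariance kernel
    scores = scores + np.log(cov + 1e-9)  # inject the geostatistical bias
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)     # numerically stable row softmax
    return A @ V

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))      # 5 sensors, 4 features each
coords = rng.uniform(0, 10, (5, 2))  # sensor positions in 2D space
out = spatial_attention(X, coords)
print(out.shape)  # (5, 4)
```

Adding the bias inside the pre-softmax scores, rather than masking afterwards, is what lets the data-driven term override the spatial prior when the dynamics call for it; in the paper the kernel parameters are learned end-to-end, whereas here `length_scale` is fixed.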

@PaulWermer@sfba.social
2025-11-14 17:12:20

"Despite the generally strong support for democracy, there was an equally strong desire for radical change, with more people in most countries thinking the system was rigged in favour of the rich and powerful rather than working for everyone."
What? Do ordinary mortals think the neoliberal consensus has failed us? How can this be? Look at all the wonderful things the billionaires have developed for us!

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:35

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
arxiv.org/abs/2506.10892 mastoxiv.page/@arXiv_csLG_bot/
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
arxiv.org/abs/2506.20494 mastoxiv.page/@arXiv_csLG_bot/
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
arxiv.org/abs/2506.20584 mastoxiv.page/@arXiv_csLG_bot/
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
arxiv.org/abs/2506.22376 mastoxiv.page/@arXiv_csLG_bot/
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
arxiv.org/abs/2507.18242 mastoxiv.page/@arXiv_csLG_bot/
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
arxiv.org/abs/2508.17702 mastoxiv.page/@arXiv_csLG_bot/
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
arxiv.org/abs/2508.19009 mastoxiv.page/@arXiv_csLG_bot/
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
arxiv.org/abs/2508.19011 mastoxiv.page/@arXiv_csLG_bot/
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
arxiv.org/abs/2508.20705 mastoxiv.page/@arXiv_csLG_bot/
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
arxiv.org/abs/2509.25977 mastoxiv.page/@arXiv_csLG_bot/
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
arxiv.org/abs/2510.01384 mastoxiv.page/@arXiv_csLG_bot/
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
arxiv.org/abs/2510.09775 mastoxiv.page/@arXiv_csLG_bot/
- A Second-Order SpikingSSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
arxiv.org/abs/2510.14386 mastoxiv.page/@arXiv_csLG_bot/
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
arxiv.org/abs/2510.16882 mastoxiv.page/@arXiv_csLG_bot/
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
arxiv.org/abs/2510.23117 mastoxiv.page/@arXiv_csLG_bot/
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
arxiv.org/abs/2510.23501 mastoxiv.page/@arXiv_csLG_bot/
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
arxiv.org/abs/2511.00040 mastoxiv.page/@arXiv_csLG_bot/
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
arxiv.org/abs/2511.04469 mastoxiv.page/@arXiv_csLG_bot/
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
arxiv.org/abs/2511.09902 mastoxiv.page/@arXiv_csLG_bot/
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
arxiv.org/abs/2511.11571 mastoxiv.page/@arXiv_csLG_bot/
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
arxiv.org/abs/2511.12817 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot