Maybe in 2026 we can start to agree that "a complex algorithm selects content" is an editorial stance, and that platforms should bear some responsibility for it.
I love taking the train, but the weaponized incompetence of VIA Rail makes me wish I'd taken another mode of transportation, nearly every time.
Current status: “I, like the other two agents you spoke to, can't actually help you in any way, but I assure you that once you're on the train, the staff will be able to help. It's *the algorithm* not us!”
What a funny coincidence. I'm listening to the #RPGuaxa episodes in order and, since I don't have much time for this sort of thing, I'm very behind. And the one I picked up to listen to now is, of all things, a Christmas special episode, only an old one (I think it's from 2022), heh.
By the way, for anyone who's into RPGs, I highly recommend following the podcast "Realidades Paralelas do Guaxinim" by
Simulation and optimization of the Active Magnetic Shield of the n2EDM experiment
N. J. Ayres, G. Ban, G. Bison, K. Bodek, V. Bondar, T. Bouillaud, G. L. Caratsch, E. Chanel, W. Chen, C. Crawford, V. Czamler, C. B. Doorenbos, S. Emmeneger, S. K. Ermakov, M. Ferry, M. Fertl, A. Fratangelo, D. Galbinski, W. C. Griffith, Z. D. Grujic, K. Kirch, V. Kletzl, J. Krempel, B. Lauss, T. Lefort, A. Lejuez, K. Michielsen, J. Micko, P. Mullan, O. Naviliat-Cuncic, F. M. Piegsa, G. Pignol, C. Pistillo, I. Rienäcker, D. Ries, S. Roccia, D. Rozpędzik, L. Sanchez-Real Zielniewicz, N. von Schickh, P. Schmidt-Wellenburg, E. P. Segarra, L. Segner, N. Severijns, K. Svirina, J. Thorne, J. Vankeirsbilck, N. Yazdandoost, J. Zejma, N. Ziehl, G. Zsigmond
https://arxiv.org/abs/2601.22960 https://arxiv.org/pdf/2601.22960 https://arxiv.org/html/2601.22960
arXiv:2601.22960v1 Announce Type: new
Abstract: The n2EDM experiment at the Paul Scherrer Institute aims to conduct a high-sensitivity search for the electric dipole moment of the neutron. Magnetic stability and control are achieved through a combination of passive shielding, provided by a magnetically shielded room (MSR), and a surrounding active field compensation system, the Active Magnetic Shield (AMS). The AMS is a feedback-controlled system of eight coils spanning an irregular grid, designed to provide magnetic stability within the enclosed volume by actively suppressing external magnetic disturbances. It can compensate static and variable magnetic fields up to $\pm 50$ $\mu$T (homogeneous components) and $\pm 5$ $\mu$T/m (first-order gradients), suppressing them to a few $\mu$T in the sub-Hertz frequency range. We present a full finite element simulation of the magnetic fields generated by the AMS in the presence of the MSR, accurate enough for its results to approach our measurements. We demonstrate how the simulation can be used with an example: determining an optimal number and placement of feedback sensors using genetic algorithms.
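The abstract mentions using genetic algorithms to choose the number and placement of feedback sensors. As a rough, hedged sketch of that kind of search (not the authors' code: the candidate grid, the toy field model, and the condition-number fitness below are all invented assumptions), a genetic algorithm over fixed-size subsets of candidate positions could look like this in Python:

import numpy as np

rng = np.random.default_rng(0)

n_candidates = 60   # candidate sensor positions around the shield (assumed)
n_sensors = 8       # number of feedback sensors to place
n_modes = 4         # toy set of field modes (homogeneous + gradient terms)

# Toy "response matrix": how each field mode reads out at each candidate position.
A = rng.normal(size=(n_candidates, n_modes))

def fitness(idx):
    # A better-conditioned sensor sub-matrix means a better-posed field reconstruction.
    return -np.linalg.cond(A[list(idx)])

def random_individual():
    return tuple(sorted(rng.choice(n_candidates, size=n_sensors, replace=False)))

def crossover(a, b):
    # Child draws its positions from the union of both parents' positions.
    pool = list(set(a) | set(b))
    return tuple(sorted(rng.choice(pool, size=n_sensors, replace=False)))

def mutate(ind, p=0.2):
    # Occasionally swap one chosen position for an unused candidate.
    ind = list(ind)
    if rng.random() < p:
        free = list(set(range(n_candidates)) - set(ind))
        ind[rng.integers(n_sensors)] = int(rng.choice(free))
    return tuple(sorted(ind))

pop = [random_individual() for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # keep the best placements
    children = []
    while len(children) < 30:
        i, j = rng.choice(len(elite), size=2, replace=False)
        children.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + children

best = max(pop, key=fitness)
print("chosen candidate indices:", best, "condition number:", -fitness(best))

In the real experiment the fitness would presumably come from the finite element field simulation described in the paper rather than a random matrix; the loop structure is the part this sketch is meant to illustrate.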
Benders Decomposition for Passenger-Oriented Train Timetabling with Hybrid Periodicity
Zhiyuan Yao, Anita Schöbel, Lei Nie, Sven Jäger
https://arxiv.org/abs/2511.09892 https://arxiv.org/pdf/2511.09892 https://arxiv.org/html/2511.09892
arXiv:2511.09892v1 Announce Type: new
Abstract: Periodic timetables are widely adopted in passenger railway operations due to their regular service patterns and well-coordinated train connections. However, fluctuations in passenger demand require varying train services across different periods, necessitating adjustments to the periodic timetable. This study addresses a hybrid periodic train timetabling problem, which enhances the flexibility and demand responsiveness of a given periodic timetable through schedule adjustments and aperiodic train insertions, taking into account the rolling stock circulation. Since timetable modifications may affect initial passenger routes, passenger routing is incorporated into the problem to guide planning decisions towards a passenger-oriented objective. Using a time-space network representation, the problem is formulated as a dynamic railway service network design model with resource constraints. To handle the complexity of real-world instances, we propose a decomposition-based algorithm integrating Benders decomposition and column generation, enhanced with multiple preprocessing and accelerating techniques. Numerical experiments demonstrate the effectiveness of the algorithm and highlight the advantage of hybrid periodic timetables in reducing passenger travel costs.
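The model in this abstract is built on a time-space network. As a minimal sketch of that representation only (not the paper's formulation, and not its Benders/column-generation algorithm; the stations, train runs, and waiting-cost weight are invented for illustration), one can build the graph and route a passenger over it like this:

import heapq
from collections import defaultdict

# (from_station, dep_time, to_station, arr_time) for a few toy train runs,
# including extra "aperiodic" services inserted between the periodic ones.
train_runs = [
    ("A", 0, "B", 20), ("A", 60, "B", 80),     # periodic A->B services
    ("B", 25, "C", 50), ("B", 85, "C", 110),   # periodic B->C connections
    ("A", 30, "B", 50), ("B", 55, "C", 80),    # inserted aperiodic services
]

WAIT_COST = 0.5   # waiting weighted less than in-vehicle time (assumption)

graph = defaultdict(list)   # (station, time) -> [((station, time), cost), ...]
events = defaultdict(set)   # station -> time points at that station

for s, t_dep, s2, t_arr in train_runs:
    graph[(s, t_dep)].append(((s2, t_arr), t_arr - t_dep))
    events[s].add(t_dep)
    events[s2].add(t_arr)

# Waiting arcs connect consecutive time points at the same station.
for s, times in events.items():
    ts = sorted(times)
    for t1, t2 in zip(ts, ts[1:]):
        graph[(s, t1)].append(((s, t2), WAIT_COST * (t2 - t1)))

def cheapest_route(origin, t0, destination):
    """Dijkstra over the time-space graph from (origin, t0) to the destination station."""
    start = (origin, min(t for t in events[origin] if t >= t0))
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node[0] == destination:
            return d, node
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph[node]:
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return None

print(cheapest_route("A", 0, "C"))   # -> (47.5, ('C', 50)) with the toy data above

The inserted 30:00 and 55:00 runs play the role of aperiodic trains: with them in the network, passengers departing later than 0:00 still find a reasonable route, which is the kind of trade-off the paper's optimization evaluates at much larger scale.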
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
https://arxiv.org/abs/2512.16927 https://mastoxiv.page/@arXiv_csDS_bot/115762062326187898
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
https://arxiv.org/abs/2512.16956 https://mastoxiv.page/@arXiv_csSE_bot/115762248476963893
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
https://arxiv.org/abs/2512.16962 https://mastoxiv.page/@arXiv_csCR_bot/115762140339109012
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
https://arxiv.org/abs/2512.16964 https://mastoxiv.page/@arXiv_eessIV_bot/115762196702065869
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
https://arxiv.org/abs/2512.16969 https://mastoxiv.page/@arXiv_csAI_bot/115762050529328276
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
https://arxiv.org/abs/2512.16970 https://mastoxiv.page/@arXiv_csAI_bot/115762054461584205
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
https://arxiv.org/abs/2512.17028 https://mastoxiv.page/@arXiv_csCL_bot/115762049873946945
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
https://arxiv.org/abs/2512.17075 https://mastoxiv.page/@arXiv_csCL_bot/115762077400293945
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
https://arxiv.org/abs/2512.17127 https://mastoxiv.page/@arXiv_statML_bot/115762251753966702
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
https://arxiv.org/abs/2512.17146 https://mastoxiv.page/@arXiv_csCR_bot/115762318582013305
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
https://arxiv.org/abs/2512.17169 https://mastoxiv.page/@arXiv_qbioBM_bot/115762302873829397
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
https://arxiv.org/abs/2512.17185 https://mastoxiv.page/@arXiv_qfinRM_bot/115762275982224870
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
https://arxiv.org/abs/2512.17209 https://mastoxiv.page/@arXiv_csSD_bot/115762341541572505
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
https://arxiv.org/abs/2512.17213 https://mastoxiv.page/@arXiv_csCV_bot/115762574180736975
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
https://arxiv.org/abs/2512.17245 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/115762447037143855
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
https://arxiv.org/abs/2512.17251 https://mastoxiv.page/@arXiv_csCR_bot/115762396593872943
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
https://arxiv.org/abs/2512.17254 https://mastoxiv.page/@arXiv_csCR_bot/115762402470985707
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
https://arxiv.org/abs/2512.17259 https://mastoxiv.page/@arXiv_csMA_bot/115762225538364939
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
https://arxiv.org/abs/2512.17277 https://mastoxiv.page/@arXiv_csIR_bot/115762214396869930
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
https://arxiv.org/abs/2512.17281 https://mastoxiv.page/@arXiv_csSD_bot/115762361858560703
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
https://arxiv.org/abs/2512.17340 https://mastoxiv.page/@arXiv_statME_bot/115762446402738033
Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short: poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals and agree that their research will be used to murder millions of people, with a few million more dying as a side effect of the climate crisis. But I'm digressing.
The author refers to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist; rather, just a random guy who has read a fair number of pieces on evolution. And I feel the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. On that view, any animal that gets "brainier" will eventually become intelligent. However, this misses the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it over a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think you can just stuff more brain into a random animal and expect it to attain human intelligence; and the same goes for a computer: you can't expect that, given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy efficient than whatever computers are doing today. Even if "computing power" did indeed pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM