Tootfinder

Opt-in global Mastodon full text search. Join the index!

@ocrampal@mastodon.social
2026-02-15 16:38:23

In the rush to scale neural networks, we have fallen into a category error: believing that a perfect simulation of an intelligent behavior is the same thing as the existence of intelligence itself.
ocrampal.com/chasing-our-own-t

@metacurity@infosec.exchange
2026-02-04 11:46:06

Yoshua Bengio, Turing Award winner: ‘There is empirical evidence of AI acting against our instructions’
english.elpais.com/technology/

@relcfp@mastodon.social
2026-02-13 07:26:00

Ecclesia laborans: Reproductive Labor and the Hidden Work of Liturgical Performance, 1350–1600 (Berlin, 6-7 March 2026) networks.h-net.org/group/annou

@mszll@datasci.social
2026-04-02 09:51:12

Women's mobility networks enable more efficient travel
arxiv.org/abs/2604.00943

@newsie@darktundra.xyz
2026-02-11 20:58:15

Interim CISA chief: ‘When the government shuts down, cyber threats do not’ therecord.media/interim-cisa-c

@daniel@social.telemetrydeck.com
2026-03-23 19:37:08

People who abuse their partners or others emotionally, sexually or physically should be brought to justice. Especially if they’re using their privilege to do so. Especially if they are in positions of power.
We need to shame these abuses of power and build and strengthen networks of support and trust, especially in our privileged circles as is our responsibility. We need to believe and support victims and people at risk. And we need to ensure that offenders face meaningful consequences…

@tiago@social.skewed.de
2026-02-23 10:36:21

New blog post:
"Higher orders need higher standards"
skewed.de/lab/posts/higher-sta
I discuss our current work disentangling misconceptions around "higher-order" networks.

We might think of the 1963 March on Washington when we talk about organizing and civic change -- but smaller networks are equally important. Find many small groups of people you trust, as small as 2-5 other people. Use encrypted communications, like Signal. Meet in person and break bread, if you can, building trust slowly. Be a bridge-builder between trusted colleagues, where appropriate. Appreciate the beauty of all that can be achie…

@seeingwithsound@mas.to
2026-01-27 08:52:47

#LaVCa: LLM-assisted visual cortex captioning arxiv.org/abs/2502.13606 using "large language models (LLMs) to generate natural-language captions for images to which voxels are selective"; to be presented a…

@metacurity@infosec.exchange
2026-03-05 00:59:33

Trump’s CISA nominee said he left Coast Guard to address GOP hold
nextgov.com/people/2026/03/tru

@arXiv_csDS_bot@mastoxiv.page
2026-02-04 07:41:25

Perfect Network Resilience in Polynomial Time
Matthias Bentert, Stefan Schmid
arxiv.org/abs/2602.03827 arxiv.org/pdf/2602.03827 arxiv.org/html/2602.03827
arXiv:2602.03827v1 Announce Type: new
Abstract: Modern communication networks support local fast rerouting mechanisms to quickly react to link failures: nodes store a set of conditional rerouting rules which define how to forward an incoming packet in case of incident link failures. The rerouting decisions at any node $v$ must rely solely on local information available at $v$: the link from which a packet arrived at $v$, the target of the packet, and the incident link failures at $v$. Ideally, such rerouting mechanisms provide perfect resilience: any packet is routed from its source to its target as long as the two are connected in the underlying graph after the link failures. Already in their seminal paper at ACM PODC '12, Feigenbaum, Godfrey, Panda, Schapira, Shenker, and Singla showed that perfect resilience cannot always be achieved. While the design of local rerouting algorithms has received much attention since then, we still lack a detailed understanding of when perfect resilience is achievable.
This paper closes this gap and presents a complete characterization of when perfect resilience can be achieved. This characterization also allows us to design an $O(n)$-time algorithm to decide whether a given instance is perfectly resilient and an $O(nm)$-time algorithm to compute perfectly resilient rerouting rules whenever it is. Our algorithm is also attractive for the simple structure of the rerouting rules it uses, known as skipping in the literature: alternative links are chosen according to an ordered priority list (per in-port), where failed links are simply skipped. Intriguingly, our result also implies that in the context of perfect resilience, skipping rerouting rules are as powerful as more general rerouting rules. This partially answers a long-standing open question by Chiesa, Nikolaevskiy, Mitrovic, Gurtov, Madry, Schapira, and Shenker [IEEE/ACM Transactions on Networking, 2017] in the affirmative.
toXiv_bot_toot
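The "skipping" rule structure described in the abstract is simple enough to sketch in a few lines. The node, port, and link names below are hypothetical illustrations, not taken from the paper:

```python
# Sketch of "skipping" rerouting: each node keeps an ordered priority
# list of outgoing links per in-port, and on failure simply skips
# failed links in that order. All names here are illustrative.

def forward(priority_list, in_port, failed_links):
    """Pick the first non-failed link from the per-in-port priority list."""
    for link in priority_list[in_port]:
        if link not in failed_links:
            return link
    return None  # no usable link: the packet is dropped

# A node with two in-ports and three outgoing links a, b, c.
rules = {
    "port0": ["a", "b", "c"],
    "port1": ["c", "a", "b"],
}

print(forward(rules, "port0", failed_links={"a"}))       # skips a, picks "b"
print(forward(rules, "port1", failed_links={"c", "a"}))  # skips c and a, picks "b"
```

Part of the appeal, per the abstract, is exactly this simplicity: the only state a node needs is one static priority list per in-port.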

@relcfp@mastodon.social
2026-02-08 21:42:12

University of Michigan Center for Southeast Asian Studies Lecture - Pain and Buddhism in Thailand: How does Bodily Experience affect Religious Worlds? (HYBRID, Feb. 20) networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:11

Deep unfolding of MCMC kernels: scalable, modular & explainable GANs for high-dimensional posterior sampling
Jonathan Spence, Tobías I. Liaudat, Konstantinos Zygalakis, Marcelo Pereyra
arxiv.org/abs/2602.20758 arxiv.org/pdf/2602.20758 arxiv.org/html/2602.20758
arXiv:2602.20758v1 Announce Type: new
Abstract: Markov chain Monte Carlo (MCMC) methods are fundamental to Bayesian computation, but can be computationally intensive, especially in high-dimensional settings. Push-forward generative models, such as generative adversarial networks (GANs), variational auto-encoders and normalising flows offer a computationally efficient alternative for posterior sampling. However, push-forward models are opaque as they lack the modularity of Bayes' theorem, leading to poor generalisation with respect to changes in the likelihood function. In this work, we introduce a novel approach to GAN architecture design by applying deep unfolding to Langevin MCMC algorithms. This paradigm maps fixed-step iterative algorithms onto modular neural networks, yielding architectures that are both flexible and amenable to interpretation. Crucially, our design allows key model parameters to be specified at inference time, offering robustness to changes in the likelihood parameters. We train these unfolded samplers end-to-end using a supervised regularized Wasserstein GAN framework for posterior sampling. Through extensive Bayesian imaging experiments, we demonstrate that our proposed approach achieves high sampling accuracy and excellent computational efficiency, while retaining the physics consistency, adaptability and interpretability of classical MCMC strategies.
toXiv_bot_toot
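As a rough illustration of the kind of fixed-step iterative algorithm that deep unfolding maps onto network layers, here is a plain unadjusted Langevin sampler for a standard normal target. This is classical ULA, not the paper's learned GAN sampler; the step size and step counts are illustrative:

```python
# Unadjusted Langevin algorithm (ULA): the fixed-step iteration
#   x <- x + step * grad_log_p(x) + sqrt(2 * step) * noise
# is the sort of scheme deep unfolding turns into network layers.
import math
import random

def ula_sample(grad_log_p, x0, step=0.05, n_steps=500, rng=None):
    """Run one ULA chain and return its final state."""
    rng = rng or random.Random(0)
    x = x0
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
    return x

# Target: standard normal, so grad log p(x) = -x.
rng = random.Random(42)
samples = [ula_sample(lambda x: -x, 0.0, rng=rng) for _ in range(400)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # close to the target's mean 0 and variance 1
```

Note that ULA has a small step-size-dependent bias (here the stationary variance is slightly above 1), which is one reason learned, end-to-end-trained samplers are appealing.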

@draxil@social.linux.pizza
2026-03-26 11:47:24

Concerned at the rabble rousing talk in the #UK around a social media ban for young people. Fine for corporate networks, but I fear all the blow back and unintended consequence of the legislators boots on our beloved #fediverse

@relcfp@mastodon.social
2026-02-06 16:56:13

PROGRAM> Woodenfish Buddhist Monastic Life Program 2026 networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:43:11

Probing Graph Neural Network Activation Patterns Through Graph Topology
Floriano Tori, Lorenzo Bini, Marco Sorbi, Stéphane Marchand-Maillet, Vincent Ginis
arxiv.org/abs/2602.21092 arxiv.org/pdf/2602.21092 arxiv.org/html/2602.21092
arXiv:2602.21092v1 Announce Type: new
Abstract: Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and denser connected regions. Artifacts of the message passing paradigm in Graph Neural Networks, such as oversmoothing and oversquashing, have been attributed to these regions. However, it remains unclear how the topology of a graph interacts with the learned preferences of GNNs. Through Massive Activations, which correspond to extreme edge activation values in Graph Transformers, we probe this correspondence. Our findings on synthetic graphs and molecular benchmarks reveal that MAs do not preferentially concentrate on curvature extremes, despite their theoretical link to information flow. On the Long Range Graph Benchmark, we identify a systemic curvature shift: global attention mechanisms exacerbate topological bottlenecks, drastically increasing the prevalence of negative curvature. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
toXiv_bot_toot
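To make "curvature as a topology probe" concrete, here is the simplest combinatorial variant of Forman curvature on an unweighted graph, F(u, v) = 4 − deg(u) − deg(v), with triangle terms omitted; the example graph is illustrative, not from the paper. Bridge edges between dense regions come out most negative:

```python
# Simplified Forman curvature for unweighted graph edges:
# F(u, v) = 4 - deg(u) - deg(v). Bottleneck edges between well-connected
# regions get strongly negative values.
from collections import defaultdict

def forman_curvature(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# Two triangles joined by a single bridge edge (a bottleneck).
edges = [(0, 1), (1, 2), (0, 2),   # dense cluster A
         (3, 4), (4, 5), (3, 5),   # dense cluster B
         (2, 3)]                   # bridge
curv = forman_curvature(edges)
print(curv[(2, 3)])  # -2: the bridge is the most negatively curved edge
```

Richer notions (Ollivier-Ricci, augmented Forman with triangle counts) refine this picture, but the qualitative reading is the same: negative curvature flags bottlenecks.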

@arXiv_qbioPE_bot@mastoxiv.page
2026-03-24 09:01:53

Epidemic reproduction numbers in spatial networks
Zahra Ghadiri, Jari Saramäki, Takayuki Hiraoka
arxiv.org/abs/2603.22150 arxiv.org/pdf/2603.22150 arxiv.org/html/2603.22150
arXiv:2603.22150v1 Announce Type: new
Abstract: The basic and effective reproduction numbers are widely used metrics for characterizing the dynamics of infectious disease epidemics. However, the interpretation of these numbers is based on the assumption of homogeneous mixing and may not hold in real-world populations where the contact patterns deviate from that assumption. In this paper, we present a network-based framework to compare reproduction numbers in populations with and without spatial structure, while other parameters of the disease remain fixed. Using this framework, we show that in homogeneously mixed populations, in the absence of external interventions, the effective reproduction number decreases exponentially as the susceptible population declines. In contrast, in spatially structured populations, the basic reproduction number is smaller, and the effective reproduction number initially decreases faster but eventually converges to unity. We show that the reproduction number is determined by the level of competition between infectious nodes, which is governed by the network structure. Our results suggest that without knowledge of the network structure, reproduction numbers may not be informative for parameterizing the contagiousness of the disease or predicting the behavior of epidemic spreading.
toXiv_bot_toot
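The homogeneous-mixing baseline the abstract contrasts with can be sketched with a discrete well-mixed SIR model, where the effective reproduction number is R_eff(t) = R0 · S(t)/N and falls as susceptibles are depleted. Parameters below are illustrative, not from the paper:

```python
# Well-mixed discrete-time SIR: under homogeneous mixing the effective
# reproduction number is simply R0 scaled by the susceptible fraction.
def sir_reff(beta=0.3, gamma=0.1, n=10_000, i0=10, days=100):
    r0 = beta / gamma                   # basic reproduction number
    s, i, r = n - i0, i0, 0
    reff = []
    for _ in range(days):
        new_inf = beta * s * i / n      # homogeneous-mixing incidence
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        reff.append(r0 * s / n)         # R_eff = R0 * S / N
    return r0, reff

r0, reff = sir_reff()
print(round(r0, 1))    # 3.0
print(reff[-1] < 1.0)  # True: R_eff has dropped below one as S is depleted
```

In a spatially structured network, by contrast, the abstract reports a smaller R0 and an R_eff that converges to one, because infectious nodes locally compete for the same susceptible neighbors.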

@relcfp@mastodon.social
2026-02-23 02:21:12

CFP> Call for Papers: Pacific World Journal networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-01-29 07:22:36

Feb 5 - Kurtis Schaeffer on How to Live in Hard Times: Examples from Buddhist Lives networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:41

Understanding the Role of Rehearsal Scale in Continual Learning under Varying Model Capacities
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20791 arxiv.org/pdf/2602.20791 arxiv.org/html/2602.20791
arXiv:2602.20791v1 Announce Type: new
Abstract: Rehearsal is one of the key techniques for mitigating catastrophic forgetting and has been widely adopted in continual learning algorithms due to its simplicity and practicality. However, the theoretical understanding of how rehearsal scale influences learning dynamics remains limited. To address this gap, we formulate rehearsal-based continual learning as a multidimensional effectiveness-driven iterative optimization problem, providing a unified characterization across diverse performance metrics. Within this framework, we derive a closed-form analysis of adaptability, memorability, and generalization from the perspective of rehearsal scale. Our results uncover several intriguing and counterintuitive findings. First, rehearsal can impair a model's adaptability, in sharp contrast to its traditionally recognized benefits. Second, increasing the rehearsal scale does not necessarily improve memory retention. When tasks are similar and noise levels are low, the memory error exhibits a diminishing lower bound. Finally, we validate these insights through numerical simulations and extended analyses on deep neural networks across multiple real-world datasets, revealing statistical patterns of rehearsal mechanisms in continual learning.
toXiv_bot_toot
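For readers unfamiliar with the mechanism under analysis, a minimal rehearsal setup is a fixed-capacity replay buffer. The sketch below uses reservoir sampling so that every example seen so far is retained with equal probability; the class and parameters are illustrative, not the paper's formulation, and "rehearsal scale" corresponds here to the buffer capacity:

```python
# A fixed-capacity rehearsal (replay) buffer filled by reservoir
# sampling: after seeing `seen` examples, each one is in the buffer
# with probability capacity / seen.
import random

class RehearsalBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Overwrite a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample_batch(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = RehearsalBuffer(capacity=50)
for task in range(5):          # five sequential "tasks"
    for x in range(1000):
        buf.add((task, x))
print(len(buf.buffer))                     # 50: the capacity bound holds
print(sorted({t for t, _ in buf.buffer}))  # examples from several tasks survive
```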

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:37:11

Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20796 arxiv.org/pdf/2602.20796 arxiv.org/html/2602.20796
arXiv:2602.20796v1 Announce Type: new
Abstract: The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. Therefore, we characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, which has not been fully captured in previous studies due to their assumption of a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that sequence tasks with small parameter distances exhibit better generalization and less forgetting under frozen training rather than initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
toXiv_bot_toot

@relcfp@mastodon.social
2026-01-28 07:10:57

Japanese Religions 46/2 networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-02-25 07:28:40

LECTURE> Janet Gyatso on “Being With Animal Kin: Buddhist Resources for a Posthuman Ethics” - Tue Mar 3, 4-5:30 PT networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-02-25 06:26:04

LECTURE> Janet Gyatso on “Being With Animal Kin: Buddhist Resources for a Posthuman Ethics” - Tue Mar 3, 4-5:30 PT networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-02-25 11:19:16

LECTURE> Janet Gyatso on “Being With Animal Kin: Buddhist Resources for a Posthuman Ethics” - Tue Mar 3, 4-5:30 PT networks.h-net.org/group/annou

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:21

On Electric Vehicle Energy Demand Forecasting and the Effect of Federated Learning
Andreas Tritsarolis, Gil Sampaio, Nikos Pelekis, Yannis Theodoridis
arxiv.org/abs/2602.20782 arxiv.org/pdf/2602.20782 arxiv.org/html/2602.20782
arXiv:2602.20782v1 Announce Type: new
Abstract: The widespread adoption of new energy resources, smart devices, and demand side management strategies has motivated several analytics operations, from infrastructure load modeling to user behavior profiling. Energy Demand Forecasting (EDF) of Electric Vehicle Supply Equipments (EVSEs) is one of the most critical operations for ensuring efficient energy management and sustainability, since it enables utility providers to anticipate energy/power demand, optimize resource allocation, and implement proactive measures to improve grid reliability. However, accurate EDF is a challenging problem due to external factors, such as the varying user routines, weather conditions, driving behaviors, unknown state of charge, etc. Furthermore, as concerns and restrictions about privacy and sustainability have grown, training data has become increasingly fragmented, resulting in distributed datasets scattered across different data silos and/or edge devices, calling for federated learning solutions. In this paper, we investigate different well-established time series forecasting methodologies to address the EDF problem, from statistical methods (the ARIMA family) to traditional machine learning models (such as XGBoost) and deep neural networks (GRU and LSTM). We provide an overview of these methods through a performance comparison over four real-world EVSE datasets, evaluated under both centralized and federated learning paradigms, focusing on the trade-offs between forecasting fidelity, privacy preservation, and energy overheads. Our experimental results demonstrate, on the one hand, the superiority of gradient boosted trees (XGBoost) over statistical and NN-based models in both prediction accuracy and energy efficiency and, on the other hand, an insight that Federated Learning-enabled models balance these factors, offering a promising direction for decentralized energy demand forecasting.
toXiv_bot_toot
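As a toy version of the evaluation setup (not the paper's models), the sketch below compares two trivial forecasters by mean absolute error on a synthetic hourly load series with a daily cycle. On such a smooth periodic signal the naive last-value forecaster beats a 24-hour moving average, which is exactly why baseline comparisons like the paper's matter:

```python
# Minimal demand-forecasting evaluation: two trivial baselines scored
# by mean absolute error (MAE) on a synthetic hourly load series.
import math

def naive_forecast(history):
    """Predict the next value as the last observed value."""
    return history[-1]

def moving_average_forecast(history, window=24):
    """Predict the next value as the mean of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Synthetic hourly demand with a daily (24-hour) cycle, two weeks long.
series = [10 + 5 * math.sin(2 * math.pi * t / 24) for t in range(24 * 14)]

def mae(forecaster, series, start=48):
    errors = [abs(forecaster(series[:t]) - series[t])
              for t in range(start, len(series))]
    return sum(errors) / len(errors)

m_naive = mae(naive_forecast, series)
m_ma = mae(moving_average_forecast, series)
print(m_naive < m_ma)  # True: the naive baseline wins on this smooth series
```

Real EVSE load is far noisier than a sinusoid, which is where the gradient-boosted and recurrent models the abstract compares earn their keep, but any such comparison should still report these cheap baselines.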

@relcfp@mastodon.social
2026-01-27 07:13:55

Applications Now Open! Rare Book School Summer 2026 networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-01-27 13:49:15

Japanese Religions 46/2 #acrel networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-01-27 20:18:13

Applications Now Open! Rare Book School Summer 2026 #acrel networks.h-net.org/group/annou

@relcfp@mastodon.social
2026-01-23 07:12:17

POSTDOC> 2026-2028 Ho Center for Buddhist Studies at Stanford Postdoctoral Fellowship networks.h-net.org/group/annou
