2026-01-29 22:38:19
Feb 5 - Kurtis Schaeffer on How to Live in Hard Times: Examples from Buddhist Lives https://networks.h-net.org/group/announcements/20139568/feb-5-kurtis-schaeffer-how-live-hard-times-examples-buddhist-lives
New blog post:
"Higher orders need higher standards"
https://skewed.de/lab/posts/higher-standards
I discuss our current work disentangling misconceptions around "higher-order" networks.
#LaVCa: LLM-assisted visual cortex captioning https://arxiv.org/abs/2502.13606 using "large language models (LLMs) to generate natural-language captions for images to which voxels are selective"; to be presented a…
In the rush to scale neural networks, we have fallen into a category error: believing that a perfect simulation of an intelligent behavior is the same thing as the existence of intelligence itself.
https://www.ocrampal.com/chasing-our-own-t
Deep unfolding of MCMC kernels: scalable, modular & explainable GANs for high-dimensional posterior sampling
Jonathan Spence, Tobías I. Liaudat, Konstantinos Zygalakis, Marcelo Pereyra
https://arxiv.org/abs/2602.20758 https://arxiv.org/pdf/2602.20758 https://arxiv.org/html/2602.20758
arXiv:2602.20758v1 Announce Type: new
Abstract: Markov chain Monte Carlo (MCMC) methods are fundamental to Bayesian computation, but can be computationally intensive, especially in high-dimensional settings. Push-forward generative models, such as generative adversarial networks (GANs), variational auto-encoders and normalising flows offer a computationally efficient alternative for posterior sampling. However, push-forward models are opaque as they lack the modularity of Bayes' theorem, leading to poor generalisation with respect to changes in the likelihood function. In this work, we introduce a novel approach to GAN architecture design by applying deep unfolding to Langevin MCMC algorithms. This paradigm maps fixed-step iterative algorithms onto modular neural networks, yielding architectures that are both flexible and amenable to interpretation. Crucially, our design allows key model parameters to be specified at inference time, offering robustness to changes in the likelihood parameters. We train these unfolded samplers end-to-end using a supervised regularized Wasserstein GAN framework for posterior sampling. Through extensive Bayesian imaging experiments, we demonstrate that our proposed approach achieves high sampling accuracy and excellent computational efficiency, while retaining the physics consistency, adaptability and interpretability of classical MCMC strategies.
toXiv_bot_toot
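The Langevin MCMC iteration that this unfolding builds on is a simple fixed-step update. Below is a minimal sketch of an unadjusted Langevin (ULA) step on a toy Gaussian target; the step size, target, and function names are illustrative, not the paper's:

```python
import numpy as np

def ula_step(x, grad_log_post, step):
    """One unadjusted Langevin (ULA) update: a gradient step on the
    log-posterior plus Gaussian noise. Deep unfolding maps a fixed
    number of such iterations onto the layers of a neural network."""
    noise = np.sqrt(2.0 * step) * np.random.randn(*x.shape)
    return x + step * grad_log_post(x) + noise

# Toy target: standard 2-D Gaussian posterior, so grad log p(x) = -x.
np.random.seed(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = ula_step(x, lambda z: -z, step=0.1)
    samples.append(x.copy())
samples = np.array(samples)
```

Note that ULA with a finite step carries a small discretization bias (here the stationary variance is 1/(1 - step/2), about 1.05 rather than 1), which is one reason learned, unfolded samplers can pay off.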
👁️ Very glad to present our Art/Perception collaboration with Etienne Rey today!
🔗 https://laurentperrinet.github.io/talk/2026-01-19-art-and-science
What happens today:
We will discuss with Etienne about the emergence of our collaboration, discover th…
Yoshua Bengio, Turing Award winner: ‘There is empirical evidence of AI acting against our instructions’
https://english.elpais.com/technology/2026-02-03/yoshua-bengio-turing-award-winner-…
We might think of the 1963 March on Washington when we talk about organizing and civic change
-- but smaller networks are equally important.
Find many small groups of people you trust,
as small as 2-5 other people.
Use encrypted communications,
like Signal.
Meet in person and break bread,
if you can, building trust slowly.
Be a bridge-builder between trusted colleagues, where appropriate.
Appreciate the beauty of all that can be achie…
The working class is the fundamental force that produces the material conditions necessary for the reproduction of society, and through our collective labor we not only create food, shelter, healthcare, education, transportation, and communication networks, but we also sustain and reproduce the conditions that allow every aspect of social life to continue functioning, so that even those who do not work, whether they are managers, bureaucrats, or capitalists, depend entirely on our labor for …
What if our networks could do more than just carry data?
In December, the Fibre Sensing Task of the GÉANT (GN5-2) Project, together with SURF @… turned 57 km of live optical fibre into a sensor.
The result? Detecting everything from trams to a plane landing at Amsterdam Airport Schiphol.
🎥 Watch Chris Atherton walk us through the experiment.
Our propaganda networks are out in force recycling the “fucking for virginity” line from the Iraq war era to whitewash their new war.
Japanese Religions 46/2 https://networks.h-net.org/group/announcements/20139446/japanese-religions-462
Describing modem dial-up noises to the 13yo as "two computers screaming at each other"...
"See, nowadays everything is over the internet. You make a phone call, it's routed over the internet. Back then [we wore an onion in our belt as was the style at the time] everything was over voice networks, so when you were on the internet it was translating bits and data into sound.."
I'm not sure at which point he tuned out.
Probing Graph Neural Network Activation Patterns Through Graph Topology
Floriano Tori, Lorenzo Bini, Marco Sorbi, Stéphane Marchand-Maillet, Vincent Ginis
https://arxiv.org/abs/2602.21092 https://arxiv.org/pdf/2602.21092 https://arxiv.org/html/2602.21092
arXiv:2602.21092v1 Announce Type: new
Abstract: Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and denser connected regions. Artifacts of the message passing paradigm in Graph Neural Networks, such as oversmoothing and oversquashing, have been attributed to these regions. However, it remains unclear how the topology of a graph interacts with the learned preferences of GNNs. Through Massive Activations, which correspond to extreme edge activation values in Graph Transformers, we probe this correspondence. Our findings on synthetic graphs and molecular benchmarks reveal that MAs do not preferentially concentrate on curvature extremes, despite their theoretical link to information flow. On the Long Range Graph Benchmark, we identify a systemic "curvature shift": global attention mechanisms exacerbate topological bottlenecks, drastically increasing the prevalence of negative curvature. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
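The curvature notions in this line of work are cheap to compute. As a hedged illustration (the paper may use a different variant), the augmented Forman-Ricci curvature of an edge in an unweighted graph is 4 - deg(u) - deg(v) + 3 · (#triangles through the edge): negative on bridges and bottlenecks, positive inside dense cliques:

```python
def augmented_forman(adj, u, v):
    """Augmented Forman-Ricci curvature of edge (u, v) in an unweighted
    graph, given an adjacency dict of neighbor sets:
    4 - deg(u) - deg(v) + 3 * (#triangles containing the edge)."""
    deg = {n: len(nbrs) for n, nbrs in adj.items()}
    triangles = len(adj[u] & adj[v])  # common neighbors close a triangle
    return 4 - deg[u] - deg[v] + 3 * triangles

# A bridge between two triangles is a classic negative-curvature bottleneck.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},   # triangle 0-1-2
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},   # bridge 2-3, triangle 3-4-5
}
```

On this toy graph the bridge edge (2, 3) has curvature -2 while the triangle edge (0, 1) has curvature 3.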
Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback
Dimitra Maoutsa
https://arxiv.org/abs/2512.09366 https://arxiv.org/pdf/2512.09366 https://arxiv.org/html/2512.09366
arXiv:2512.09366v1 Announce Type: new
Abstract: Biological neural networks learn complex behaviors from sparse, delayed feedback using local synaptic plasticity, yet the mechanisms enabling structured credit assignment remain elusive. In contrast, artificial recurrent networks solving similar tasks typically rely on biologically implausible global learning rules or hand-crafted local updates. The space of local plasticity rules capable of supporting learning from delayed reinforcement remains largely unexplored. Here, we present a meta-learning framework that discovers local learning rules for structured credit assignment in recurrent networks trained with sparse feedback. Our approach interleaves local neo-Hebbian-like updates during task execution with an outer loop that optimizes plasticity parameters via tangent-propagation through learning. The resulting three-factor learning rules enable long-timescale credit assignment using only local information and delayed rewards, offering new insights into biologically grounded mechanisms for learning in recurrent circuits.
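A "three-factor" rule combines pre- and postsynaptic activity (via an eligibility trace) with a delayed scalar reward. A toy sketch of the general idea, with all names and constants mine rather than the paper's:

```python
import numpy as np

def three_factor_update(w, pre, post, reward, trace, lr=0.01, decay=0.9):
    """Toy three-factor rule: a Hebbian eligibility trace (factors 1 & 2,
    pre- and postsynaptic activity) gated by a delayed scalar reward
    (factor 3). Returns updated weights and trace."""
    trace = decay * trace + np.outer(post, pre)  # local, runs every step
    w = w + lr * reward * trace                  # effective only when rewarded
    return w, trace

rng = np.random.default_rng(0)
w = np.zeros((3, 4))
trace = np.zeros_like(w)
for t in range(10):
    pre, post = rng.standard_normal(4), rng.standard_normal(3)
    reward = 1.0 if t == 9 else 0.0              # sparse, delayed feedback
    w, trace = three_factor_update(w, pre, post, reward, trace)
```

The trace decay sets how far back in time credit can reach, which is exactly the kind of parameter the paper's outer meta-learning loop would optimize.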
Japanese Religions 46/2 #acrel https://networks.h-net.org/group/announcements/20139447/japanese-religions-462
Interim CISA chief: ‘When the government shuts down, cyber threats do not’ https://therecord.media/interim-cisa-chief-tells-congress-threats-continue-during-shutdown
LECTURE> Janet Gyatso on “Being With Animal Kin: Buddhist Resources for a Posthuman Ethics” - Tue Mar 3, 4-5:30 PT https://networks.h-net.org/group/announcements/20142905/janet-gyatso-being-animal-kin-buddhist-resourc…
Estimating Spatially Resolved Radiation Fields Using Neural Networks
Felix Lehner, Pasquale Lombardo, Susana Castillo, Oliver Hupe, Marcus Magnor
https://arxiv.org/abs/2512.17654 https://arxiv.org/pdf/2512.17654 https://arxiv.org/html/2512.17654
arXiv:2512.17654v1 Announce Type: new
Abstract: We present an in-depth analysis of how to build and train neural networks to estimate the spatial distribution of scattered radiation fields for radiation protection dosimetry in medical radiation fields, such as those found in interventional radiology and cardiology. To this end, we present three synthetically generated training datasets of increasing complexity, produced with a Monte Carlo simulation application based on Geant4. On those datasets, we evaluate convolutional and fully connected neural network architectures to demonstrate which design decisions work well for reconstructing the fluence and spectral distributions over the spatial domain of such radiation fields. All datasets used, as well as our training pipeline, are published as open source in separate repositories.
What if submarine fibre-optic cables could do more than carry data? SUBMERSE project is proving they can.
This EU-funded project brings together 25 consortium partners, from research groups to National Research and Education Networks (NRENs) and industry, including GÉANT and our member NRENs.
Together, we're exploring how existing telecom infrastructure can be transformed into scientific sensors.
🔗 Read more in the latest
Perfect Network Resilience in Polynomial Time
Matthias Bentert, Stefan Schmid
https://arxiv.org/abs/2602.03827 https://arxiv.org/pdf/2602.03827 https://arxiv.org/html/2602.03827
arXiv:2602.03827v1 Announce Type: new
Abstract: Modern communication networks support local fast rerouting mechanisms to quickly react to link failures: nodes store a set of conditional rerouting rules which define how to forward an incoming packet in case of incident link failures. The rerouting decisions at any node $v$ must rely solely on local information available at $v$: the link from which a packet arrived at $v$, the target of the packet, and the incident link failures at $v$. Ideally, such rerouting mechanisms provide perfect resilience: any packet is routed from its source to its target as long as the two are connected in the underlying graph after the link failures. Already in their seminal paper at ACM PODC '12, Feigenbaum, Godfrey, Panda, Schapira, Shenker, and Singla showed that perfect resilience cannot always be achieved. While the design of local rerouting algorithms has received much attention since then, we still lack a detailed understanding of when perfect resilience is achievable.
This paper closes this gap and presents a complete characterization of when perfect resilience can be achieved. This characterization also allows us to design an $O(n)$-time algorithm to decide whether a given instance is perfectly resilient and an $O(nm)$-time algorithm to compute perfectly resilient rerouting rules whenever it is. Our algorithm is also attractive for the simple structure of the rerouting rules it uses, known as skipping in the literature: alternative links are chosen according to an ordered priority list (per in-port), where failed links are simply skipped. Intriguingly, our result also implies that in the context of perfect resilience, skipping rerouting rules are as powerful as more general rerouting rules. This partially answers a long-standing open question by Chiesa, Nikolaevskiy, Mitrovic, Gurtov, Madry, Schapira, and Shenker [IEEE/ACM Transactions on Networking, 2017] in the affirmative.
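The "skipping" rules the result relies on are simple enough to state in a few lines: per in-port, keep an ordered priority list of outgoing links and forward on the first one that has not failed. A sketch with hypothetical node and port names:

```python
def skip_forward(priority, in_port, failed):
    """Skipping rerouting rule: for the packet's in-port, walk the ordered
    priority list of outgoing links and forward on the first link that is
    not failed. Returns None if every candidate link has failed."""
    for link in priority[in_port]:
        if link not in failed:
            return link
    return None

# Hypothetical node with per-in-port priority lists over neighbors B, C, D.
priority = {"A": ["B", "C", "D"], "C": ["D", "B"]}
```

With no failures a packet arriving from A goes to B; if link B fails it is simply skipped and the packet goes to C.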
Applications Now Open! Rare Book School Summer 2026 #acrel https://networks.h-net.org/group/announcements/20139027/applications-now-open-rare-b…
Understanding the Role of Rehearsal Scale in Continual Learning under Varying Model Capacities
JinLi He, Liang Bai, Xian Yang
https://arxiv.org/abs/2602.20791 https://arxiv.org/pdf/2602.20791 https://arxiv.org/html/2602.20791
arXiv:2602.20791v1 Announce Type: new
Abstract: Rehearsal is one of the key techniques for mitigating catastrophic forgetting and has been widely adopted in continual learning algorithms due to its simplicity and practicality. However, the theoretical understanding of how rehearsal scale influences learning dynamics remains limited. To address this gap, we formulate rehearsal-based continual learning as a multidimensional effectiveness-driven iterative optimization problem, providing a unified characterization across diverse performance metrics. Within this framework, we derive a closed-form analysis of adaptability, memorability, and generalization from the perspective of rehearsal scale. Our results uncover several intriguing and counterintuitive findings. First, rehearsal can impair a model's adaptability, in sharp contrast to its traditionally recognized benefits. Second, increasing the rehearsal scale does not necessarily improve memory retention. When tasks are similar and noise levels are low, the memory error exhibits a diminishing lower bound. Finally, we validate these insights through numerical simulations and extended analyses on deep neural networks across multiple real-world datasets, revealing statistical patterns of rehearsal mechanisms in continual learning.
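In practice, rehearsal is usually implemented as a small fixed-capacity replay buffer; the "rehearsal scale" the abstract analyzes corresponds to the buffer capacity. Reservoir sampling is one common choice (a generic sketch, not this paper's formulation), because it keeps a bounded, uniform sample of the whole stream:

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal buffer. Reservoir sampling keeps every item
    seen so far in the buffer with equal probability, so old tasks stay
    represented without unbounded memory growth."""
    def __init__(self, capacity, seed=0):
        self.capacity, self.items, self.seen = capacity, [], 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item  # replace a random slot

buf = ReservoirBuffer(capacity=5)
for x in range(100):
    buf.add(x)
```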
Japanese Journal of Religious Studies 52 https://networks.h-net.org/group/announcements/20136307/japanese-journal-religious-studies-52
CFP> Call for Papers: Pacific World Journal https://networks.h-net.org/group/announcements/20142585/call-papers-pacific-world-journal
Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning
JinLi He, Liang Bai, Xian Yang
https://arxiv.org/abs/2602.20796 https://arxiv.org/pdf/2602.20796 https://arxiv.org/html/2602.20796
arXiv:2602.20796v1 Announce Type: new
Abstract: The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. We therefore characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, which previous studies have not fully captured due to their assumption of a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that task sequences with small parameter distances exhibit better generalization and less forgetting under frozen training than under initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
Japanese Religions 46/2 https://networks.h-net.org/group/announcements/20139447/japanese-religions-462
On Electric Vehicle Energy Demand Forecasting and the Effect of Federated Learning
Andreas Tritsarolis, Gil Sampaio, Nikos Pelekis, Yannis Theodoridis
https://arxiv.org/abs/2602.20782 https://arxiv.org/pdf/2602.20782 https://arxiv.org/html/2602.20782
arXiv:2602.20782v1 Announce Type: new
Abstract: The widespread adoption of new energy resources, smart devices, and demand-side management strategies has motivated several analytics operations, from infrastructure load modeling to user behavior profiling. Energy Demand Forecasting (EDF) for Electric Vehicle Supply Equipment (EVSE) is one of the most critical operations for ensuring efficient energy management and sustainability, since it enables utility providers to anticipate energy/power demand, optimize resource allocation, and implement proactive measures to improve grid reliability. However, accurate EDF is a challenging problem due to external factors, such as varying user routines, weather conditions, driving behaviors, unknown state of charge, etc. Furthermore, as concerns and restrictions about privacy and sustainability have grown, training data has become increasingly fragmented, resulting in distributed datasets scattered across different data silos and/or edge devices, calling for federated learning solutions. In this paper, we investigate different well-established time series forecasting methodologies to address the EDF problem, from statistical methods (the ARIMA family) to traditional machine learning models (such as XGBoost) and deep neural networks (GRU and LSTM). We provide an overview of these methods through a performance comparison over four real-world EVSE datasets, evaluated under both centralized and federated learning paradigms, focusing on the trade-offs between forecasting fidelity, privacy preservation, and energy overheads. Our experimental results demonstrate, on the one hand, the superiority of gradient-boosted trees (XGBoost) over statistical and NN-based models in both prediction accuracy and energy efficiency and, on the other hand, that federated learning-enabled models balance these factors, offering a promising direction for decentralized energy demand forecasting.
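The federated side of such a comparison is typically trained with federated averaging (FedAvg), which aggregates client parameters weighted by local sample counts. A minimal aggregation step, with hypothetical client weights standing in for per-silo model parameters:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameter vector by its
    local sample count, so larger data silos contribute proportionally
    more to the global model."""
    sizes = np.asarray(client_sizes, dtype=float)
    W = np.stack(client_weights)                       # (clients, params)
    return (W * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical charging-station silos with different data volumes.
w_global = fed_avg(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    client_sizes=[100, 100, 200],
)
```

Here the 200-sample silo gets half the weight, so the global parameters come out as [0.75, 0.75].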
Applications Now Open! Rare Book School Summer 2026 https://networks.h-net.org/group/announcements/20139027/applications-now-open-rare-book-school-summer-2026
JOURNAL> Japanese Journal of Religious Studies 52 https://networks.h-net.org/group/announcements/20136305/japanese-journal-religious-studies-52
Allometric scaling of brain activity explained by avalanche criticality
Tiago S. A. N. Simões, José S. Andrade Jr., Hans J. Herrmann, Stefano Zapperi, Lucilla de Arcangelis
https://arxiv.org/abs/2512.10834 https://arxiv.org/pdf/2512.10834 https://arxiv.org/html/2512.10834
arXiv:2512.10834v1 Announce Type: new
Abstract: Allometric scaling laws, such as Kleiber's law for metabolic rate, highlight how efficiency emerges with size across living systems. The brain, with its characteristic sublinear scaling of activity, has long posed a puzzle: why do larger brains operate with disproportionately lower firing rates? Here we show that this economy of scale is a universal outcome of avalanche dynamics. We derive analytical scaling laws directly from avalanche statistics, establishing that any system governed by critical avalanches must exhibit sublinear activity-size relations. This theoretical prediction is then verified in integrate-and-fire neuronal networks at criticality and in classical self-organized criticality models, demonstrating that the effect is not model-specific but generic. The predicted exponents align with experimental observations across mammal species, bridging dynamical criticality with the allometry of brain metabolism. Our results reveal avalanche criticality as a fundamental mechanism underlying Kleiber-like scaling in the brain.
Fifty-Year Anniversary of the Nanzan Institute for Religion and Culture https://networks.h-net.org/group/announcements/20136310/fifty-year-anniversary-nanzan-institute-religion-and-culture
SYMPOSIUM> Fifty-Year Anniversary of the Nanzan Institute for Religion and Culture https://networks.h-net.org/group/announcements/20136308/fifty-year-anniversary-nanzan-institute-religion-and-culture
Call for Papers: “A Vision for Liberating Our Democracy” Conference, February 27–28, 2026 https://networks.h-net.org/group/announcements/20137830/call-papers-vision-liberating-our-democracy-conference-february-27-28
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
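The core idea, adding a distance-decaying prior to the attention logits before the softmax, can be sketched in a few lines. Single head, exponential kernel; the kernel family and parameter names are my assumptions, not the paper's exact parameterization:

```python
import numpy as np

def spatial_attention(Q, K, V, dists, lengthscale=1.0, gamma=1.0):
    """Self-attention with a geostatistical bias: logits are the usual
    scaled dot products minus a distance-decay penalty, so spatially
    proximal sensors attend to each other more strongly."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) - gamma * dists / lengthscale
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 4, 8
Q = K = V = rng.standard_normal((n, d))
# Pairwise distances for four sensors placed on a line.
dists = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
out, A = spatial_attention(Q, K, V, dists)
```

Making `lengthscale` and `gamma` learnable is what would let training recover the spatial decay of the process, the "Deep Variography" effect the abstract describes.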
JOURNAL> Journal of Global Buddhism v 26, n 1 https://networks.h-net.org/group/announcements/20136290/journal-journal-global-buddhism-v-26-n-1
POSTDOC> 2026-2028 Ho Center for Buddhist Studies at Stanford Postdoctoral Fellowship https://networks.h-net.org/group/announcements/20138845/2026-2028-ho-center-buddhist-studies-stanford-postdoctoral-fellowship
LECTURE> Steven Heine on Dōgen's Approach to Personal and Political Upheaval - Wed, Jan 21 4-5:30 PT https://networks.h-net.org/group/announcements/20137770/steven-heine-dogens-approach-personal-and-political-uphea…
Short-Term Research Fellowships at Haverford College Quaker & Special Collections https://networks.h-net.org/group/announcements/20134912/short-term-research-fellowships-haverford-college-quaker-special
Webinar: More Than Just Choice: How Religion, Race, and Identity Influence Food Decisions #acrel https://networks.h-net.org/group/annou
Ecclesia laborans: Reproductive Labor and the Hidden Work of Liturgical Performance, 1350–1600 (Berlin, 6-7 March 2026) https://networks.h-net.org/group/announcements/20141272/ecclesia-laborans-reproductive-labor-and-hidden-wo…
Short-Term Research Fellowships at Haverford College Quaker & Special Collections https://networks.h-net.org/group/announcements/20134911/short-term-research-fellowships-haverford-college-quaker-special
Online Lecture Series “Interdisciplinary Thanatology” (German) https://networks.h-net.org/group/announcements/20131233/online-lecture-series-interdisciplinary-thanatology-german
Webinar: More Than Just Choice: How Religion, Race, and Identity Influence Food Decisions https://networks.h-net.org/group/announcements/20134839/webinar-more-just-choice-how-religion-race-and-identity-influence-food
University of Michigan Center for Southeast Asian Studies Lecture - Pain and Buddhism in Thailand: How does Bodily Experience affect Religious Worlds? (HYBRID, Feb. 20) https://networks.h-net.org/group/announcements/201408…
PROGRAM> Woodenfish Buddhist Monastic Life Program 2026 https://networks.h-net.org/group/announcements/20140755/woodenfish-buddhist-monastic-life-program-2026
Webinar: More Than Just Choice: How Religion, Race, and Identity Influence Food Decisions https://networks.h-net.org/group/announcements/20134842/webinar-more-just-choice-how-religion-race-and-identity-influence-food