Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@hex@kolektiva.social
2026-02-25 06:06:41

This is as good a time as any for a thought experiment.
You're in Nazi Germany. You know about the camps, you know what they do, you see the ash fall, you smell it. People who resist alone are killed, some are sent to the camps too. You're afraid to even talk to people about it for fear that they'll turn you in.
You think back to when the camps were being built. You had all the warning signs, but you didn't know how to interpret them. You couldn't believe it would happen. You thought you'd have a chance to vote him out. You thought there might be another way. You thought maybe things would turn out differently if you just sat tight, kept your head down, kept yourself safe.
You see a family being dragged from their home. You know they will be killed. You want to fight, not just for them but for yourself. You opposed Hitler, and at any point you know you could be on the list... Even if you do nothing.
You wish you could rise up, shoot the SS, open the gates, fight it all. You know you aren't alone, but you don't know how to connect with the people who want the same thing.
Using the knowledge we have now, what should you have done in the preceding months and years to connect, to build a community that would open up all paths of resistance?
There were people who resisted. We know it wasn't enough.
Gun laws in Nazi Germany were very similar to US laws in that Nazis were largely free to own guns and everyone else was not. Unlike the US, where "others" have historically been controlled using the fear that they might be randomly executed, Germany codified it outright. Red flag laws were one more step in the US toward that codification, and there will be more.
When Nazis were taking away those guns, the social networks didn't exist to make resistance possible for most folks. But some Jews were able to resist.
It wasn't the guns that made the Warsaw Ghetto Uprising possible, though they definitely helped. The Warsaw Ghetto Uprising was made possible by labor organizing in the preceding years.
If there were more uprisings like that, the Holocaust could have been stopped if not prevented. Social networks make resistance possible. Guns are only useful tools to resist authoritarianism *after* you build a community able to support that resistance, and they are only one of many tools made useful by that community.
Getting guns is easy, and not always necessary. Building community is hard. Guns won't keep you safe. Community will.
Single acts of resistance may slow the machine down, but to actually bring down a monster you need to be able to attack more than once. You need a society of resistance. If you are afraid now, build that. Talk to people while it's still safe to do so. Ask them where their red line is. Talk to neighbors. Figure out your network.
Take the steps you need now to keep your neighbors safe, to keep yourself safe.
#USPol

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:21

On Electric Vehicle Energy Demand Forecasting and the Effect of Federated Learning
Andreas Tritsarolis, Gil Sampaio, Nikos Pelekis, Yannis Theodoridis
arxiv.org/abs/2602.20782 arxiv.org/pdf/2602.20782 arxiv.org/html/2602.20782
arXiv:2602.20782v1 Announce Type: new
Abstract: The widespread adoption of new energy resources, smart devices, and demand side management strategies has motivated several analytics operations, from infrastructure load modeling to user behavior profiling. Energy Demand Forecasting (EDF) of Electric Vehicle Supply Equipment (EVSEs) is one of the most critical operations for ensuring efficient energy management and sustainability, since it enables utility providers to anticipate energy/power demand, optimize resource allocation, and implement proactive measures to improve grid reliability. However, accurate EDF is a challenging problem due to external factors such as varying user routines, weather conditions, driving behaviors, unknown state of charge, etc. Furthermore, as concerns and restrictions about privacy and sustainability have grown, training data has become increasingly fragmented, resulting in distributed datasets scattered across different data silos and/or edge devices, calling for federated learning solutions. In this paper, we investigate different well-established time series forecasting methodologies to address the EDF problem, from statistical methods (the ARIMA family) to traditional machine learning models (such as XGBoost) and deep neural networks (GRU and LSTM). We provide an overview of these methods through a performance comparison over four real-world EVSE datasets, evaluated under both centralized and federated learning paradigms, focusing on the trade-offs between forecasting fidelity, privacy preservation, and energy overheads. Our experimental results demonstrate, on the one hand, the superiority of gradient boosted trees (XGBoost) over statistical and NN-based models in both prediction accuracy and energy efficiency and, on the other hand, that Federated Learning-enabled models balance these factors, offering a promising direction for decentralized energy demand forecasting.
toXiv_bot_toot
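The abstract above compares forecasting baselines on EVSE demand series under an error metric. A minimal sketch of that evaluation pattern, using only synthetic data and two naive baselines (persistence and seasonal-naive) rather than the paper's ARIMA/XGBoost/GRU/LSTM models — every series, split, and number here is invented for illustration:

```python
# Toy EDF-style evaluation: score two naive forecasters on a synthetic
# hourly EV-charging demand series with a daily cycle. This is NOT the
# paper's pipeline or data; it only illustrates the comparison setup.
import math
import random

random.seed(0)

# Synthetic hourly demand (kWh): daily sinusoidal cycle plus Gaussian noise.
hours = 24 * 28  # four weeks
demand = [5 + 3 * math.sin(2 * math.pi * (t % 24) / 24) + random.gauss(0, 0.5)
          for t in range(hours)]

split = 24 * 21          # train on three weeks, test on the last week
test = demand[split:]

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

# Baseline 1: persistence -- predict the previous hour's demand.
persist_pred = demand[split - 1:-1]

# Baseline 2: seasonal naive -- predict the demand 24 hours earlier.
seasonal_pred = demand[split - 24:split - 24 + len(test)]

mae_persist = mae(test, persist_pred)
mae_seasonal = mae(test, seasonal_pred)
print(round(mae_persist, 3), round(mae_seasonal, 3))
```

Because the series has a strong daily cycle, the seasonal-naive forecast should beat persistence; a stronger learner (as the paper reports for XGBoost) would be evaluated in exactly the same loop.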

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:41:21

Localized Dynamics-Aware Domain Adaptation for Off-Dynamics Offline Reinforcement Learning
Zhangjie Xia, Yu Yang, Pan Xu
arxiv.org/abs/2602.21072 arxiv.org/pdf/2602.21072 arxiv.org/html/2602.21072
arXiv:2602.21072v1 Announce Type: new
Abstract: Off-dynamics offline reinforcement learning (RL) aims to learn a policy for a target domain using limited target data and abundant source data collected under different transition dynamics. Existing methods typically address dynamics mismatch either globally over the state space or via pointwise data filtering; these approaches can miss localized cross-domain similarities or incur high computational cost. We propose Localized Dynamics-Aware Domain Adaptation (LoDADA), which exploits localized dynamics mismatch to better reuse source data. LoDADA clusters transitions from source and target datasets and estimates cluster-level dynamics discrepancy via domain discrimination. Source transitions from clusters with small discrepancy are retained, while those from clusters with large discrepancy are filtered out. This yields a fine-grained and scalable data selection strategy that avoids overly coarse global assumptions and expensive per-sample filtering. We provide theoretical insights and extensive experiments across environments with diverse global and local dynamics shifts. Results show that LoDADA consistently outperforms state-of-the-art off-dynamics offline RL methods by better leveraging localized distribution mismatch.
toXiv_bot_toot
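The core idea in the abstract — cluster transitions, estimate a per-cluster dynamics gap, and keep source data only from low-gap clusters — can be sketched in a few lines. Note that LoDADA uses learned clustering and a domain discriminator; the uniform state binning, the mean-drift gap statistic, the toy dynamics, and the 0.05 threshold below are all simplifications invented for this sketch:

```python
# Sketch of cluster-level dynamics filtering: keep source transitions only
# from state regions whose dynamics match the target domain. All dynamics
# and thresholds are invented toy values, not LoDADA's actual components.
import random

random.seed(1)

def step(s, shifted):
    # Toy 1-D dynamics: target drifts +0.1 everywhere; the source domain
    # matches it for s < 0.5 but drifts -0.3 in the upper half of the space.
    drift = 0.1 if (not shifted or s < 0.5) else -0.3
    return s + drift + random.gauss(0, 0.02)

source = [(s, step(s, shifted=True)) for s in [random.random() for _ in range(2000)]]
target = [(s, step(s, shifted=False)) for s in [random.random() for _ in range(200)]]

def bucket_drift(data, n_bins=4):
    # Mean next-state delta per uniform state bucket ("cluster").
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for s, s_next in data:
        b = min(int(s * n_bins), n_bins - 1)
        sums[b] += s_next - s
        counts[b] += 1
    return [sums[b] / counts[b] if counts[b] else 0.0 for b in range(n_bins)]

src_drift, tgt_drift = bucket_drift(source), bucket_drift(target)
gap = [abs(a - b) for a, b in zip(src_drift, tgt_drift)]
keep = [b for b in range(4) if gap[b] < 0.05]   # low-discrepancy buckets
kept = [(s, sn) for s, sn in source if min(int(s * 4), 3) in keep]
print(keep, len(kept))
```

The filtering is coarser than per-sample importance weighting but cheaper: one gap statistic per cluster decides the fate of all transitions in it, which is the trade-off the abstract highlights.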

@grumpybozo@toad.social
2026-03-20 13:18:44
Content warning: #USpol #MIPol

Glad to see someone major endorsing McMorrow.
I got the sense that el-Sayed jumped in because he saw a lane for a non-woman progressive candidate, not because he wants to be a Senator or because he believes he’d be good at it. Stevens is a neolib disaster. McMorrow is an experienced *legislator* with the right attitude for the time. @…

@leftsidestory@mstdn.social
2026-03-07 11:00:02

Life Treats 🥇
Life Rewards 🥇
📷 Nikon FE
🎞️ Ilford HP5 Plus 400, expired 1993
If you like my work, buy me a coffee from PayPal #filmphotography

Ilford HP5 Plus 400 (FF)

Image 3 – Alt Text (English)

A black‑and‑white photo of a tall, symmetrical building with two towers, each topped with an antenna. The building has a grid pattern of windows. In the foreground, out‑of‑focus plants or branches partially block the view. The sky is clear. The contrast highlights the straight architectural lines behind the softer shapes of the foliage.

Image 3 – Alt Text (Chinese)

A black-and-white photo of a tall, symmetrical building with an antenna-topped tower on each side. The facade is a grid of windows. Out-of-focus plants or branches in the foreground partially block the view. The sky is clear. The image presents…
Ilford HP5 Plus 400 (FF)

Image 1 – Alt Text (English)

A black‑and‑white photo showing a tall building seen through a chain‑link fence. The fence is close to the camera and has a bent or damaged opening in the center. Through this opening, a multi‑story building with many windows is visible in the background. The building appears rectangular and modern. The angle is slightly low, making the building look taller. The focus is on the fence, while the building behind it is somewhat softer in deta…
Ilford HP5 Plus 400 (FF)

Image 2 – Alt Text (English)

A black‑and‑white close‑up of a vine with several leaves in the foreground. The vine is sharply in focus. Behind it, out of focus, is a multi‑story building and a roadway or overpass with vehicles on it. The background appears urban, but details are softened by shallow depth of field. The contrast highlights the difference between the natural vine and the built environment.

Image 2 – Alt Text (Chinese)

A black-and-white close-up photo; the foreground is a leafy vine, sharply in focus. The background is blurred, showing a multi…
Ilford HP5 Plus 400 (6x6)

Image 4 – Alt Text (English)

A black‑and‑white photo of a concrete wall with a tiled base along a sidewalk. Shadows of tree branches and leaves stretch across the wall and ground, creating irregular patterns. The shadows are long, suggesting low sunlight. The scene contains only the wall, sidewalk, and shadows, with no people present.

Image 4 – Alt Text (Chinese)

A black-and-white photo of a concrete wall with a tiled base beside a sidewalk. Shadows of tree branches and leaves fall across the wall and the ground, forming irregular patterns. The shadows are long, indicating a low light source. No people appear in the frame.

@arXiv_physicsfludyn_bot@mastoxiv.page
2026-02-27 08:29:00

On the spatial structure and intermittency of soot in a lab-scale gas turbine combustor: Insights from large-eddy simulations
Leonardo Pachano, Daniel Mira, Abhijit Kalbhor, Jeroen van Oijen
arxiv.org/abs/2602.23155 arxiv.org/pdf/2602.23155 arxiv.org/html/2602.23155
arXiv:2602.23155v1 Announce Type: new
Abstract: This work presents a numerical investigation of soot formation in the Cambridge lab-scale gas turbine combustor. Large-eddy simulations (LES) of a swirl-stabilized ethylene flame are performed using the flamelet generated manifold method coupled with a discrete sectional model to account for soot formation, growth, and oxidation. The study aims to elucidate the mechanism governing the spatial structure and intermittency of soot, supported by comparisons with experimental data. The predicted soot distribution agrees well with measurements, with peak concentrations near the bluff body. Flow recirculation is identified as the key mechanism driving soot accumulation in fuel-rich regions, where surface reactions dominate soot mass growth. Soot intermittency arises from fluctuations in the flow field driven by interactions between the flame front and the recirculation vortex. Two soot modeling approaches are evaluated, differing in their treatment of soot model quantities: the first approach employs on-the-fly computation of source terms (FGM-C), while the second uses fully pre-tabulated source terms (FGM-T). Their predictive performance and computational cost are compared in the context of unsteady, sooting flames in swirl-stabilized combustors.
toXiv_bot_toot

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:09:47

Not All Subjectivity Is the Same! Defining Desiderata for the Evaluation of Subjectivity in NLP
Urja Khurana, Michiel van der Meer, Enrico Liscio, Antske Fokkens, Pradeep K. Murukannaiah
arxiv.org/abs/2603.28351 arxiv.org/pdf/2603.28351 arxiv.org/html/2603.28351
arXiv:2603.28351v1 Announce Type: new
Abstract: Subjective judgments are part of several NLP datasets and recent work is increasingly prioritizing models whose outputs reflect this diversity of perspectives. Such responses allow us to shed light on minority voices, which are frequently marginalized or obscured by dominant perspectives. It remains a question whether our evaluation practices align with these models' objectives. This position paper proposes seven evaluation desiderata for subjectivity-sensitive models, rooted in how subjectivity is represented in NLP data and models. The desiderata are constructed in a top-down approach, keeping in mind the user-centric impact of such models. We scan the experimental setup of 60 papers and show that various aspects of subjectivity are still understudied: the distinction between ambiguous and polyphonic input, whether subjectivity is effectively expressed to the user, and a lack of interplay between different desiderata, amongst other gaps.
toXiv_bot_toot

@arXiv_qbioPE_bot@mastoxiv.page
2026-03-27 08:09:37

Modeling the mutational dynamics of very short tandem repeats
Amos Onn (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig), Tzipy Marx (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Liming Tao (Cellular Tissue Genomics, Genentech), Tamir Biezuner (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Ehud Shapiro (Department of Computer Science and Applied Mathematics, Weizmann Institute of Science), Christoph A. Klein (Chair of Experimental Medicine and Therapy Research, University of Regensburg, Fraunhofer Institute for Toxicology and Experimental Medicine Regensburg), Peter F. Stadler (Bioinformatics Group, Faculty of Mathematics and Computer Science, and Interdisciplinary Center for Bioinformatics, University of Leipzig, Max Planck Institute for Mathematics in the Sciences, Institute for Theoretical Chemistry, University of Vienna, Facultad de Ciencias, Universidad Nacional de Colombia, Center for non-coding RNA in Technology and Health, University of Copenhagen, Santa Fe Institute)
arxiv.org/abs/2603.25628 arxiv.org/pdf/2603.25628 arxiv.org/html/2603.25628
arXiv:2603.25628v1 Announce Type: new
Abstract: Short tandem repeats (STRs) are low-entropy regions in the genome, consisting of a short (1-6 bp) unit that is consecutively repeated multiple times. They are known for high mutational instability due to so-called stutter mutations, in which the number of units in the run increases or decreases. In particular, STRs with a repeat unit length of 1-2 bp are prone to mutate even within several cell divisions. The extremely rapid accumulation of variation makes them interesting phylogenetic markers for retrospective single-cell lineage reconstruction. Here we model their mutational dynamics at the level of individual repeat unit type and then aggregate length variations over many STR loci with the aim of obtaining a very fast "molecular clock". We calibrate our model on several datasets with known lineage structure prepared from cultured cells. We find that the mutational dynamics of STRs are reasonably consistent for a given cell line, but vary among different ones. This suggests that the dynamics are not entirely explained by mutations in caretaker genes; rather, various other factors play a role -- possibly tissue origin and differentiation state. Further data and research are necessary to assess their relative effects.
toXiv_bot_toot
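The "molecular clock" intuition in the abstract — per-division stutter mutations at many STR loci make aggregate length divergence grow with lineage depth — can be illustrated with a tiny random-walk simulation. The per-division mutation rate, locus count, and starting repeat number below are invented illustration values, not the paper's calibrated estimates:

```python
# Toy stutter-mutation clock: each STR locus may gain or lose one repeat
# unit per cell division, so mean length divergence from an ancestor grows
# with the number of divisions. MU and N_LOCI are invented toy values.
import random

random.seed(42)

MU = 0.05     # probability a locus stutters (+/- one unit) per division
N_LOCI = 500  # aggregate over many STR loci, as the abstract describes

def divide(repeats):
    # One cell division: each locus independently may stutter up or down.
    return [r + random.choice([-1, 1]) if random.random() < MU else r
            for r in repeats]

def lineage(repeats, n_divisions):
    for _ in range(n_divisions):
        repeats = divide(repeats)
    return repeats

root = [20] * N_LOCI
near = lineage(list(root), 5)    # few divisions since the ancestor
far = lineage(list(root), 50)    # many divisions since the ancestor

def dist(a, b):
    # Mean absolute repeat-length difference across loci.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

print(dist(root, near), dist(root, far))
```

Averaging over many loci is what makes the clock usable: a single locus is far too noisy, but the mean divergence tracks lineage depth, which is the basis for the retrospective lineage reconstruction the abstract mentions.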

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:08

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
arxiv.org/abs/2602.16954 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
arxiv.org/abs/2602.17050 mastoxiv.page/@arXiv_csLG_bot/
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
arxiv.org/abs/2602.17550 mastoxiv.page/@arXiv_csLG_bot/
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
arxiv.org/abs/2602.17554 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
arxiv.org/abs/2602.17646 mastoxiv.page/@arXiv_csLG_bot/
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
arxiv.org/abs/2602.19654 mastoxiv.page/@arXiv_csLG_bot/
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
arxiv.org/abs/2405.10385 mastoxiv.page/@arXiv_csCL_bot/
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
arxiv.org/abs/2406.10281 mastoxiv.page/@arXiv_csCR_bot/
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
arxiv.org/abs/2407.00575 mastoxiv.page/@arXiv_csMA_bot/
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
arxiv.org/abs/2407.12506 mastoxiv.page/@arXiv_quantph_b
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
arxiv.org/abs/2410.16106 mastoxiv.page/@arXiv_statML_bo
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
arxiv.org/abs/2412.01283 mastoxiv.page/@arXiv_mathRT_bo
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
arxiv.org/abs/2502.17457 mastoxiv.page/@arXiv_eessSP_bo
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
arxiv.org/abs/2503.04071 mastoxiv.page/@arXiv_statML_bo
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
arxiv.org/abs/2503.06437 mastoxiv.page/@arXiv_csCV_bot/
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
arxiv.org/abs/2504.18310 mastoxiv.page/@arXiv_econGN_bo
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
arxiv.org/abs/2505.06646 mastoxiv.page/@arXiv_eessIV_bo
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
arxiv.org/abs/2505.08125 mastoxiv.page/@arXiv_statML_bo
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
arxiv.org/abs/2505.17645 mastoxiv.page/@arXiv_csCV_bot/
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
arxiv.org/abs/2505.22554 mastoxiv.page/@arXiv_statML_bo
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
arxiv.org/abs/2506.01666 mastoxiv.page/@arXiv_quantph_b
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:07:47

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/6]:
- Performance Asymmetry in Model-Based Reinforcement Learning
Jing Yu Lim, Rushi Shah, Zarif Ikram, Samson Yu, Haozhe Ma, Tze-Yun Leong, Dianbo Liu
arxiv.org/abs/2505.19698 mastoxiv.page/@arXiv_csLG_bot/
- Towards Robust Real-World Multivariate Time Series Forecasting: A Unified Framework for Dependenc...
Jinkwan Jang, Hyungjin Park, Jinmyeong Choi, Taesup Kim
arxiv.org/abs/2506.08660 mastoxiv.page/@arXiv_csLG_bot/
- Wasserstein Barycenter Soft Actor-Critic
Zahra Shahrooei, Ali Baheri
arxiv.org/abs/2506.10167 mastoxiv.page/@arXiv_csLG_bot/
- Foundation Models for Causal Inference via Prior-Data Fitted Networks
Yuchen Ma, Dennis Frauen, Emil Javurek, Stefan Feuerriegel
arxiv.org/abs/2506.10914 mastoxiv.page/@arXiv_csLG_bot/
- FREQuency ATTribution: benchmarking frequency-based occlusion for time series data
Dominique Mercier, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2506.18481 mastoxiv.page/@arXiv_csLG_bot/
- Complexity-aware fine-tuning
Andrey Goncharov, Daniil Vyazhev, Petr Sychev, Edvard Khalafyan, Alexey Zaytsev
arxiv.org/abs/2506.21220 mastoxiv.page/@arXiv_csLG_bot/
- Transfer Learning in Infinite Width Feature Learning Networks
Clarissa Lauditi, Blake Bordelon, Cengiz Pehlevan
arxiv.org/abs/2507.04448 mastoxiv.page/@arXiv_csLG_bot/
- A hierarchy tree data structure for behavior-based user segment representation
Liu, Kang, Iyer, Malik, Li, Wang, Lu, Zhao, Wang, Liu, Liu, Liang, Yu
arxiv.org/abs/2508.01115 mastoxiv.page/@arXiv_csLG_bot/
- One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Lea...
Thanh Nguyen, Chang D. Yoo
arxiv.org/abs/2508.13904 mastoxiv.page/@arXiv_csLG_bot/
- Uncertainty Propagation Networks for Neural Ordinary Differential Equations
Hadi Jahanshahi, Zheng H. Zhu
arxiv.org/abs/2508.16815 mastoxiv.page/@arXiv_csLG_bot/
- Learning Unified Representations from Heterogeneous Data for Robust Heart Rate Modeling
Zhengdong Huang, Zicheng Xie, Wentao Tian, Jingyu Liu, Lunhong Dong, Peng Yang
arxiv.org/abs/2508.21785 mastoxiv.page/@arXiv_csLG_bot/
- Monte Carlo Tree Diffusion with Multiple Experts for Protein Design
Liu, Cao, Jiang, Luo, Duan, Wang, Sosnick, Xu, Stevens
arxiv.org/abs/2509.15796 mastoxiv.page/@arXiv_csLG_bot/
- From Samples to Scenarios: A New Paradigm for Probabilistic Forecasting
Xilin Dai, Zhijian Xu, Wanxu Cai, Qiang Xu
arxiv.org/abs/2509.19975 mastoxiv.page/@arXiv_csLG_bot/
- Why High-rank Neural Networks Generalize?: An Algebraic Framework with RKHSs
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
arxiv.org/abs/2509.21895 mastoxiv.page/@arXiv_csLG_bot/
- From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Davide Tenedini, Riccardo Zamboni, Mirco Mutti, Marcello Restelli
arxiv.org/abs/2509.22566 mastoxiv.page/@arXiv_csLG_bot/
- RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility
Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang
arxiv.org/abs/2509.23115 mastoxiv.page/@arXiv_csLG_bot/
- Polychromic Objectives for Reinforcement Learning
Jubayer Ibn Hamid, Ifdita Hasan Orney, Ellen Xu, Chelsea Finn, Dorsa Sadigh
arxiv.org/abs/2509.25424 mastoxiv.page/@arXiv_csLG_bot/
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
Siddarth Venkatraman, et al.
arxiv.org/abs/2509.26626 mastoxiv.page/@arXiv_csLG_bot/
- Cautious Weight Decay
Chen, Li, Liang, Su, Xie, Pierse, Liang, Lao, Liu
arxiv.org/abs/2510.12402 mastoxiv.page/@arXiv_csLG_bot/
- TeamFormer: Shallow Parallel Transformers with Progressive Approximation
Wei Wang, Xiao-Yong Wei, Qing Li
arxiv.org/abs/2510.15425 mastoxiv.page/@arXiv_csLG_bot/
- Latent-Augmented Discrete Diffusion Models
Dario Shariatian, Alain Durmus, Umut Simsekli, Stefano Peluchetti
arxiv.org/abs/2510.18114 mastoxiv.page/@arXiv_csLG_bot/
- Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Method...
Mary E. An, Paul Griffin, Jonathan G. Stine, Ramakrishna Balakrishnan, Soundar Kumara
arxiv.org/abs/2510.22293 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot