Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csDC_bot@mastoxiv.page
2025-07-02 08:02:00

Real-Time In-Network Machine Learning on P4-Programmable FPGA SmartNICs with Fixed-Point Arithmetic and Taylor
Mohammad Firas Sada, John J. Graham, Mahidhar Tatineni, Dmitry Mishin, Thomas A. DeFanti, Frank Würthwein
arxiv.org/abs/2507.00428

@arXiv_statME_bot@mastoxiv.page
2025-07-01 11:11:13

Auto-Doubly Robust Estimation of Causal Effects on a Network
Jizhou Liu, Dake Zhang, Eric J. Tchetgen Tchetgen
arxiv.org/abs/2506.23332

@arXiv_csNI_bot@mastoxiv.page
2025-07-02 08:28:00

Plan-Based Scalable Online Virtual Network Embedding
Oleg Kolosov, David Breitgand, Dean H. Lorenz, Gala Yadgar
arxiv.org/abs/2507.00237

@arXiv_csRO_bot@mastoxiv.page
2025-06-02 10:25:13

This arxiv.org/abs/2505.15304 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_qbioPE_bot@mastoxiv.page
2025-08-01 08:12:51

Phylogenetic network models as graphical models
Seth Sullivant
arxiv.org/abs/2507.23056 arxiv.org/pdf/2507.23056

@arXiv_csCR_bot@mastoxiv.page
2025-06-02 09:55:13

This arxiv.org/abs/2311.16139 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csSI_bot@mastoxiv.page
2025-07-01 08:52:23

Community-Based Efficient Algorithms for User-Driven Competitive Influence Maximization in Social Networks
Rahul Kumar Gautam
arxiv.org/abs/2506.23179

@arXiv_csIT_bot@mastoxiv.page
2025-07-01 10:26:23

Bridging Physical and Digital Worlds: Embodied Large AI for Future Wireless Systems
Xinquan Wang, Fenghao Zhu, Zhaohui Yang, Chongwen Huang, Xiaoming Chen, Zhaoyang Zhang, Sami Muhaidat, Mérouane Debbah
arxiv.org/abs/2506.24009

@arXiv_eessIV_bot@mastoxiv.page
2025-07-02 10:00:00

Automated anatomy-based post-processing reduces false positives and improves interpretability of deep learning intracranial aneurysm detection
Jisoo Kim, Chu-Hsuan Lin, Alberto Ceballos-Arroyo, Ping Liu, Huaizu Jiang, Shrikanth Yadav, Qi Wan, Lei Qin, Geoffrey S Young
arxiv.org/abs/2507.00832

@arXiv_csIR_bot@mastoxiv.page
2025-06-02 09:58:21

This arxiv.org/abs/2410.13230 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csLG_bot@mastoxiv.page
2025-07-01 08:19:33

Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar Hand Gesture Recognition
Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei, Robert Wille
arxiv.org/abs/2506.22443

@arXiv_quantph_bot@mastoxiv.page
2025-06-02 10:30:23

This arxiv.org/abs/2409.15683 has been replaced.
initial toot: mastoxiv.page/@arXiv_qu…

@arXiv_statME_bot@mastoxiv.page
2025-06-02 07:39:16

A2 Copula-Driven Spatial Bayesian Neural Network For Modeling Non-Gaussian Dependence: A Simulation Study
Agnideep Aich, Sameera Hewage, Md Monzur Murshed, Ashit Baran Aich, Amanda Mayeaux, Asim K. Dey, Kumer P. Das, Bruce Wade
arxiv.org/abs/2505.24006

@arXiv_csNI_bot@mastoxiv.page
2025-07-01 11:06:53

Learning Constraints Directly from Network Data
Hongyu Hè, Minhao Jin, Maria Apostolaki
arxiv.org/abs/2506.23964 ar…

@arXiv_physicssocph_bot@mastoxiv.page
2025-06-30 07:54:59

Adaptive network dynamics and behavioral contagion in multi-state drug use propagation
Hsuan-Wei Lee, Yi-Hsuan Huang, Nishant Malik
arxiv.org/abs/2506.21766

@arXiv_csCE_bot@mastoxiv.page
2025-07-31 07:40:11

A holomorphic Kolmogorov-Arnold network framework for solving elliptic problems on arbitrary 2D domains
Matteo Calafà, Tito Andriollo, Allan P. Engsig-Karup, Cheol-Ho Jeong
arxiv.org/abs/2507.22678

@arXiv_csSD_bot@mastoxiv.page
2025-07-01 10:00:23

From Large-scale Audio Tagging to Real-Time Explainable Emergency Vehicle Sirens Detection
Stefano Giacomelli, Marco Giordano, Claudia Rinaldi, Fabio Graziosi
arxiv.org/abs/2506.23437

@arXiv_statML_bot@mastoxiv.page
2025-07-30 08:23:32

Graph neural networks for residential location choice: connection to classical logit models
Zhanhong Cheng, Lingqian Hu, Yuheng Bu, Yuqi Zhou, Shenhao Wang
arxiv.org/abs/2507.21334

@lysander07@sigmoid.social
2025-05-28 05:10:40

Last week, we continued our #ISE2025 lecture on distributional semantics with the introduction of neural language models (NLMs) and compared them to traditional statistical n-gram models.
Benefits of NLMs:
- Capturing Long-Range Dependencies
- Computational and Statistical Tractability
- Improved Generalisation
- Higher Accuracy
@…

The image illustrates the architecture of a Neural Language Model, specifically focusing on Word Vectors II - Neural Language Models. It is part of a presentation on Natural Language Processing, created by the Karlsruhe Institute of Technology (KIT) and FIZ Karlsruhe, as indicated by their logos in the top right corner.

The diagram shows a neural network processing an input word embedding, represented by the phrase "to be or not to." The input is transformed into a d-sized vector representatio…
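A minimal sketch of the contrast (illustrative toy code of mine, not from the lecture slides): an n-gram model estimates P(next word | context) from raw co-occurrence counts, while a neural language model first maps each word to a trainable d-sized vector, which is what buys the generalisation and long-range context listed above.
from collections import Counter
import numpy as np
corpus = "to be or not to be".split()
# Statistical n-gram model (n=2): P(w2 | w1) from raw counts.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])
p_be_given_to = bigrams[("to", "be")] / unigrams["to"]
# Neural LM input layer: each word becomes a learned d-sized vector,
# so similar words share statistical strength instead of separate counts.
rng = np.random.default_rng(0)
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
E = rng.normal(size=(len(vocab), 8))  # embedding matrix (trainable)
context = E[[idx[w] for w in ("to", "be", "or", "not")]].mean(axis=0)
# A feed-forward or recurrent layer would map `context` to next-word logits.
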
@arXiv_physicsbioph_bot@mastoxiv.page
2025-07-02 08:38:40

Topological weight and structural diversity of polydisperse chromatin loop networks
Andrea Bonato, Enrico Carlon, Sergey Kitaev, Davide Marenduzzo, Enzo Orlandini
arxiv.org/abs/2507.00520

@arXiv_eessSP_bot@mastoxiv.page
2025-06-30 09:41:40

Learning-Based Hybrid Neural Receiver for 6G-V2X Communications
Osama Saleem, Mohammed Alfaqawi, Pierre Merdrignac, Abdelaziz Bensrhair, Soheyb Ribouh
arxiv.org/abs/2506.21983

@arXiv_hepph_bot@mastoxiv.page
2025-07-30 10:13:51

Neural network extraction of chromo-electric and chromo-magnetic gluon masses
Jie Mei, Lingxiao Wang, Mei Huang
arxiv.org/abs/2507.22012 ar…

@arXiv_csCR_bot@mastoxiv.page
2025-06-02 09:57:39

This arxiv.org/abs/2410.00059 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csNI_bot@mastoxiv.page
2025-06-02 07:20:18

Design and Analysis of Power Consumption Models for Open-RAN Architectures
Urooj Tariq, Rishu Raj, Dan Kilper
arxiv.org/abs/2505.24552

@arXiv_csCV_bot@mastoxiv.page
2025-07-28 10:15:21

Fast Learning of Non-Cooperative Spacecraft 3D Models through Primitive Initialization
Pol Francesch Huc, Emily Bates, Simone D'Amico
arxiv.org/abs/2507.19459

@arXiv_physicsgeoph_bot@mastoxiv.page
2025-07-01 09:02:33

Physics-informed conditional diffusion model for generalizable elastic wave-mode separation
Shijun Cheng, Xinru Mu, Tariq Alkhalifah
arxiv.org/abs/2506.23007

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 09:35:31

Parametrized Multi-Agent Routing via Deep Attention Models
Salar Basiri, Dhananjay Tiwari, Srinivasa M. Salapaka
arxiv.org/abs/2507.22338 a…

@arXiv_eessAS_bot@mastoxiv.page
2025-07-01 08:59:53

Less is More: Data Curation Matters in Scaling Speech Enhancement
Chenda Li, Wangyou Zhang, Wei Wang, Robin Scheibler, Kohei Saijo, Samuele Cornell, Yihui Fu, Marvin Sach, Zhaoheng Ni, Anurag Kumar, Tim Fingscheidt, Shinji Watanabe, Yanmin Qian
arxiv.org/abs/2506.23859

@arXiv_csDC_bot@mastoxiv.page
2025-07-02 07:50:30

CrossPipe: Towards Optimal Pipeline Schedules for Cross-Datacenter Training
Tiancheng Chen, Ales Kubicek, Langwen Huang, Torsten Hoefler
arxiv.org/abs/2507.00217

@pbloem@sigmoid.social
2025-06-26 10:41:24

New pre-print! #ai
**Universal pre-training by iterated random computation.**
⌨️🐒 A monkey behind a typewriter will produce the collected works of Shakespeare eventually.
💻🐒 But what if we put a monkey behind a computer?
⌨️🐒 needs to be lucky enough to type all characters of all of Shakespeare correctly. 💻🐒 only needs to be lucky enough to type a program for Shakespeare.

A table showing one string of random characters next to an emoji of a monkey next to a keyboard (representing a typewriter). Below it, three strings, also of random characters, but with more structure. Some characters and n-grams repeat. Next to these three strings is an emoji of a monkey next to a laptop computer. The caption reads: (⌨️🐒) A string of randomly sampled characters. (💻🐒) The result of passing this string through three randomly initialized neural network models. The latter data is …
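A toy sketch of the idea (my own illustration, not the paper's code): push a uniformly random string through a randomly initialized, completely untrained recurrent map and sample from its outputs; the result is still random, but the network imprints structure, repeats, and n-grams on it.
import numpy as np
rng = np.random.default_rng(42)
chars = "abcdefghijklmnopqrstuvwxyz "
V, T, d = len(chars), 80, 32
x = rng.integers(V, size=T)  # the typewriter monkey: uniform random characters
# The computer monkey: a random, untrained recurrent network applied to x.
W_in = rng.normal(0, 1.0, (V, d))
W_h = rng.normal(0, 1.0 / np.sqrt(d), (d, d))
W_out = rng.normal(0, 1.0, (d, V))
h, out = np.zeros(d), []
for t in range(T):
    h = np.tanh(W_in[x[t]] + W_h @ h)  # recurrence is what induces structure
    logits = W_out.T @ h
    p = np.exp(logits - logits.max())
    out.append(rng.choice(V, p=p / p.sum()))
print("".join(chars[i] for i in x))    # unstructured input
print("".join(chars[i] for i in out))  # more structured output
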
@arXiv_csIT_bot@mastoxiv.page
2025-07-02 08:01:30

Accuracy and Security-Guaranteed Participant Selection and Beamforming Design for RIS-Assisted Federated Learning
Mengru Wu, Yu Gao, Weidang Lu, Huimei Han, Lei Sun, Wanli Ni
arxiv.org/abs/2507.00388

@arXiv_csCE_bot@mastoxiv.page
2025-05-30 07:15:46

Unified Network-Based Representation of BIM Models for Embedding Semantic, Spatial, and Topological Data
Jin Han, Xin-Zheng Lu, Jia-Rui Lin
arxiv.org/abs/2505.22670

@arXiv_csIR_bot@mastoxiv.page
2025-06-02 07:19:18

A Novel Discrete Memristor-Coupled Heterogeneous Dual-Neuron Model and Its Application in Multi-Scenario Image Encryption
Yi Zou, Mengjiao Wang, Xinan Zhang, Herbert Ho-Ching Iu
arxiv.org/abs/2505.24294

@arXiv_csNE_bot@mastoxiv.page
2025-05-29 10:12:17

This arxiv.org/abs/2501.15081 has been replaced.
initial toot: mastoxiv.page/@arXiv_csNE_…

@arXiv_csSI_bot@mastoxiv.page
2025-08-01 07:42:50

LLMs Between the Nodes: Community Discovery Beyond Vectors
Ekta Gujral, Apurva Sinha
arxiv.org/abs/2507.22955 arxiv.org/pdf/2507.22955

@arXiv_nlinAO_bot@mastoxiv.page
2025-06-02 07:31:28

Cascades on Constrained Multiplex Networks
Christian Kluge, Christian Kuehn
arxiv.org/abs/2505.24631 arxiv.org/pdf/25…

@arXiv_qbioTO_bot@mastoxiv.page
2025-07-02 08:43:10

Cardiorespiratory coupling improves cardiac pumping efficiency in heart failure
Josh Border, Andrew Lefevre, Vishal Jain, Alain Nogaret
arxiv.org/abs/2507.00597

@arXiv_csAI_bot@mastoxiv.page
2025-07-16 08:04:11

IoT Malware Network Traffic Detection using Deep Learning and GraphSAGE Models
Nikesh Prajapati, Bimal Karki, Saroj Gopali, Akbar Siami Namin
arxiv.org/abs/2507.10758

@arXiv_csLO_bot@mastoxiv.page
2025-06-24 08:35:50

ARCH-COMP25 Category Report: Stochastic Models
Alessandro Abate, Omid Akbarzadeh, Henk A. P. Blom, Sofie Haesaert, Sina Hassani, Abolfazl Lavaei, Frederik Baymler Mathiesen, Rahul Misra, Amy Nejati, Mathis Niehage, Fie Ørum, Anne Remke, Behrad Samari, Ruohan Wang, Rafal Wisniewski, Ben Wooding, Mahdieh Zaker
arxiv.org…

@arXiv_csNI_bot@mastoxiv.page
2025-06-02 09:59:22

This arxiv.org/abs/2505.19305 has been replaced.
initial toot: mastoxiv.page/@arXiv_csNI_…

@arXiv_condmatstatmech_bot@mastoxiv.page
2025-06-25 08:32:20

Efficient optimization of variational tensor-network approach to three-dimensional statistical systems
Xia-Ze Xu, Tong-Yu Lin, Guang-Ming Zhang
arxiv.org/abs/2506.19339

@arXiv_csCE_bot@mastoxiv.page
2025-06-02 09:54:43

This arxiv.org/abs/2503.02041 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCE_…

@arXiv_physicssocph_bot@mastoxiv.page
2025-06-30 08:54:00

Preliminary analysis of Sus scrofa movement using Hidden Markov Models and Networks
Riccardo Basilone, Eleonora Bergamin, Federica Fanelli, Egor Kotov, Kevin Morelle, Alisa Klamm, Manal Nhili, Joshua Rosen, Andrew Schendl, Olena Holubowska, Andrew Renninger, Kamil Smolak
arxiv.org/abs/2506.22138

@arXiv_statML_bot@mastoxiv.page
2025-07-29 09:40:52

Predicting Parkinson's Disease Progression Using Statistical and Neural Mixed Effects Models: A Comparative Study on Longitudinal Biomarkers
Ran Tong, Lanruo Wang, Tong Wang, Wei Yan
arxiv.org/abs/2507.20058

@arXiv_eessSY_bot@mastoxiv.page
2025-05-30 09:57:38

This arxiv.org/abs/2411.06268 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_csAR_bot@mastoxiv.page
2025-06-23 09:17:10

SparseDPD: A Sparse Neural Network-based Digital Predistortion FPGA Accelerator for RF Power Amplifier Linearization
Manno Versluis, Yizhuo Wu, Chang Gao
arxiv.org/abs/2506.16591

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. Still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations", as open-source software builds on other open-source software and people contribute patches to each other's projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
Unfortunately, the current crop of hyped-up LLM coding systems from the big players is antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module and function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, and even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger-sized codebase written with LLM tools will have significant bloat from duplicated functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn; while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them, you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding
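To make this concrete, here's the textbook refactor DRY asks for (a toy example, not from any real codebase):
# Before: the same check copy-pasted in two places; the copies can drift.
def create_user(email):
    if "@" not in email or email.startswith("@"):
        raise ValueError("invalid email")
    # ... create the user ...
def update_user(email):
    if "@" not in email or email.startswith("@"):
        raise ValueError("invalid email")
    # ... update the user ...
# After: one definition, referenced from every place that needs it.
def validate_email(email):
    if "@" not in email or email.startswith("@"):
        raise ValueError("invalid email")
def create_user(email):
    validate_email(email)
    # ... create the user ...
def update_user(email):
    validate_email(email)
    # ... update the user ...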

@arXiv_csNI_bot@mastoxiv.page
2025-07-31 08:34:01

OFCnetLLM: Large Language Model for Network Monitoring and Alertness
Hong-Jun Yoon, Mariam Kiran, Danial Ebling, Joe Breen
arxiv.org/abs/2507.22711

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-24 09:22:09

Sequence-to-Sequence Models with Attention Mechanistically Map to the Architecture of Human Memory Search
Nikolaus Salvatore, Qiong Zhang
arxiv.org/abs/2506.17424

@arXiv_econEM_bot@mastoxiv.page
2025-07-22 09:20:00

Volatility Spillovers and Interconnectedness in OPEC Oil Markets: A Network-Based log-ARCH Approach
Fayçal Djebari, Kahina Mehidi, Khelifa Mazouz, Philipp Otto
arxiv.org/abs/2507.15046

@Techmeme@techhub.social
2025-06-05 17:36:22

X changes its developer agreement to prevent third parties from using "the X API or X Content to fine-tune or train a foundation or frontier model" (Ivan Mehta/TechCrunch)
techcrunch.com/2025/06/05/x-ch

@arXiv_csDC_bot@mastoxiv.page
2025-05-30 09:52:21

This arxiv.org/abs/2505.16508 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_astrophSR_bot@mastoxiv.page
2025-05-29 07:29:51

The chemical yields of stars in the range 9-15 Msun
Marco Limongi, Lorenzo Roberti, Agnese Falla, Alessandro Chieffi, Ken'ichi Nomoto
arxiv.org/abs/2505.22030

@privacity@social.linux.pizza
2025-07-06 23:39:20

Nature of Data in Pre-Trained Large Language Models
fpf.org/blog/nature-of-data-in
@…

@arXiv_csSI_bot@mastoxiv.page
2025-05-29 07:21:30

Network classification through random walks
Gonzalo Travieso, Joao Merenda, Odemir M. Bruno
arxiv.org/abs/2505.21706

@arXiv_physicsfludyn_bot@mastoxiv.page
2025-06-24 08:44:40

Inferring viscoplastic models from velocity fields: a physics-informed neural network approach
Martin Lardy, Sham Tlili, Simon Gsell
arxiv.org/abs/2506.17681

@arXiv_mathCO_bot@mastoxiv.page
2025-07-18 09:02:22

Zero Forcing on Iterated Graph Models
Christopher Brice, Erin Meger, Nhat-Dinh Nguyen, Allen Rakhamimov, Abigail Raz
arxiv.org/abs/2507.12579

@arXiv_csDB_bot@mastoxiv.page
2025-05-29 07:17:19

ChatPD: An LLM-driven Paper-Dataset Networking System
Anjie Xu, Ruiqing Ding, Leye Wang
arxiv.org/abs/2505.22349 arxi…

@arXiv_csGR_bot@mastoxiv.page
2025-05-30 07:18:09

Quality assessment of 3D human animation: Subjective and objective evaluation
Rim Rekik, Stefanie Wuhrer, Ludovic Hoyet, Katja Zibrek, Anne-Hélène Olivier
arxiv.org/abs/2505.23301

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 09:24:51

TRIBE: TRImodal Brain Encoder for whole-brain fMRI response prediction
Stéphane d'Ascoli, Jérémy Rapin, Yohann Benchetrit, Hubert Banville, Jean-Rémi King
arxiv.org/abs/2507.22229

@arXiv_csHC_bot@mastoxiv.page
2025-06-18 08:27:49

Exploring MLLMs Perception of Network Visualization Principles
Jacob Miller, Markus Wallinger, Ludwig Felder, Timo Brand, Henry Förster, Johannes Zink, Chunyang Chen, Stephen Kobourov
arxiv.org/abs/2506.14611

@arXiv_nlinAO_bot@mastoxiv.page
2025-06-02 07:31:37

Symmetry breaking in minimum dissipation networks
Aarathi Parameswaran, Iva Bačić, Andrea Benigni, Dirk Witthaut
arxiv.org/abs/2505.24818

@arXiv_csCR_bot@mastoxiv.page
2025-06-26 07:54:40

Robust Anomaly Detection in Network Traffic: Evaluating Machine Learning Models on CICIDS2017
Zhaoyang Xu, Yunbo Liu
arxiv.org/abs/2506.19877

@arXiv_statME_bot@mastoxiv.page
2025-06-27 09:16:09

Bayesian Modeling for Aggregated Relational Data: A Unified Perspective
Owen G. Ward, Anna L. Smith, Tian Zheng
arxiv.org/abs/2506.21353

@arXiv_qfinPR_bot@mastoxiv.page
2025-06-24 08:59:30

Empirical Models of the Time Evolution of SPX Option Prices
Alessio Brini, David A. Hsieh, Patrick Kuiper, Sean Moushegian, David Ye
arxiv.org/abs/2506.17511

@arXiv_qbiobm_bot@mastoxiv.page
2025-06-26 09:04:20

DualEquiNet: A Dual-Space Hierarchical Equivariant Network for Large Biomolecules
Junjie Xu, Jiahao Zhang, Mangal Prakash, Xiang Zhang, Suhang Wang
arxiv.org/abs/2506.19862

@arXiv_physicsplasmph_bot@mastoxiv.page
2025-06-23 10:15:40

Learning Heat Transport Kernels Using a Nonlocal Heat Transport Theory-Informed Neural Network
Mufei Luo, Charles Heaton, Yizhen Wang, Daniel Plummer, Mila Fitzgerald, Francesco Miniati, Sam M. Vinko, Gianluca Gregori
arxiv.org/abs/2506.16619

@arXiv_hepph_bot@mastoxiv.page
2025-07-28 09:13:21

Deep Neural Network Driven Simulation Based Inference Method for Pole Position Estimation under Model Misspecification
Daniel Sadasivan, Isaac Cordero, Andrew Graham, Cecilia Marsh, Daniel Kupcho, Melana Mourad, Maxim Mai
arxiv.org/abs/2507.18824

@arXiv_qbioQM_bot@mastoxiv.page
2025-06-23 09:47:40

EHCube4P: Learning Epistatic Patterns Through Hypercube Graph Convolution Neural Network for Protein Fitness Function Estimation
Muhammad Daud, Philippe Charton, Cedric Damour, Jingbo Wang, Frederic Cadet
arxiv.org/abs/2506.16921

@arXiv_csIR_bot@mastoxiv.page
2025-06-24 11:42:20

Rethinking Click Models in Light of Carousel Interfaces: Theory-Based Categorization and Design of Click Models
Jingwei Kang, Maarten de Rijke, Santiago de Leon-Martinez, Harrie Oosterhuis
arxiv.org/abs/2506.18548

@arXiv_physicssocph_bot@mastoxiv.page
2025-07-29 09:31:21

DynamiX: Large-Scale Dynamic Social Network Simulator
Yanhui Sun, Wu Liu, Wentao Wang, Hantao Yao, Jiebo Luo, Yongdong Zhang
arxiv.org/abs/2507.19929

@arXiv_eessIV_bot@mastoxiv.page
2025-06-27 09:17:49

GANet-Seg: Adversarial Learning for Brain Tumor Segmentation with Hybrid Generative Models
Qifei Cui, Xinyu Lu
arxiv.org/abs/2506.21245

@arXiv_statML_bot@mastoxiv.page
2025-07-28 08:28:11

Perfect Clustering in Very Sparse Diverse Multiplex Networks
Marianna Pensky
arxiv.org/abs/2507.19423 arxiv.org/pdf/2507.19423

@arXiv_csSD_bot@mastoxiv.page
2025-07-28 08:24:01

From Continuous to Discrete: Cross-Domain Collaborative General Speech Enhancement via Hierarchical Language Models
Zhaoxi Mu, Rilin Chen, Andong Li, Meng Yu, Xinyu Yang, Dong Yu
arxiv.org/abs/2507.19062

@arXiv_csSI_bot@mastoxiv.page
2025-05-30 07:21:43

BLUE: Bi-layer Heterogeneous Graph Fusion Network for Avian Influenza Forecasting
Jing Du, Haley Stone, Yang Yang, Ashna Desai, Hao Xue, Andreas Züfle, Chandini Raina MacIntyre, Flora D. Salim
arxiv.org/abs/2505.22692

@arXiv_csCV_bot@mastoxiv.page
2025-07-28 07:39:40

Quantum-Cognitive Tunnelling Neural Networks for Military-Civilian Vehicle Classification and Sentiment Analysis
Milan Maksimovic, Anna Bohdanets, Immaculate Motsi-Omoijiade, Guido Governatori, Ivan S. Maksymov
arxiv.org/abs/2507.18645

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI-generated, which is a red flag, because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't, by some miracle, people who couldn't have built this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially increasing energy and water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT, where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.
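For the curious, on-device ONNX inference is roughly this simple (a sketch: the model file name, input shape, and preprocessing here are made up, not taken from the game's code):
import numpy as np
import onnxruntime as ort  # pip install onnxruntime
# Hypothetical model file; the point is that inference runs locally, on CPU.
session = ort.InferenceSession("fish_classifier.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
# One RGB frame, normalized to [0, 1] (assumed preprocessing).
frame = np.random.rand(1, 3, 64, 64).astype(np.float32)
outputs = session.run(None, {input_name: frame})
print("predicted class:", int(np.asarray(outputs[0]).argmax()))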

@arXiv_statME_bot@mastoxiv.page
2025-07-30 08:53:42

Regression Analysis of Reciprocity in Directed Networks
Rui Feng, Chenlei Leng
arxiv.org/abs/2507.21469 arxiv.org/pdf/2507.21469

@arXiv_eessSP_bot@mastoxiv.page
2025-06-24 10:11:00

LLM-Integrated Digital Twins for Hierarchical Resource Allocation in 6G Networks
Majumder Haider, Imtiaz Ahmed, Zoheb Hassan, Kamrul Hasan, H. Vincent Poor
arxiv.org/abs/2506.18293

@arXiv_csNI_bot@mastoxiv.page
2025-07-28 08:14:31

iPLAN: Redefining Indoor Wireless Network Planning Through Large Language Models
Jinbo Hou, Stefanos Bakirtzis, Kehai Qiu, Sichong Liao, Hui Song, Haonan Hu, Kezhi Wang, Jie Zhang
arxiv.org/abs/2507.19096

@arXiv_quantph_bot@mastoxiv.page
2025-07-17 09:51:40

A resource-centric, task-based approach to quantum network control
Alexander Pirker, Belen Munoz, Wolfgang Dür
arxiv.org/abs/2507.12030

@arXiv_csCR_bot@mastoxiv.page
2025-07-23 09:08:02

eX-NIDS: A Framework for Explainable Network Intrusion Detection Leveraging Large Language Models
Paul R. B. Houssel, Siamak Layeghy, Priyanka Singh, Marius Portmann
arxiv.org/abs/2507.16241

@arXiv_statML_bot@mastoxiv.page
2025-06-25 08:29:40

When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets
Chen Zeno, Hila Manor, Greg Ongie, Nir Weinberger, Tomer Michaeli, Daniel Soudry
arxiv.org/abs/2506.19031

@arXiv_statME_bot@mastoxiv.page
2025-07-23 08:56:42

Bayesian unanchored additive models for component network meta-analysis
Augustine Wigle, Audrey B\'eliveau
arxiv.org/abs/2507.16047

@arXiv_csLG_bot@mastoxiv.page
2025-07-24 10:03:09

Tabular Diffusion based Actionable Counterfactual Explanations for Network Intrusion Detection
Vinura Galwaduge, Jagath Samarabandu
arxiv.org/abs/2507.17161

@arXiv_eessIV_bot@mastoxiv.page
2025-06-25 09:44:50

ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation
Ahmad Mustafa, Reza Rastegar, Ghassan AlRegib
arxiv.org/abs/2506.19687

@arXiv_csDC_bot@mastoxiv.page
2025-06-23 08:03:59

NetSenseML: Network-Adaptive Compression for Efficient Distributed Machine Learning
Yisu Wang, Xinjiao Li, Ruilong Wu, Huangxun Chen, Dirk Kutscher
arxiv.org/abs/2506.16235

@arXiv_eessSP_bot@mastoxiv.page
2025-07-24 08:25:49

Efficient and Distortion-less Spectrum Multiplexer via Neural Network-based Filter Banks
Jiazhao Wang, Wenchao Jiang
arxiv.org/abs/2507.17106

@arXiv_csCV_bot@mastoxiv.page
2025-07-23 10:34:12

Task-Specific Zero-shot Quantization-Aware Training for Object Detection
Changhao Li, Xinrui Chen, Ji Wang, Kang Zhao, Jianfei Chen
arxiv.org/abs/2507.16782

@arXiv_csCR_bot@mastoxiv.page
2025-07-25 08:32:41

Removing Box-Free Watermarks for Image-to-Image Models via Query-Based Reverse Engineering
Haonan An, Guang Hua, Hangcheng Cao, Zhengru Fang, Guowen Xu, Susanto Rahardja, Yuguang Fang
arxiv.org/abs/2507.18034

@arXiv_eessIV_bot@mastoxiv.page
2025-06-25 08:58:00

Explicit Residual-Based Scalable Image Coding for Humans and Machines
Yui Tatsumi, Ziyue Zeng, Hiroshi Watanabe
arxiv.org/abs/2506.19297

@arXiv_csCR_bot@mastoxiv.page
2025-06-23 10:56:40

SmartGuard: Leveraging Large Language Models for Network Attack Detection through Audit Log Analysis and Summarization
Hao Zhang, Shuo Shao, Song Li, Zhenyu Zhong, Yan Liu, Zhan Qin, Kui Ren
arxiv.org/abs/2506.16981

@arXiv_csLG_bot@mastoxiv.page
2025-07-17 10:22:50

Selective Quantization Tuning for ONNX Models
Nikolaos Louloudakis, Ajitha Rajan
arxiv.org/abs/2507.12196 arxiv.org/p…

@arXiv_statME_bot@mastoxiv.page
2025-06-18 10:26:59

Network Cross-Validation for Nested Models by Edge-Sampling: Selection Consistency
Bokai Yang
arxiv.org/abs/2506.14244

@arXiv_csCR_bot@mastoxiv.page
2025-06-26 09:23:10

RepuNet: A Reputation System for Mitigating Malicious Clients in DFL
Isaac Marroqui Penalva, Enrique Tomás Martínez Beltrán, Manuel Gil Pérez, Alberto Huertas Celdrán
arxiv.org/abs/2506.19892

@arXiv_eessIV_bot@mastoxiv.page
2025-06-16 08:25:29

Brain Network Analysis Based on Fine-tuned Self-supervised Model for Brain Disease Diagnosis
Yifei Tang, Hongjie Jiang, Changhong Jing, Hieu Pham, Shuqiang Wang
arxiv.org/abs/2506.11671

@arXiv_csNI_bot@mastoxiv.page
2025-07-22 09:55:50

Intent-Based Network for RAN Management with Large Language Models
Fransiscus Asisi Bimo, Maria Amparo Canaveras Galdon, Chun-Kai Lai, Ray-Guang Cheng, Edwin K. P. Chong
arxiv.org/abs/2507.14230

@arXiv_csNI_bot@mastoxiv.page
2025-06-25 08:23:29

Fractality of Wireless Mesh Networks: Dimensional Effects on Network Performance
Marat Zaidyn, Sayat Akhtanov, Dana Turlykozhayeva, Symbat Temesheva, Almat Akhmetali, Alisher Skabylov, Nurzhan Ussipov
arxiv.org/abs/2506.19366

@arXiv_csCR_bot@mastoxiv.page
2025-06-26 09:49:20

Vulnerability Disclosure through Adaptive Black-Box Adversarial Attacks on NIDS
Sabrine Ennaji, Elhadj Benkhelifa, Luigi V. Mancini
arxiv.org/abs/2506.20576