Tootfinder

Opt-in global Mastodon full text search. Join the index!

@azonenberg@ioc.exchange
2025-07-31 14:05:36

Anybody have access to an oscilloscope with a 100baseTX Ethernet decoder on it? Curious how fast commercial protocol decodes are compared to mine.
I'd love benchmarks:
* 2 channels, 10M points, 500 Msps, on P/N
* Simple edge trigger, saturated or nearly saturated link
* Decode to Ethernet frames
How many waveforms per second / seconds per waveform do you get?

@arXiv_eessIV_bot@mastoxiv.page
2025-07-31 07:47:21

Whole-brain Transferable Representations from Large-Scale fMRI Data Improve Task-Evoked Brain Activity Decoding
Yueh-Po Peng, Vincent K. M. Cheung, Li Su
arxiv.org/abs/2507.22378

@servelan@newsie.social
2025-07-23 15:11:54

Scientists Decode 1918 Flu Virus Genome From Century-Old Tissue
scitechdaily.com/scientists-de

@arXiv_csAR_bot@mastoxiv.page
2025-07-25 08:06:42

Sandwich: Separating Prefill-Decode Compilation for Efficient CPU LLM Serving
Juntao Zhao, Jiuru Li, Chuan Wu
arxiv.org/abs/2507.18454 arxi…

@azonenberg@ioc.exchange
2025-07-29 16:19:40

Asking for no reason whatsoever: Lots of oscilloscopes have protocol decodes for various flavors of Ethernet.
Have you ever encountered one, from any manufacturer, that is able to decode 100 Mbit Ethernet in real time without dropping packets?

@arXiv_csHC_bot@mastoxiv.page
2025-06-27 08:38:49

Multimodal LLMs for Visualization Reconstruction and Understanding
Can Liu, Chunlin Da, Xiaoxiao Long, Yuxiao Yang, Yu Zhang, Yong Wang
arxiv.org/abs/2506.21319

@glauber@writing.exchange
2025-05-26 15:06:20

Will we ever decode the writing system used by the Inca?
Gift article, no paywall.
theatlantic.com/culture/archiv

@arXiv_csDC_bot@mastoxiv.page
2025-07-10 08:09:51

Nexus: Taming Throughput-Latency Tradeoff in LLM Serving via Efficient GPU Sharing
Xiaoxiang Shi, Colin Cai, Junjia Du, Zhanda Zhu, Xingda Wei, Zhihao Jia
arxiv.org/abs/2507.06608

@arXiv_eessSP_bot@mastoxiv.page
2025-06-26 09:42:20

Differential Transformer-driven 6G Physical Layer for Collaborative Perception Enhancement
Soheyb Ribouh, Osama Saleem, Mohamed Ababsa
arxiv.org/abs/2506.20597

@arXiv_csOS_bot@mastoxiv.page
2025-06-26 08:44:30

MNN-AECS: Energy Optimization for LLM Decoding on Mobile Devices via Adaptive Core Selection
Zhengxiang Huang, Chaoyue Niu, Zhaode Wang, Jiarui Xue, Hanming Zhang, Yugang Wang, Zewei Xin, Xiaotang Jiang, Chengfei Lv, Fan Wu, Guihai Chen
arxiv.org/abs/2506.19884

@wyri@toot-toot.wyrihaxim.us
2025-07-22 19:39:31

@… @… @… <?php eval(base64_decode('ZWNobyAnV2h5IG5vdD8nOw=='));

@fanf@mendeddrum.org
2025-06-12 17:42:07

from my link log —
Distance-based ISA for efficient register renaming.
sigarch.org/distance-based-isa
saved 2025-06-04

@inthehands@hachyderm.io
2025-06-09 16:42:33

All this brings me back to some text I was writing yesterday for my students, on which I’d appreciate any thoughtful feedback:
❝You can let the computer do the typing for you, but never let it do the thinking for you.
This is doubly true in the current era of AI hype. If the AI optimists are correct (the credible ones, anyway), software development will consist of humans critically evaluating, shaping, and correcting the output of LLMs. If the AI skeptics are correct, then the future will bring mountains of AI slop to decode, disentangle, fix, and/or rewrite. Either way, it is •understanding• and •critically evaluating• code — not merely •generating• it — that will be the truly essential ability. Always has been; will be even more so. •That• is what you are learning here.❞
11/

@Techmeme@techhub.social
2025-06-04 13:21:31

ChatGPT-4o, Claude 3.7 Sonnet, Gemini 2.0 Flash, Llama 4, and Copilot comparison: Claude was the best overall with the highest consistency and no hallucinations (Geoffrey A. Fowler/Washington Post)
washingtonpost.com/technology/

@arXiv_csCL_bot@mastoxiv.page
2025-07-17 10:12:00

Probing for Arithmetic Errors in Language Models
Yucheng Sun, Alessandro Stolfo, Mrinmaya Sachan
arxiv.org/abs/2507.12379

@arXiv_csSI_bot@mastoxiv.page
2025-07-23 08:23:02

SASH: Decoding Community Structure in Graphs
Allison Beemer, Jessalyn Bolkema
arxiv.org/abs/2507.16583 arxiv.org/pdf/…

@arXiv_eessSP_bot@mastoxiv.page
2025-06-25 08:47:20

EEG Foundation Challenge: From Cross-Task to Cross-Subject EEG Decoding
Bruno Aristimunha, Dung Truong, Pierre Guetschel, Seyed Yahya Shirazi, Isabelle Guyon, Alexandre R. Franco, Michael P. Milham, Aviv Dotan, Scott Makeig, Alexandre Gramfort, Jean-Remi King, Marie-Constance Corsi, Pedro A. Valdés-Sosa, Amit Majumdar, Alan Evans, Terrence J Sejnowski, Oren Shriki, Sylvain Chevallier, Arnaud Delorme

@arXiv_csIT_bot@mastoxiv.page
2025-07-17 08:12:40

On the error correction of iterative bounded distance decoding of generalized LDPC codes
David Burshtein
arxiv.org/abs/2507.12073

@arXiv_eessIV_bot@mastoxiv.page
2025-06-23 10:01:40

Fast Training-free Perceptual Image Compression
Ziran Zhu, Tongda Xu, Minye Huang, Dailan He, Xingtong Ge, Xinjie Zhang, Ling Li, Yan Wang
arxiv.org/abs/2506.16102

@arXiv_qbioNC_bot@mastoxiv.page
2025-06-17 11:40:26

Towards Unified Neural Decoding with Brain Functional Network Modeling
Di Wu, Linghao Bu, Yifei Jia, Lu Cao, Siyuan Li, Siyu Chen, Yueqian Zhou, Sheng Fan, Wenjie Ren, Dengchang Wu, Kang Wang, Yue Zhang, Yuehui Ma, Jie Yang, Mohamad Sawan
arxiv.org/abs/2506.12055

@Dragofix@veganism.social
2025-07-03 21:10:23

Is cheese secretly fueling your nightmares? Science weighs in #AnimalRights

@eyebee@mstdn.social
2025-07-08 16:54:44

Check this out! andrawatkins.substack.com/?r=j

@arXiv_csDC_bot@mastoxiv.page
2025-06-06 09:34:13

This arxiv.org/abs/2501.05460 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_csSD_bot@mastoxiv.page
2025-06-04 07:44:15

UltrasonicSpheres: Localized, Multi-Channel Sound Spheres Using Off-the-Shelf Speakers and Earables
Michael Küttner, Valeria Sitz, Kathrin Gerling, Michael Beigl, Tobias Röddiger
arxiv.org/abs/2506.02715

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-14 08:17:32

SPINT: Spatial Permutation-Invariant Neural Transformer for Consistent Intracortical Motor Decoding
Trung Le, Hao Fang, Jingyuan Li, Tung Nguyen, Lu Mi, Amy Orsborn, Uygar Sümbül, Eli Shlizerman
arxiv.org/abs/2507.08402

@arXiv_csDC_bot@mastoxiv.page
2025-06-13 07:34:20

TD-Pipe: Temporally-Disaggregated Pipeline Parallelism Architecture for High-Throughput LLM Inference
Hongbin Zhang, Taosheng Wei, Zhenyi Zheng, Jiangsu Du, Zhiguang Chen, Yutong Lu
arxiv.org/abs/2506.10470

@arXiv_csIT_bot@mastoxiv.page
2025-06-03 07:22:15

Over-the-Air Fronthaul Signaling for Uplink Cell-Free Massive MIMO Systems
Zakir Hussain Shaik, Sai Subramanyam Thoota, Emil Björnson, Erik G. Larsson
arxiv.org/abs/2506.00655

@arXiv_csAR_bot@mastoxiv.page
2025-06-04 07:17:25

Hardware-Centric Analysis of DeepSeek's Multi-Head Latent Attention
Robin Geens, Marian Verhelst
arxiv.org/abs/2506.02523

@arXiv_csDC_bot@mastoxiv.page
2025-06-10 16:25:49

This arxiv.org/abs/2506.03296 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_csDC_bot@mastoxiv.page
2025-06-05 07:17:03

Parallel CPU-GPU Execution for LLM Inference on Constrained GPUs
Jiakun Fan, Yanglin Zhang, Xiangchen Li, Dimitrios S. Nikolopoulos
arxiv.org/abs/2506.03296