Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 05:40:53

Training Differentially Private Ad Prediction Models with Semi-Sensitive Features
Lynn Chua, Qiliang Cui, Badih Ghazi, Charlie Harrison, Pritish Kamath, Walid Krichene, Ravi Kumar, Pasin Manurangsi, Krishna Giri Narra, Amer Sinha, Avinash Varadarajan, Chiyuan Zhang
arXiv.org/abs/2401.15246

@arXiv_csCR_bot@mastoxiv.page
2024-01-30 06:11:19

MEA-Defender: A Robust Watermark against Model Extraction Attack
Peizhuo Lv, Hualong Ma, Kai Chen, Jiachen Zhou, Shengzhi Zhang, Ruigang Liang, Shenchen Zhu, Pan Li, Yingjun Zhang
arXiv.org/abs/2401.15239

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:57

Introducing cosmosGPT: Monolingual Training for Turkish Language Models
H. Toprak Kesgin, M. Kaan Yuce, Eren Dogan, M. Egemen Uzun, Atahan Uz, H. Emre Seyrek, Ahmed Zeer, M. Fatih Amasyali
arxiv.org/abs/2404.17336

@arXiv_eessIV_bot@mastoxiv.page
2024-01-30 07:21:10

This arxiv.org/abs/2203.12476 has been replaced.
link: scholar.google.com/scholar?q=a

@DrYohanJohn@FediScience.org
2024-02-26 22:13:22

"... the attention pattern of a single layer can be ``nearly randomized'', while preserving the functionality of the network. We also show via extensive experiments that these constructions are not merely a theoretical artifact: even after severely constraining the architecture of the model, vastly different solutions can be reached via standard training."

@arXiv_mathST_bot@mastoxiv.page
2024-01-30 05:49:16

Asymptotic Behavior of Adversarial Training Estimator under $\ell_\infty$-Perturbation
Yiling Xie, Xiaoming Huo
arXiv.org/abs/2401.15262

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 07:28:05

This arxiv.org/abs/2401.13034 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_csIR_bot@mastoxiv.page
2024-03-29 08:33:14

This arxiv.org/abs/2211.13912 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCV_bot@mastoxiv.page
2024-01-30 07:17:39

This arxiv.org/abs/2401.09720 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCV_…

@arXiv_eessSP_bot@mastoxiv.page
2024-01-30 07:21:53

This arxiv.org/abs/2401.10282 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_eessSY_bot@mastoxiv.page
2024-02-29 07:07:55

Reinforcement Learning and Graph Neural Networks for Probabilistic Risk Assessment
Joachim Grimstad, Andrey Morozov
arxiv.org/abs/2402.18246

@arXiv_csRO_bot@mastoxiv.page
2024-03-28 08:31:22

This arxiv.org/abs/2309.10718 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csDC_bot@mastoxiv.page
2024-03-27 06:48:34

A Unified CPU-GPU Protocol for GNN Training
Yi-Chien Lin, Gangda Deng, Viktor Prasanna
arxiv.org/abs/2403.17092 arxiv…

@arXiv_condmatstatmech_bot@mastoxiv.page
2024-04-30 07:12:39

Deep generative modelling of canonical ensemble with differentiable thermal properties
Shuo-Hui Li, Yao-Wen Zhang, Ding Pan
arxiv.org/abs/2404.18404

@arXiv_qbioNC_bot@mastoxiv.page
2024-03-29 08:48:18

This arxiv.org/abs/2402.10251 has been replaced.
initial toot: mastoxiv.page/@arXiv_qbi…

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:29:24

This arxiv.org/abs/2403.08715 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csSD_bot@mastoxiv.page
2024-04-30 07:16:05

TI-ASU: Toward Robust Automatic Speech Understanding through Text-to-speech Imputation Against Missing Speech Modality
Tiantian Feng, Xuan Shi, Rahul Gupta, Shrikanth S. Narayanan
arxiv.org/abs/2404.17983

@arXiv_eessIV_bot@mastoxiv.page
2024-04-29 08:34:34

This arxiv.org/abs/2404.15620 has been replaced.
link: scholar.google.com/scholar?q=a

@crell@phpc.social
2024-02-26 15:14:51

Training exercise:
Try to replace all `$foo->isBeep()`/`$foo->canBar()` etc. calls with an instanceof check: `$foo instanceof Beepable`, `$foo instanceof Bars`, etc.
What does that do to your data model? If you leverage parameter types instead of manual instanceof checks, how does that simplify your logic flow?
I don't expect it to work for every use case, especially in PHP, but it would be a valuable exercise to try.
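
The idea ports beyond PHP. Here is a minimal Python sketch of the same refactor, assuming hypothetical Alarm/Lamp/notify stand-ins alongside the Beepable capability from the exercise:

```python
# Hypothetical illustration: the boolean capability method becomes a type,
# and an instanceof-style check (or a parameter type) replaces the flag.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Beepable(Protocol):
    def beep(self) -> None: ...

class Alarm:
    def beep(self) -> None:
        print("beep!")

class Lamp:  # has no beep(), so it is not Beepable
    pass

def notify(device: object) -> None:
    if isinstance(device, Beepable):  # the instanceof check from the exercise
        device.beep()

def notify_beepable(device: Beepable) -> None:
    device.beep()  # leveraging the parameter type removes the branch entirely

notify(Alarm())           # beeps
notify(Lamp())            # silently skipped
notify_beepable(Alarm())  # a type checker enforces Beepable here
```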

@arXiv_mathOC_bot@mastoxiv.page
2024-01-30 07:26:52

This arxiv.org/abs/2401.03451 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_csCE_bot@mastoxiv.page
2024-02-27 06:47:17

ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
Liuzhenghao Lv, Zongying Lin, Hao Li, Yuyang Liu, Jiaxi Cui, Calvin Yu-Chian Chen, Li Yuan, Yonghong Tian
arxiv.org/abs/2402.16445 arxiv.org/pdf/2402.16445
arXiv:2402.16445v1 Announce Type: new
Abstract: Large Language Models (LLMs), including GPT-x and LLaMA2, have achieved remarkable performance in multiple Natural Language Processing (NLP) tasks. Under the premise that protein sequences constitute the protein language, Protein Large Language Models (ProLLMs) trained on protein corpora excel at de novo protein sequence generation. However, as of now, unlike LLMs in NLP, no ProLLM is capable of multiple tasks in the Protein Language Processing (PLP) field. This prompts us to delineate the inherent limitations in current ProLLMs: (i) the lack of natural language capabilities, (ii) insufficient instruction understanding, and (iii) high training resource demands. To address these challenges, we introduce a training framework to transform any general LLM into a ProLLM capable of handling multiple PLP tasks. Specifically, our framework utilizes low-rank adaptation and employs a two-stage training approach, and it is distinguished by its universality, low overhead, and scalability. Through training under this framework, we propose the ProLLaMA model, the first known ProLLM to handle multiple PLP tasks simultaneously. Experiments show that ProLLaMA achieves state-of-the-art results in the unconditional protein sequence generation task. In the controllable protein sequence generation task, ProLLaMA can design novel proteins with desired functionalities. In the protein property prediction task, ProLLaMA achieves nearly 100% accuracy across many categories. The latter two tasks are beyond the reach of other ProLLMs. Code is available at github.com/Lyu6PosHao/ProLLaMA.
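
For readers who want the shape of that recipe, a minimal sketch of two-stage low-rank adaptation with the Hugging Face PEFT library; the base checkpoint, ranks, and training steps below are illustrative assumptions, not the paper's actual settings:

```python
# Sketch of the two-stage LoRA recipe the abstract describes (details assumed).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base

# Stage 1: continued pre-training on raw protein sequences through low-rank
# adapters, leaving the general-language weights frozen.
stage1 = get_peft_model(base, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# ... train stage1 on a protein-sequence corpus, then fold the adapters in ...
merged = stage1.merge_and_unload()

# Stage 2: instruction tuning on multi-task PLP prompts, again via LoRA,
# which supplies the instruction-following ability the abstract mentions.
stage2 = get_peft_model(merged, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# ... train stage2 on instruction-formatted PLP data ...
```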

@arXiv_csNE_bot@mastoxiv.page
2024-02-27 07:12:56

Efficient Online Learning for Networks of Two-Compartment Spiking Neurons
Yujia Yin, Xinyi Chen, Chenxiang Ma, Jibin Wu, Kay Chen Tan
arxiv.org/abs/2402.15969 arxiv.org/pdf/2402.15969
arXiv:2402.15969v1 Announce Type: new
Abstract: The brain-inspired Spiking Neural Networks (SNNs) have garnered considerable research interest due to their superior performance and energy efficiency in processing temporal signals. Recently, a novel multi-compartment spiking neuron model, namely the Two-Compartment LIF (TC-LIF) model, has been proposed and exhibited a remarkable capacity for sequential modelling. However, training the TC-LIF model presents challenges stemming from the large memory consumption and the issue of gradient vanishing associated with the Backpropagation Through Time (BPTT) algorithm. To address these challenges, online learning methodologies emerge as a promising solution. Yet, to date, the application of online learning methods in SNNs has been predominantly confined to simplified Leaky Integrate-and-Fire (LIF) neuron models. In this paper, we present a novel online learning method specifically tailored for networks of TC-LIF neurons. Additionally, we propose a refined TC-LIF neuron model called Adaptive TC-LIF, which is carefully designed to enhance temporal information integration in online learning scenarios. Extensive experiments, conducted on various sequential benchmarks, demonstrate that our approach successfully preserves the superior sequential modeling capabilities of the TC-LIF neuron while incorporating the training efficiency and hardware friendliness of online learning. As a result, it offers a multitude of opportunities to leverage neuromorphic solutions for processing temporal signals.
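
For intuition about what a second compartment adds over a plain LIF neuron, a minimal NumPy sketch of one discrete-time TC-LIF-style update; the decay factors, coupling signs, and soft reset are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def tc_lif_step(v_d, v_s, x, beta_d=0.9, beta_s=0.9, g_sd=-0.5, g_ds=0.5, v_th=1.0):
    """One update of dendritic (v_d) and somatic (v_s) membrane potentials."""
    v_d = beta_d * v_d + g_sd * v_s + x       # dendrite: input plus negative somatic feedback
    v_s = beta_s * v_s + g_ds * v_d           # soma driven by the dendritic compartment
    spike = (v_s >= v_th).astype(np.float32)  # fire when the soma crosses threshold
    v_s = v_s - spike * v_th                  # soft reset by subtraction
    return v_d, v_s, spike

# Toy usage: the negatively coupled compartments oscillate with slow decay,
# giving the neuron a longer memory of its inputs than a single leaky integrator.
v_d = v_s = np.zeros(4)
for t in range(100):
    v_d, v_s, spikes = tc_lif_step(v_d, v_s, x=0.1 * np.random.rand(4))
```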

@arXiv_csSE_bot@mastoxiv.page
2024-03-25 07:32:04

An Exploratory Investigation into Code License Infringements in Large Language Model Training Datasets
Jonathan Katzy, Răzvan-Mihai Popescu, Arie van Deursen, Maliheh Izadi
arxiv.org/abs/2403.15230

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 07:28:17

This arxiv.org/abs/2401.14211 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_csCR_bot@mastoxiv.page
2024-04-29 08:29:05

This arxiv.org/abs/2311.07550 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_eessAS_bot@mastoxiv.page
2024-02-29 07:20:00

Why does music source separation benefit from cacophony?
Chang-Bin Jeon, Gordon Wichern, François G. Germain, Jonathan Le Roux
arxiv.org/abs/2402.18407

@Techmeme@techhub.social
2024-04-04 15:51:04

OpenAI expands its Custom Model training program with "assisted fine-tuning", letting organizations set up data training pipelines, evaluation systems, and more (Kyle Wiggers/TechCrunch)
techcrunch.com/2024/04/04/open

@arXiv_csAI_bot@mastoxiv.page
2024-03-27 06:47:06

Out-of-distribution Rumor Detection via Test-Time Adaptation
Xiang Tao, Mingqing Zhang, Qiang Liu, Shu Wu, Liang Wang
arxiv.org/abs/2403.17735

@marekmcgann@sciences.social
2024-02-25 10:51:16

"It also seems incoherent to me to isolate and pathologize “distraction” as a force that somehow obliterates rather than redirects attention."
Rob Horning on Sora and AI critihype.
robhorning.substack.com/p/a-ph

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:51:00

Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
Nihal V. Nayak, Yiyang Nan, Avi Trost, Stephen H. Bach
arxiv.org/abs/2402.18334

@ErikJonker@mastodon.social
2024-03-20 09:09:04

From the Rundown newsletter, interesting highlights from the Nvidia keynote:
The Blackwell B200 GPU delivers 30x the performance of its H100 predecessor while using 25x less energy.
Nvidia said the Blackwell innovations will allow training of models with up to 10T parameters.
Huang also revealed that GPT-4 contains 1.8T parameters and that 2,000 Blackwell chips could finish training the model in 90 days.
The last point illustrates the enormous training costs of a model l…

@arXiv_csIR_bot@mastoxiv.page
2024-01-30 07:17:42

This arxiv.org/abs/2305.15645 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_eessIV_bot@mastoxiv.page
2024-01-30 05:44:01

Decentralized Gossip Mutual Learning (GML) for brain tumor segmentation on multi-parametric MRI
Jingyun Chen, Yading Yuan
arXiv.org/abs/2401.15434

@arXiv_eessSY_bot@mastoxiv.page
2024-02-27 08:26:20

This arxiv.org/abs/2311.03628 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2024-01-30 07:27:57

This arxiv.org/abs/2302.04914 has been replaced.
initial toot: mastoxiv.page/@a…

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 05:40:46

Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection
Chen Liu, Shibo He, Qihang Zhou, Shizhong Li, Wenchao Meng
arXiv.org/abs/2401.15123

@arXiv_physicscompph_bot@mastoxiv.page
2024-04-29 08:44:36

This arxiv.org/abs/2404.14212 has been replaced.
initial toot: mastoxiv.page/@ar…

@arXiv_csDC_bot@mastoxiv.page
2024-01-30 07:16:27

This arxiv.org/abs/2312.02493 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_mathAP_bot@mastoxiv.page
2024-01-30 07:22:53

This arxiv.org/abs/2211.15223 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCR_bot@mastoxiv.page
2024-01-30 07:16:09

This arxiv.org/abs/2311.07550 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csRO_bot@mastoxiv.page
2024-03-28 08:32:22

This arxiv.org/abs/2403.14864 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@samvarma@fosstodon.org
2024-04-12 20:00:43

Didn't hear about this. Interesting.
cultofmac.com/852660/apple-lic

@pbloem@sigmoid.social
2024-03-24 23:22:03

That's not... those are not the units you want for that.

Snippet from a Verge piece on the new Nvidia GPU, uncritically parroting marketing nonsense: "Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia's CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts."
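
The complaint is easy to make concrete: megawatts measure power, and a training cost is an energy, so the quoted figure only means something once a duration is attached. A quick check combining the quoted 4 MW with the 90-day run claimed in the keynote toot above (both are Nvidia's numbers, not measurements):

```python
# Energy = power x time; a megawatt figure alone says nothing about cost.
power_mw = 4                   # claimed Blackwell-cluster draw
hours = 90 * 24                # claimed 90-day training run -> 2,160 h
energy_mwh = power_mw * hours  # 8,640 MWh is the quantity that matters
print(f"{energy_mwh:,} MWh")
```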

@arXiv_eessSP_bot@mastoxiv.page
2024-04-30 07:13:01

Energy-Efficient Federated Learning in Cooperative Communication within Factory Subnetworks
Hamid Reza Hashempour, Gilberto Berardinelli, Ramoni Adeogun, Shashi Raj Pandey
arxiv.org/abs/2404.18010

@arXiv_csNE_bot@mastoxiv.page
2024-04-29 08:32:05

This arxiv.org/abs/2310.19046 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 05:40:51

Deep Learning with Tabular Data: A Self-supervised Approach
Tirth Kiranbhai Vyas
arXiv.org/abs/2401.15238 arXiv.org/p…

@arXiv_csAR_bot@mastoxiv.page
2024-04-24 06:46:58

Workload-Aware Hardware Accelerator Mining for Distributed Deep Learning Training
Muhammad Adnan, Amar Phanishayee, Janardhan Kulkarni, Prashant J. Nair, Divya Mahajan
arxiv.org/abs/2404.14632

@seeingwithsound@mas.to
2024-03-19 16:17:07

#MindEye2: Shared-subject models enable fMRI-to-image with 1 hour of data arxiv.org/abs/2403.11207 Brain decoding of images using fMRI.

MindEye2 vs. MindEye1 reconstructions from fMRI brain activity using varying amounts of training data.

@arXiv_statML_bot@mastoxiv.page
2024-04-26 07:21:56

Distributionally Robust Safe Screening
Hiroyuki Hanada, Satoshi Akahane, Tatsuya Aoyama, Tomonari Tanaka, Yoshito Okura, Yu Inatsu, Noriaki Hashimoto, Taro Murayama, Lee Hanju, Shinya Kojima, Ichiro Takeuchi
arxiv.org/abs/2404.16328

@arXiv_csCL_bot@mastoxiv.page
2024-03-28 08:28:42

This arxiv.org/abs/2403.16516 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCR_bot@mastoxiv.page
2024-03-28 06:48:04

MisGUIDE : Defense Against Data-Free Deep Learning Model Extraction
Mahendra Gurve, Sankar Behera, Satyadev Ahlawat, Yamuna Prasad
arxiv.org/abs/2403.18580

@arXiv_csLG_bot@mastoxiv.page
2024-01-30 07:25:45

This arxiv.org/abs/2312.16554 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_eessIV_bot@mastoxiv.page
2024-01-30 05:44:16

Evaluation of pseudo-healthy image reconstruction for anomaly detection with deep generative models: Application to brain FDG PET
Ravi Hassanaly, Camille Brianceau, Maëlys Solal, Olivier Colliot, Ninon Burgos
arXiv.org/abs/2401.16363

@arXiv_eessSY_bot@mastoxiv.page
2024-02-29 07:07:52

Online Ecological Gearshift Strategy via Neural Network with Soft-Argmax Operator
Xi Luo, Shiying Dong, Jinlong Hong, Bingzhao Gao, Hong Chen
arxiv.org/abs/2402.18076

@arXiv_csCV_bot@mastoxiv.page
2024-04-26 08:32:36

This arxiv.org/abs/2404.13016 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCV_…

@arXiv_eessAS_bot@mastoxiv.page
2024-02-29 08:36:04

This arxiv.org/abs/2402.15725 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_eessSP_bot@mastoxiv.page
2024-01-30 07:21:49

This arxiv.org/abs/2401.05363 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@ErikJonker@mastodon.social
2024-03-20 09:16:18

(continued from previous post) ...a Blackwell GPU will cost at least $30,000, so training a GPT-4 model with 2,000 GPUs costs approx. $60 million? (over 90 days, and that is a minimum, since there are also other costs)
#training #GPT4
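
A back-of-the-envelope check of that estimate, counting hardware alone (power, networking, and staff are extra, as the toot concedes):

```python
gpus = 2_000
price_usd = 30_000               # stated per-GPU minimum
print(f"${gpus * price_usd:,}")  # -> $60,000,000 in GPUs alone
```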

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:53

Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM
Xuan Zhang, Wei Gao
arxiv.org/abs/2404.17283

@arXiv_csDC_bot@mastoxiv.page
2024-03-26 06:48:49

A Codesign of Scheduling and Parallelization for Large Model Training in Heterogeneous Clusters
Chunyu Xue, Weihao Cui, Han Zhao, Quan Chen, Shulai Zhang, Pengyu Yang, Jing Yang, Shaobo Li, Minyi Guo
arxiv.org/abs/2403.16125

@arXiv_physicscompph_bot@mastoxiv.page
2024-02-28 07:21:59

Generative diffusion model for surface structure discovery
Nikolaj Rønne, Alán Aspuru-Guzik, Bjørk Hammer
arxiv.org/abs/2402.17404

@arXiv_csCR_bot@mastoxiv.page
2024-03-29 06:47:47

CPR: Retrieval Augmented Generation for Copyright Protection
Aditya Golatkar, Alessandro Achille, Luca Zancato, Yu-Xiang Wang, Ashwin Swaminathan, Stefano Soatto
arxiv.org/abs/2403.18920

@arXiv_csSD_bot@mastoxiv.page
2024-02-27 07:18:58

GLA-Grad: A Griffin-Lim Extended Waveform Generation Diffusion Model
Haocheng Liu, Teysir Baoueb, Mathieu Fontaine (all IP Paris, LTCI, IDS, S2A), Jonathan Le Roux (MERL), Gaël Richard (IP Paris, LTCI, IDS, S2A)
arxiv.org/abs/2402.15516

@arXiv_csLG_bot@mastoxiv.page
2024-03-28 06:51:53

Deep Learning for Traffic Flow Prediction using Cellular Automata-based Model and CNN-LSTM architecture
Zhaohui Yang, Kshitij Jerath
arxiv.org/abs/2403.18710

@arXiv_csRO_bot@mastoxiv.page
2024-02-26 07:26:01

Dynamics-Guided Diffusion Model for Robot Manipulator Design
Xiaomeng Xu, Huy Ha, Shuran Song
arxiv.org/abs/2402.15038

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:58

Is Crowdsourcing Breaking Your Bank? Cost-Effective Fine-Tuning of Pre-trained Language Models with Proximal Policy Optimization
Shuo Yang, Gjergji Kasneci
arxiv.org/abs/2402.18284

@arXiv_eessIV_bot@mastoxiv.page
2024-03-28 06:54:02

CT-3DFlow : Leveraging 3D Normalizing Flows for Unsupervised Detection of Pathological Pulmonary CT scans
Aissam Djahnine, Alexandre Popoff, Emilien Jupin-Delevaux, Vincent Cottin, Olivier Nempont, Loic Boussel
arxiv.org/abs/2403.18514

@arXiv_csLG_bot@mastoxiv.page
2024-03-28 06:51:12

Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates
Natalie Lang, Alejandro Cohen, Nir Shlezinger
arxiv.org/abs/2403.18375

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 08:32:36

This arxiv.org/abs/2402.15861 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csLG_bot@mastoxiv.page
2024-04-30 09:09:13

This arxiv.org/abs/2404.11766 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_eessSP_bot@mastoxiv.page
2024-02-28 07:14:37

Leveraging power of deep learning for fast and efficient elite pixel selection in time series SAR interferometry
Ashutosh Tiwari, Nitheshnirmal Sadhashivam, Leonard O. Ohenhen, Manoochehr Shirzaei
arxiv.org/abs/2402.17069

@arXiv_eessIV_bot@mastoxiv.page
2024-03-29 06:53:53

Debiasing Cardiac Imaging with Controlled Latent Diffusion Models
Grzegorz Skorupko, Richard Osuala, Zuzanna Szafranowska, Kaisar Kushibar, Nay Aung, Steffen E Petersen, Karim Lekadir, Polyxeni Gkontra
arxiv.org/abs/2403.19508

@arXiv_csRO_bot@mastoxiv.page
2024-02-27 08:26:32

This arxiv.org/abs/2311.13226 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_csSE_bot@mastoxiv.page
2024-02-26 08:33:23

This arxiv.org/abs/2301.03553 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csSD_bot@mastoxiv.page
2024-03-26 06:52:49

Training Generative Adversarial Network-Based Vocoder with Limited Data Using Augmentation-Conditional Discriminator
Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka
arxiv.org/abs/2403.16464

@arXiv_csDC_bot@mastoxiv.page
2024-04-23 07:27:56

Breaking the Memory Wall for Heterogeneous Federated Learning with Progressive Training
Yebo Wu, Li Li, Chunlin Tian, Chengzhong Xu
arxiv.org/abs/2404.13349

@arXiv_csCR_bot@mastoxiv.page
2024-02-26 08:29:19

This arxiv.org/abs/2104.10561 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-03-28 08:28:27

This arxiv.org/abs/2403.16432 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_eessIV_bot@mastoxiv.page
2024-02-28 08:33:43

This arxiv.org/abs/2302.01622 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_csCL_bot@mastoxiv.page
2024-03-28 08:28:58

This arxiv.org/abs/2403.17636 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csDC_bot@mastoxiv.page
2024-03-28 08:27:43

This arxiv.org/abs/2403.15721 has been replaced.
initial toot: mastoxiv.page/@arXiv_csDC_…

@arXiv_eessIV_bot@mastoxiv.page
2024-03-27 06:53:53

A Study in Dataset Pruning for Image Super-Resolution
Brian B. Moser, Federico Raue, Andreas Dengel
arxiv.org/abs/2403.17083

@arXiv_csCR_bot@mastoxiv.page
2024-03-28 06:48:00

Bayesian Learned Models Can Detect Adversarial Malware For Free
Bao Gia Doan, Dang Quang Nguyen, Paul Montague, Tamas Abraham, Olivier De Vel, Seyit Camtepe, Salil S. Kanhere, Ehsan Abbasnejad, Damith C. Ranasinghe
arxiv.org/abs/2403.18309

@arXiv_eessIV_bot@mastoxiv.page
2024-02-28 06:54:04

How we won BraTS 2023 Adult Glioma challenge? Just faking it! Enhanced Synthetic Data Augmentation and Model Ensemble for brain tumour segmentation
André Ferreira, Naida Solak, Jianning Li, Philipp Dammann, Jens Kleesiek, Victor Alves, Jan Egger
arxiv.org/abs/2402.17317

@arXiv_csRO_bot@mastoxiv.page
2024-02-27 08:26:45

This arxiv.org/abs/2311.14153 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-23 06:56:04

Balanced Data Sampling for Language Model Training with Clustering
Yunfan Shao, Linyang Li, Zhaoye Fei, Hang Yan, Dahua Lin, Xipeng Qiu
arxiv.org/abs/2402.14526

@arXiv_csCR_bot@mastoxiv.page
2024-03-27 06:47:55

Hawk: Accurate and Fast Privacy-Preserving Machine Learning Using Secure Lookup Table Computation
Hamza Saleem, Amir Ziashahabi, Muhammad Naveed, Salman Avestimehr
arxiv.org/abs/2403.17296

@arXiv_csDC_bot@mastoxiv.page
2024-02-26 06:48:32

Convergence Analysis of Split Federated Learning on Heterogeneous Data
Pengchao Han, Chao Huang, Geng Tian, Ming Tang, Xin Liu
arxiv.org/abs/2402.15166

@arXiv_csLG_bot@mastoxiv.page
2024-02-23 06:51:43

Robust Training of Federated Models with Extremely Label Deficiency
Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han
arxiv.org/abs/2402.14430 …

@arXiv_csCR_bot@mastoxiv.page
2024-04-24 07:20:51

Leverage Variational Graph Representation For Model Poisoning on Federated Learning
Kai Li, Xin Yuan, Jingjing Zheng, Wei Ni, Falko Dressler, Abbas Jamalipour
arxiv.org/abs/2404.15042

@arXiv_csCL_bot@mastoxiv.page
2024-02-28 08:30:18

This arxiv.org/abs/2402.16458 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-22 06:55:04

Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering
Kosuke Akimoto, Kunihiro Takeoka, Masafumi Oyamada
arxiv.org/abs/2403.14197

@arXiv_eessIV_bot@mastoxiv.page
2024-02-27 06:54:04

Photon-counting CT using a Conditional Diffusion Model for Super-resolution and Texture-preservation
Christopher Wiedeman, Chuang Niu, Mengzhou Li, Bruno De Man, Jonathan S Maltz, Ge Wang
arxiv.org/abs/2402.16212

@arXiv_eessIV_bot@mastoxiv.page
2024-03-28 08:33:43

This arxiv.org/abs/2403.16335 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_csCR_bot@mastoxiv.page
2024-03-26 08:45:31

This arxiv.org/abs/2403.05030 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_eessIV_bot@mastoxiv.page
2024-02-27 06:54:20

Investigating the Robustness of Vision Transformers against Label Noise in Medical Image Classification
Bidur Khanal, Prashant Shrestha, Sanskar Amgain, Bishesh Khanal, Binod Bhattarai, Cristian A. Linte
arxiv.org/abs/2402.16734

@arXiv_csLG_bot@mastoxiv.page
2024-04-24 06:52:11

MiniMol: A Parameter-Efficient Foundation Model for Molecular Learning
Kerstin Kläser, Błażej Banaszewski, Samuel Maddrell-Mander, Callum McLean, Luis Müller, Ali Parviz, Shenyang Huang, Andrew Fitzgibbon
arxiv.org/abs/2404.14986