Tootfinder

Opt-in global Mastodon full text search. Join the index!

@ErikJonker@mastodon.social
2026-03-30 13:40:13

Fun playing ARC-AGI-3: puzzles that the most advanced AI models can only solve about 1% of 😀
Illustrates how AI models look extremely smart but are at the same time quite dumb.
#AI

@arXiv_csCR_bot@mastoxiv.page
2026-03-31 09:30:12

Democratizing Federated Learning with Blockchain and Multi-Task Peer Prediction
Leon Witt, Kentaroh Toyoda, Wojciech Samek, Dan Li
arxiv.org/abs/2603.28434

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 10:11:22

Structural-Ambiguity-Aware Translation from Natural Language to Signal Temporal Logic
Kosei Fushimi, Kazunobu Serizawa, Junya Ikemoto, Kazumune Hashimoto
arxiv.org/abs/2603.28426 arxiv.org/pdf/2603.28426 arxiv.org/html/2603.28426
arXiv:2603.28426v1 Announce Type: new
Abstract: Signal Temporal Logic (STL) is widely used to specify timed and safety-critical tasks for cyber-physical systems, but writing STL formulas directly is difficult for non-expert users. Natural language (NL) provides a convenient interface, yet its inherent structural ambiguity makes one-to-one translation into STL unreliable. In this paper, we propose an ambiguity-preserving method for translating NL task descriptions into STL candidate formulas. The key idea is to retain multiple plausible syntactic analyses instead of forcing a single interpretation at the parsing stage. To this end, we develop a three-stage pipeline based on Combinatory Categorial Grammar (CCG): ambiguity-preserving $n$-best parsing, STL-oriented template-based semantic composition, and canonicalization with score aggregation. The proposed method outputs a deduplicated set of STL candidates with plausibility scores, thereby explicitly representing multiple possible formal interpretations of an ambiguous instruction. In contrast to existing one-best NL-to-logic translation methods, the proposed approach is designed to preserve attachment and scope ambiguity. Case studies on representative task descriptions demonstrate that the method generates multiple STL candidates for genuinely ambiguous inputs while collapsing unambiguous or canonically equivalent derivations to a single STL formula.
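As a hypothetical illustration (ours, not from the paper) of the attachment/scope ambiguity the abstract refers to: an instruction like "avoid the obstacle and reach the goal within 10 seconds" supports at least two STL readings, depending on what the time bound attaches to.

```latex
% Reading 1: only the reaching is time-bounded; avoidance is global.
\varphi_1 \;=\; \mathbf{G}\,\lnot\mathrm{obstacle} \;\land\; \mathbf{F}_{[0,10]}\,\mathrm{goal}
% Reading 2: the 10-second window scopes over both conjuncts.
\varphi_2 \;=\; \mathbf{G}_{[0,10]}\,\lnot\mathrm{obstacle} \;\land\; \mathbf{F}_{[0,10]}\,\mathrm{goal}
```

An ambiguity-preserving parser would keep both candidates (with plausibility scores) rather than committing to one.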
toXiv_bot_toot

@arXiv_csGR_bot@mastoxiv.page
2026-01-30 08:28:26

JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion
Anthony Chen, Naomi Ken Korem, Tavi Halperin, Matan Ben Yosef, Urska Jelercic, Ofir Bibi, Or Patashnik, Daniel Cohen-Or
arxiv.org/abs/2601.22143 arxiv.org/pdf/2601.22143 arxiv.org/html/2601.22143
arXiv:2601.22143v1 Announce Type: new
Abstract: Audio-Visual Foundation Models, which are pretrained to jointly generate sound and visual content, have recently shown an unprecedented ability to model multi-modal generation and editing, opening new opportunities for downstream tasks. Among these tasks, video dubbing could greatly benefit from such priors, yet most existing solutions still rely on complex, task-specific pipelines that struggle in real-world settings. In this work, we introduce a single-model approach that adapts a foundational audio-video diffusion model for video-to-video dubbing via a lightweight LoRA. The LoRA enables the model to condition on an input audio-video while jointly generating translated audio and synchronized facial motion. To train this LoRA, we leverage the generative model itself to synthesize paired multilingual videos of the same speaker. Specifically, we generate multilingual videos with language switches within a single clip, and then inpaint the face and audio in each half to match the language of the other half. By leveraging the rich generative prior of the audio-visual model, our approach preserves speaker identity and lip synchronization while remaining robust to complex motion and real-world dynamics. We demonstrate that our approach produces high-quality dubbed videos with improved visual fidelity, lip synchronization, and robustness compared to existing dubbing pipelines.
toXiv_bot_toot

@blackknight95857669@social.linux.pizza
2026-01-28 20:46:08

It's been 40 years. I still remember it well. I was in school. It's a few days till my 10th bday. Our (very small, maybe 14 kids) 4th grade class stopped our studies and tuned in to the launch. The surreal moment of watching the explosion grow while the announcer calmly continued reporting the stats before realizing there was a problem. The teacher having to explain what we just watched. The day we learned that going to space is still a very difficult task.

The famous pic of the Challenger explosion, the rocket boosters forming a V at the top of the pic as they veer away from the expanding cloud that used to be the space shuttle.
@UP8@mastodon.social
2026-02-27 17:46:58

🎧 Earbuds can be used to monitor brain health
#sensors

@jake4480@c.im
2026-03-21 17:57:10

New Bingo Boys album just came out today, and of COURSE it's a ripper.
#punk

@cyrevolt@mastodon.social
2026-03-25 22:02:31

Your task for today:
Opt out of #Copilot, because #Microslop forces you into it soon otherwise.
github.com/settings…

@metacurity@infosec.exchange
2026-02-25 15:18:08

Don't miss my latest CSO feature that examines how boards don't need more cyber metrics; they need risk signals so they can better understand the exposure, trajectory, and consequences of the threats their organizations face.
Thanks to Richard Bejtlich, Mike Hamilton, Wendy Nather, George Tsantes, and Bernard Brantley for their insights.

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:13:03

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[4/5]:
- Retrieving Climate Change Disinformation by Narrative
Upravitelev, Solopova, Jakob, Sahitaj, Möller, Schmitt
arxiv.org/abs/2603.22015 mastoxiv.page/@arXiv_csCL_bot/
- PaperVoyager : Building Interactive Web with Visual Language Models
Dasen Dai, Biao Wu, Meng Fang, Wenhao Wang
arxiv.org/abs/2603.22999 mastoxiv.page/@arXiv_csCL_bot/
- Continual Robot Skill and Task Learning via Dialogue
Weiwei Gu, Suresh Kondepudi, Anmol Gupta, Lixiao Huang, Nakul Gopalan
arxiv.org/abs/2409.03166 mastoxiv.page/@arXiv_csRO_bot/
- Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Zara Siddique, Irtaza Khalid, Liam D. Turner, Luis Espinosa-Anke
arxiv.org/abs/2503.05371 mastoxiv.page/@arXiv_csLG_bot/
- SkillFlow: Scalable and Efficient Agent Skill Retrieval System
Fangzhou Li, Pagkratios Tagkopoulos, Ilias Tagkopoulos
arxiv.org/abs/2504.06188 mastoxiv.page/@arXiv_csAI_bot/
- Large Language Models for Computer-Aided Design: A Survey
Licheng Zhang, Bach Le, Naveed Akhtar, Siew-Kei Lam, Tuan Ngo
arxiv.org/abs/2505.08137 mastoxiv.page/@arXiv_csLG_bot/
- Structured Agent Distillation for Large Language Model
Liu, Kong, Dong, Yang, Li, Tang, Yuan, Niu, Zhang, Zhao, Lin, Huang, Wang
arxiv.org/abs/2505.13820 mastoxiv.page/@arXiv_csLG_bot/
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
Fan, Zhang, Li, Zhang, Chen, Hu, Wang, Qu, Zhou, Wang, Yan, Xu, Theiss, Chen, Li, Tu, Wang, Ranjan
arxiv.org/abs/2505.20279 mastoxiv.page/@arXiv_csCV_bot/
- Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Bhattacharjee, Tian, Rubin, Lo, Merchant, Hanson, Gounley, Tandon
arxiv.org/abs/2506.04450 mastoxiv.page/@arXiv_csCR_bot/
- L-MARS: Legal Multi-Agent Workflow with Orchestrated Reasoning and Agentic Search
Ziqi Wang, Boqin Yuan
arxiv.org/abs/2509.00761 mastoxiv.page/@arXiv_csAI_bot/
- Your Models Have Thought Enough: Training Large Reasoning Models to Stop Overthinking
Han, Huang, Liao, Jiang, Lu, Zhao, Wang, Zhou, Jiang, Liang, Zhou, Sun, Yu, Xiao
arxiv.org/abs/2509.23392 mastoxiv.page/@arXiv_csAI_bot/
- Person-Centric Annotations of LAION-400M: Auditing Bias and Its Transfer to Models
Leander Girrbach, Stephan Alaniz, Genevieve Smith, Trevor Darrell, Zeynep Akata
arxiv.org/abs/2510.03721 mastoxiv.page/@arXiv_csCV_bot/
- Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Zhang, Hu, Upasani, Ma, Hong, Kamanuru, Rainton, Wu, Ji, Li, Thakker, Zou, Olukotun
arxiv.org/abs/2510.04618 mastoxiv.page/@arXiv_csLG_bot/
- Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling
Giannone, Xu, Nayak, Awhad, Sudalairaj, Xu, Srivastava
arxiv.org/abs/2510.05825 mastoxiv.page/@arXiv_csLG_bot/
- Complete asymptotic type-token relationship for growing complex systems with inverse power-law co...
Pablo Rosillo-Rodes, Laurent Hébert-Dufresne, Peter Sheridan Dodds
arxiv.org/abs/2511.02069 mastoxiv.page/@arXiv_physicsso
- ViPRA: Video Prediction for Robot Actions
Sandeep Routray, Hengkai Pan, Unnat Jain, Shikhar Bahl, Deepak Pathak
arxiv.org/abs/2511.07732 mastoxiv.page/@arXiv_csRO_bot/
- AISAC: An Integrated multi-agent System for Transparent, Retrieval-Grounded Scientific Assistance
Chandrachur Bhattacharya, Sibendu Som
arxiv.org/abs/2511.14043
- VideoARM: Agentic Reasoning over Hierarchical Memory for Long-Form Video Understanding
Yufei Yin, Qianke Meng, Minghao Chen, Jiajun Ding, Zhenwei Shao, Zhou Yu
arxiv.org/abs/2512.12360 mastoxiv.page/@arXiv_csCV_bot/
- RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering
Léo Butsanets, Charles Corbière, Julien Khlaut, Pierre Manceron, Corentin Dancette
arxiv.org/abs/2512.17396 mastoxiv.page/@arXiv_csCV_bot/
- Measuring all the noises of LLM Evals
Sida Wang
arxiv.org/abs/2512.21326 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@danyork@mastodon.social
2026-01-16 20:16:33

Forty years ago, 21 people gathered for the first meeting of what became the Internet Engineering Task Force or #IETF . Every day billions of people use the open standards and technologies developed in the IETF. And nearly 8000 volunteer IETF participants from around the world collaborate in more than 100 working groups evolving those open standards and making the Internet work better!

@newsie@darktundra.xyz
2026-01-13 17:03:38

Senior military cyber operator removed from Russia task force therecord.media/senior-militar

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:45:01

Statistical Query Lower Bounds for Smoothed Agnostic Learning
Ilias Diakonikolas, Daniel M. Kane
arxiv.org/abs/2602.21191 arxiv.org/pdf/2602.21191 arxiv.org/html/2602.21191
arXiv:2602.21191v1 Announce Type: new
Abstract: We study the complexity of smoothed agnostic learning, recently introduced by [CKKMS24], in which the learner competes with the best classifier in a target class under slight Gaussian perturbations of the inputs. Specifically, we focus on the prototypical task of agnostically learning halfspaces under subgaussian distributions in the smoothed model. The best known upper bound for this problem relies on $L_1$-polynomial regression and has complexity $d^{\tilde{O}(1/\sigma^2) \log(1/\epsilon)}$, where $\sigma$ is the smoothing parameter and $\epsilon$ is the excess error. Our main result is a Statistical Query (SQ) lower bound providing formal evidence that this upper bound is close to best possible. In more detail, we show that (even for Gaussian marginals) any SQ algorithm for smoothed agnostic learning of halfspaces requires complexity $d^{\Omega(1/\sigma^{2} \log(1/\epsilon))}$. This is the first non-trivial lower bound on the complexity of this task and nearly matches the known upper bound. Roughly speaking, we show that applying $L_1$-polynomial regression to a smoothed version of the function is essentially best possible. Our techniques involve finding a moment-matching hard distribution by way of linear programming duality. This dual program corresponds exactly to finding a low-degree approximating polynomial to the smoothed version of the target function (which turns out to be the same condition required for the $L_1$-polynomial regression to work). Our explicit SQ lower bound then comes from proving lower bounds on this approximation degree for the class of halfspaces.
toXiv_bot_toot

@geant@mstdn.social
2026-01-13 10:20:17

What if our networks could do more than just carry data?
In December, the Fibre Sensing Task of the GÉANT (GN5-2) Project, together with SURF @… turned 57 km of live optical fibre into a sensor.
The result? Detecting everything from trams to a plane landing at Amsterdam Airport Schiphol.
🎥 Watch Chris Atherton walk us through the experiment.

Chris Atherton (GÉANT) explains a recent fibre sensing experiment carried out on the GÉANT network.

In December, the Fibre Sensing Task of the GÉANT (GN5-2) Project, with support from SURF, the Dutch National Research and Education Network (NREN), carried out an hour-long experiment using Distributed Acoustic Sensing (DAS).

During the experiment, a laser signal was injected into the same optical fibre that carries live internet traffic. This effectively turned the fibre optic cable into a sensor…
@compfu@mograph.social
2026-02-15 22:37:58

Oh well, our upcoming client doesn't provide cc files for each shot. Instead, we need to extract the grading values from an EDL file. Fortunately I already wrote a script to do that a few years ago for another show.
The time to write a script might be more than it takes to do the task manually (here it would be copying values from a text file to an XML file). But it pays off if you have to repeat the task. Even if that is 7 years later.
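For context, a minimal sketch of that kind of extraction, assuming the EDL carries its grades as the usual ASC CDL comment lines (`* ASC_SOP (...)(...)(...)` and `* ASC_SAT x`); the function and output format are illustrative, not the actual edl_to_cc.py:

```python
import re

# ASC CDL comment lines as they typically appear in a CMX3600-style EDL.
# Exact layout varies by vendor, so treat these patterns as an assumption.
SOP_RE = re.compile(r"\*\s*ASC_SOP\s*\(([^)]*)\)\s*\(([^)]*)\)\s*\(([^)]*)\)")
SAT_RE = re.compile(r"\*\s*ASC_SAT\s+([\d.]+)")

def parse_cdl(edl_text):
    """Collect {slope, offset, power, sat} dicts, one per graded event."""
    grades = []
    for line in edl_text.splitlines():
        sop = SOP_RE.search(line)
        if sop:
            # Each group is a space-separated RGB triple.
            s, o, p = (tuple(float(x) for x in g.split()) for g in sop.groups())
            grades.append({"slope": s, "offset": o, "power": p, "sat": 1.0})
            continue
        sat = SAT_RE.search(line)
        if sat and grades:  # ASC_SAT follows the ASC_SOP of the same event
            grades[-1]["sat"] = float(sat.group(1))
    return grades
```

From there, writing each dict out as a .cc (ColorCorrection XML) file is a few lines of templating.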

screenshot of a git repository showing a commit from December of 2018 for a tool called edl_to_cc.py
@anderelampe@chaos.social
2026-01-20 09:52:57

Oh happy task. #academicchatter

Happy Seal Meme: Close-up photo of a seal, closed eyes and a smile on its face, like it is enjoying something very much. At the top of the image in outlined Impact font: "The Feeling" and at the bottom of the image in outlined Impact font: "proofreading the accepted paper"
@qurlyjoe@mstdn.social
2026-01-17 23:59:21

So I’ve got a new gig of sorts. I’ll be a volunteer photographer for the city parks system. The task will be to take pics of folks participating in various programs in the parks and natural areas run by other volunteers, to try and capture that attendees are having fun, especially the kids. The difficulty level is that I’ve never liked photographing people. Go out of my way to keep them out of shots. I’ve done a couple events now, and still just feel intrusive. Hope it gets easier.

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-27 08:44:52

Automating Computational Chemistry Workflows via OpenClaw and Domain-Specific Skills
Mingwei Ding, Chen Huang, Yibo Hu, Yifan Li, Zitian Lu, Xingtai Yu, Duo Zhang, Wenxi Zhai, Tong Zhu, Qiangqiang Gu, Jinzhe Zeng
arxiv.org/abs/2603.25522 arxiv.org/pdf/2603.25522 arxiv.org/html/2603.25522
arXiv:2603.25522v1 Announce Type: new
Abstract: Automating multistep computational chemistry tasks remains challenging because reasoning, workflow specification, software execution, and high-performance computing (HPC) execution are often tightly coupled. We demonstrate a decoupled agent-skill design for computational chemistry automation leveraging OpenClaw. Specifically, OpenClaw provides centralized control and supervision; schema-defined planning skills translate scientific goals into executable task specifications; domain skills encapsulate specific computational chemistry procedures; and DPDispatcher manages job execution across heterogeneous HPC environments. In a molecular dynamics (MD) case study of methane oxidation, the system completed cross-tool execution, bounded recovery from runtime failures, and reaction network extraction, illustrating a scalable and maintainable approach to multistep computational chemistry automation.
toXiv_bot_toot

@rachel@norfolk.social
2026-01-17 14:00:13

Quite tempted to occupy a section of Tony Blair’s property and build a house on it. You know, so he can really understand the task he has so gleefully taken on.
Obviously, I wouldn’t really do this. I’d be stuck with dreadful neighbours.

@arXiv_csDC_bot@mastoxiv.page
2026-01-22 07:36:07

Exploring Performance-Productivity Trade-offs in AMT Runtimes: A Task Bench Study of Itoyori, ItoyoriFBC, HPX, and MPI
Torben R. Lahnor, Mia Reitz, Jonas Posner, Patrick Diehl
arxiv.org/abs/2601.14608

@drbruced@aus.social
2026-02-17 02:19:47

Today I made a 2 line change to a file on GitHub. Copilot suggested a spectacularly incorrect summary of my change for the commit message, so I deleted it, finished the commit, and asked CoPilot “how do I disable CoPilot commit message suggestions.” THAT task was in its wheelhouse. #AIslop

@NFL@darktundra.xyz
2026-02-19 13:50:34

Cowboys DC Christian Parker on new scheme: 'You build it around the players' nfl.com/news/cowboys-dc-christ

@TFG@social.linux.pizza
2026-03-09 06:39:43

OK... news from the "involuntary admin" front.
Info I had from my partner's mother:
"I can do nothing on my laptop any more. There's always a message 'no permission' when I try X or Y. And it said deleting cookies may help so I tried. But something went wrong and I was afraid to continue"
What I saw when I checked the laptop:
- Firefox icon in task bar and desktop blank
- Chrome icon in task bar and desktop blank
- Virus scanner…

@Richard@worklifepsych.social
2026-03-19 10:52:42

🎧 Here's a short excerpt from this week's #podcast, where I use my own example of addressing procrastination- finally taking action on a task that's been on my list for weeks!
youtu.be/KJSaXw4CUEs

@bencurthoys@mastodon.social
2026-01-13 20:55:47

For balance, this is my favourite ever Dilbert strip, back from when it was good.

Dilbert comic strip. His boss asks him to drop everything for an urgent task. Dilbert asks whether it is more urgent than what he is doing now. The PHB doesn't know and gets Alice to do it. Dilbert is actually playing quake.
@bthalpin@mastodon.social
2026-02-09 14:15:40

I've just finished a modestly onerous administrative task.
It led me to think about effort and the three legs of my job (research, teaching, and admin): while there are intrinsic reasons to do research and teaching to the best of your ability, for admin "good enough" is enough. Focus on efficiency, not excellence, where efficiency includes making things work and not causing extra hassle in the medium term.

@theodric@social.linux.pizza
2026-02-16 09:36:34

Grok gets it

Grok output: 

-----------

Q: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

[40 pages]

-----------

Drive.

The goal is to wash the car, not to get yourself to the car wash.

Walking 50 meters gets you there in about 30-45 seconds (maybe 20 if you're speed-walking), but your dirty car remains exactly where it is: still dirty. To actually accomplish the task, the car has to travel those 50 meters.
@nobodyinperson@fosstodon.org
2026-01-07 14:15:50

Has anyone ever gotten `services.webdav` to work on :nixos: #NixOS?
I have a very simple and probably very common task: Expose a directory (in my case `/var/lib/paperless/consume`) via #WebDAV so my scanner can upload their PDFs there. I'm pretty much doing this¹ here, but also this person has p…

@iam_jfnklstrm@social.linux.pizza
2026-03-05 07:52:31

Flipping my task-priority list around: waiting for a colleague because he's the admin on a server I still can't access. So I wrote a bash script he can run to add me to sudoers. Hard to fix CSS on a server when you can neither get in nor make changes in nano (I know, vim exists too, but that's not where my muscle memory is)

@blackknight95857669@social.linux.pizza
2026-03-22 13:23:02

Been another eventful week around the "new" house. Got the shed built. What a pain in the ass metal sheds are. This one was no different. Shout-out to whoever decided it was a great idea to plastic wrap every painted panel like they were PC case panels. I hope you stub a pinky toe every other day for the rest of your life.
With that done, next task was to put up the shelf frames I brought with me. Was able to get 3 shelves cut out of the former back porch ramp plywood. Got 3…

@adamhotep@infosec.exchange
2026-03-18 15:37:47

RE: #uspol

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:45:31

Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs
Yining Hong, Huang Huang, Manling Li, Li Fei-Fei, Jiajun Wu, Yejin Choi
arxiv.org/abs/2602.21198 arxiv.org/pdf/2602.21198 arxiv.org/html/2602.21198
arXiv:2602.21198v1 Announce Type: new
Abstract: Embodied LLMs endow robots with high-level task reasoning, but they cannot reflect on what went wrong or why, turning deployment into a sequence of independent trials where mistakes repeat rather than accumulate into experience. Drawing upon human reflective practitioners, we introduce Reflective Test-Time Planning, which integrates two modes of reflection: reflection-in-action, where the agent uses test-time scaling to generate and score multiple candidate actions using internal reflections before execution; and reflection-on-action, which uses test-time training to update both its internal reflection model and its action policy based on external reflections after execution. We also include retrospective reflection, allowing the agent to re-evaluate earlier decisions and perform model updates with hindsight for proper long-horizon credit assignment. Experiments on our newly-designed Long-Horizon Household benchmark and MuJoCo Cupboard Fitting benchmark show significant gains over baseline models, with ablative studies validating the complementary roles of reflection-in-action and reflection-on-action. Qualitative analyses, including real-robot trials, highlight behavioral correction through reflection.
toXiv_bot_toot

@raiders@darktundra.xyz
2026-02-10 19:17:38

Raiders Get Compelling Words Over Defensive Coordinator Search heavy.com/sports/nfl/las-vegas

@jeang3nie@social.linux.pizza
2026-03-06 16:10:24

#Sunstone #browser now remembers your open tabs when you close it and re-opens them the next time you launch it. Another task knocked off the todo list.

@CubitOom@social.linux.pizza
2026-02-03 21:50:08

AOC: It’s our task to figure out how to claw back what has essentially supercharged this agency into becoming a relentless domestic paramilitary that is also a blank check to Palantir to create facial-recognition scans on US citizens.
Source:
reddit.com/comments/1qug0xd

@chris@mstdn.chrisalemany.ca
2026-01-20 17:53:23

Prime Minister of Canada Mark Carney's speech at Davos. It was a good one. This is how he ended it, but it is worth watching in full, including the Q&A afterward.
“We know the old order is not coming back. We shouldn’t mourn it. Nostalgia is not a strategy, but we believe that from the fracture we can build something bigger, better, stronger, more just. This is the task of the middle powers, the countries that have the most to lose from a world of fortresses and the most to gain from genuine cooperation.
The powerful have their power. But we have something too: the capacity to stop pretending, to name realities, to build our strength at home, and to act together.
That is Canada’s path. We choose it openly and confidently, and it is a path wide open to any country willing to take it with us.”
#CanPoli #CdnPoli #Canada #USA

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:38:31

From Isolation to Integration: Building an Adaptive Expert Forest for Pre-Trained Model-based Class-Incremental Learning
Ruiqi Liu, Boyu Diao, Hangda Liu, Zhulin An, Fei Wang, Yongjun Xu
arxiv.org/abs/2602.20911 arxiv.org/pdf/2602.20911 arxiv.org/html/2602.20911
arXiv:2602.20911v1 Announce Type: new
Abstract: Class-Incremental Learning (CIL) requires models to learn new classes without forgetting old ones. A common method is to freeze a pre-trained model and train a new, lightweight adapter for each task. While this prevents forgetting, it treats the learned knowledge as a simple, unstructured collection and fails to use the relationships between tasks. To this end, we propose the Semantic-guided Adaptive Expert Forest (SAEF), a new method that organizes adapters into a structured hierarchy for better knowledge sharing. SAEF first groups tasks into conceptual clusters based on their semantic relationships. Then, within each cluster, it builds a balanced expert tree by creating new adapters from merging the adapters of similar tasks. At inference time, SAEF finds and activates a set of relevant experts from the forest for any given input. The final prediction is made by combining the outputs of these activated experts, weighted by how confident each expert is. Experiments on several benchmark datasets show that SAEF achieves SOTA performance.
toXiv_bot_toot

@arXiv_csGR_bot@mastoxiv.page
2026-02-03 07:44:55

Genus-0 Surface Parameterization using Spherical Beltrami Differentials
Zhehao Xu, Lok Ming Lui
arxiv.org/abs/2602.01589 arxiv.org/pdf/2602.01589 arxiv.org/html/2602.01589
arXiv:2602.01589v1 Announce Type: new
Abstract: Spherical surface parameterization is a fundamental tool in geometry processing and imaging science. For a genus-0 closed surface, many efficient algorithms can map the surface to the sphere; consequently, a broad class of task-driven genus-0 mapping problems can be reduced to constructing a high-quality spherical self-map. However, existing approaches often face a trade-off between satisfying task objectives (e.g., landmark or feature alignment), maintaining bijectivity, and controlling geometric distortion. We introduce the Spherical Beltrami Differential (SBD), a two-chart representation of quasiconformal self-maps of the sphere, and establish its correspondence with spherical homeomorphisms up to conformal automorphisms. Building on the Spectral Beltrami Network (SBN), we propose a neural optimization framework BOOST that optimizes two Beltrami fields on hemispherical stereographic charts and enforces global consistency through explicit seam-aware constraints. Experiments on large-deformation landmark matching and intensity-based spherical registration demonstrate the effectiveness of our proposed framework. We further apply the method to brain cortical surface registration, aligning sulcal landmarks and jointly matching cortical sulci depth maps, showing improved task fidelity with controlled distortion and robust bijective behavior.
toXiv_bot_toot

@compfu@mograph.social
2026-03-05 21:11:39

Things you're able to do in a VFX pipeline but it would suck:
1. work with internal shot names that are different from what the client is using.
2. change version numbers of files sent out to the client to hide your internal number of revisions.
Things that make a good VFX pipeline:
1. have artists work with the same task names ("comp_v03") across shows and have scripts rename files you upload if a client demands it ("cmp_v0003")
The forme…
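The renaming idea in that second point can be sketched in a few lines; the "comp" → "cmp" mapping and four-digit padding are taken from the example above, and the dictionary, function name, and filename pattern are illustrative assumptions:

```python
import re

# Illustrative internal-to-client task-name map and version padding,
# based on the "comp_v03" -> "cmp_v0003" example.
CLIENT_TASK_NAMES = {"comp": "cmp"}
CLIENT_VERSION_PAD = 4

def to_client_name(filename):
    """Rewrite the task token and version padding in an outgoing filename."""
    def repl(m):
        task = CLIENT_TASK_NAMES.get(m.group("task"), m.group("task"))
        version = int(m.group("ver"))
        return f"{task}_v{version:0{CLIENT_VERSION_PAD}d}"
    # Match "<task>_v<digits>" anywhere in the name, e.g. "comp_v03".
    return re.sub(r"(?P<task>[a-z]+)_v(?P<ver>\d+)", repl, filename)

print(to_client_name("sh010_comp_v03.exr"))  # sh010_cmp_v0003.exr
```

Artists keep working with the internal names; only the upload step applies the client's convention.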

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:37:11

Exploring the Impact of Parameter Update Magnitude on Forgetting and Generalization of Continual Learning
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20796 arxiv.org/pdf/2602.20796 arxiv.org/html/2602.20796
arXiv:2602.20796v1 Announce Type: new
Abstract: The magnitude of parameter updates is considered a key factor in continual learning. However, most existing studies focus on designing diverse update strategies, while a theoretical understanding of the underlying mechanisms remains limited. Therefore, we characterize a model's forgetting from the perspective of parameter update magnitude and formalize it as knowledge degradation induced by task-specific drift in the parameter space, which has not been fully captured in previous studies due to their assumption of a unified parameter space. By deriving the optimal parameter update magnitude that minimizes forgetting, we unify two representative update paradigms, frozen training and initialized training, within an optimization framework for constrained parameter updates. Our theoretical results further reveal that sequence tasks with small parameter distances exhibit better generalization and less forgetting under frozen training rather than initialized training. These theoretical insights inspire a novel hybrid parameter update strategy that adaptively adjusts update magnitude based on gradient directions. Experiments on deep neural networks demonstrate that this hybrid approach outperforms standard training strategies, providing new theoretical perspectives and practical inspiration for designing efficient and scalable continual learning algorithms.
toXiv_bot_toot

@arXiv_csGR_bot@mastoxiv.page
2026-01-21 08:02:08

Proc3D: Procedural 3D Generation and Parametric Editing of 3D Shapes with Large Language Models
Fadlullah Raji, Stefano Petrangeli, Matheus Gadelha, Yu Shen, Uttaran Bhattacharya, Gang Wu
arxiv.org/abs/2601.12234 arxiv.org/pdf/2601.12234 arxiv.org/html/2601.12234
arXiv:2601.12234v1 Announce Type: new
Abstract: Generating 3D models has traditionally been a complex task requiring specialized expertise. While recent advances in generative AI have sought to automate this process, existing methods produce non-editable representation, such as meshes or point clouds, limiting their adaptability for iterative design. In this paper, we introduce Proc3D, a system designed to generate editable 3D models while enabling real-time modifications. At its core, Proc3D introduces procedural compact graph (PCG), a graph representation of 3D models, that encodes the algorithmic rules and structures necessary for generating the model. This representation exposes key parameters, allowing intuitive manual adjustments via sliders and checkboxes, as well as real-time, automated modifications through natural language prompts using Large Language Models (LLMs). We demonstrate Proc3D's capabilities using two generative approaches: GPT-4o with in-context learning (ICL) and a fine-tuned LLAMA-3 model. Experimental results show that Proc3D outperforms existing methods in editing efficiency, achieving more than 400x speedup over conventional approaches that require full regeneration for each modification. Additionally, Proc3D improves ULIP scores by 28%, a metric that evaluates the alignment between generated 3D models and text prompts. By enabling text-aligned 3D model generation along with precise, real-time parametric edits, Proc3D facilitates highly accurate text-based image editing applications.
toXiv_bot_toot

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 16:08:08

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/6]:
- Neural Proposals, Symbolic Guarantees: Neuro-Symbolic Graph Generation with Hard Constraints
Chuqin Geng, Li Zhang, Mark Zhang, Haolin Ye, Ziyu Zhao, Xujie Si
arxiv.org/abs/2602.16954 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Probe Zero Collision Hash (MPZCH): Mitigating Embedding Collisions and Enhancing Model Fres...
Ziliang Zhao, et al.
arxiv.org/abs/2602.17050 mastoxiv.page/@arXiv_csLG_bot/
- MASPO: Unifying Gradient Utilization, Probability Mass, and Signal Reliability for Robust and Sam...
Fu, Lin, Fang, Zheng, Hu, Shao, Qin, Pan, Zeng, Cai
arxiv.org/abs/2602.17550 mastoxiv.page/@arXiv_csLG_bot/
- A Theoretical Framework for Modular Learning of Robust Generative Models
Corinna Cortes, Mehryar Mohri, Yutao Zhong
arxiv.org/abs/2602.17554 mastoxiv.page/@arXiv_csLG_bot/
- Multi-Round Human-AI Collaboration with User-Specified Requirements
Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
arxiv.org/abs/2602.17646 mastoxiv.page/@arXiv_csLG_bot/
- NEXUS: A compact neural architecture for high-resolution spatiotemporal air quality forecasting i...
Rampunit Kumar, Aditya Maheshwari
arxiv.org/abs/2602.19654 mastoxiv.page/@arXiv_csLG_bot/
- Augmenting Lateral Thinking in Language Models with Humor and Riddle Data for the BRAINTEASER Task
Mina Ghashami, Soumya Smruti Mishra
arxiv.org/abs/2405.10385 mastoxiv.page/@arXiv_csCL_bot/
- Watermarking Language Models with Error Correcting Codes
Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani
arxiv.org/abs/2406.10281 mastoxiv.page/@arXiv_csCR_bot/
- Learning to Control Unknown Strongly Monotone Games
Siddharth Chandak, Ilai Bistritz, Nicholas Bambos
arxiv.org/abs/2407.00575 mastoxiv.page/@arXiv_csMA_bot/
- Classification and reconstruction for single-pixel imaging with classical and quantum neural netw...
Sofya Manko, Dmitry Frolovtsev
arxiv.org/abs/2407.12506 mastoxiv.page/@arXiv_quantph_b
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
arxiv.org/abs/2410.16106 mastoxiv.page/@arXiv_statML_bo
- Big data approach to Kazhdan-Lusztig polynomials
Abel Lacabanne, Daniel Tubbenhauer, Pedro Vaz
arxiv.org/abs/2412.01283 mastoxiv.page/@arXiv_mathRT_bo
- MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition
Mehran Shabanpour, Kasra Rad, Sadaf Khademi, Arash Mohammadi
arxiv.org/abs/2502.17457 mastoxiv.page/@arXiv_eessSP_bo
- Tightening Optimality gap with confidence through conformal prediction
Miao Li, Michael Klamkin, Russell Bent, Pascal Van Hentenryck
arxiv.org/abs/2503.04071 mastoxiv.page/@arXiv_statML_bo
- SEED: Towards More Accurate Semantic Evaluation for Visual Brain Decoding
Juhyeon Park, Peter Yongho Kim, Jiook Cha, Shinjae Yoo, Taesup Moon
arxiv.org/abs/2503.06437 mastoxiv.page/@arXiv_csCV_bot/
- How much does context affect the accuracy of AI health advice?
Prashant Garg, Thiemo Fetzer
arxiv.org/abs/2504.18310 mastoxiv.page/@arXiv_econGN_bo
- Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos
arxiv.org/abs/2505.06646 mastoxiv.page/@arXiv_eessIV_bo
- Sharp Gaussian approximations for Decentralized Federated Learning
Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu
arxiv.org/abs/2505.08125 mastoxiv.page/@arXiv_statML_bo
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
arxiv.org/abs/2505.17645 mastoxiv.page/@arXiv_csCV_bot/
- A Copula Based Supervised Filter for Feature Selection in Diabetes Risk Prediction Using Machine ...
Agnideep Aich, Md Monzur Murshed, Sameera Hewage, Amanda Mayeaux
arxiv.org/abs/2505.22554 mastoxiv.page/@arXiv_statML_bo
- Synthesis of discrete-continuous quantum circuits with multimodal diffusion models
Florian Fürrutter, Zohim Chandani, Ikko Hamamura, Hans J. Briegel, Gorka Muñoz-Gil
arxiv.org/abs/2506.01666 mastoxiv.page/@arXiv_quantph_b
toXiv_bot_toot