Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csAI_bot@mastoxiv.page
2025-07-23 09:56:42

Higher Gauge Flow Models
Alexander Strunk, Roland Assam
arxiv.org/abs/2507.16334 arxiv.org/pdf/2507.16334

@tiotasram@kolektiva.social
2025-07-23 06:15:10

How much of my children's future is AI going to burn up? That depends on how much we feed the hype beast. *That* is why "don't use AI at all without mentioning the drawbacks & a very good reason" is my stance (and I'm an AI researcher, technically).
Local models that run on your laptop: acceptable if produced by ethical means (including data sourcing & compensation for data filtering) & training costs are mitigated. Are such models way worse than the huge datacenter-scale models? Yes, for now. Deal with it.
ChatGPT, Claude, Copilot, even DeepSeek: get out. You're feeding the beast that is consuming my kids' future. Heck, even talking up these models, or saying "everyone is using them so it's okay" or "they're not going away," is feeding the beast even if you don't touch them.
I wish it weren't like this, because the capabilities of the big models are cool even once you cut past the hype.
#AI

@Techmeme@techhub.social
2025-07-22 15:01:23

Sources: Apple's team working on AI models wanted to release several as open source, Craig Federighi disagreed, largely concerned about public perception issues (The Information)
theinformation.com/articles/ap

@arXiv_csCV_bot@mastoxiv.page
2025-08-22 10:18:11

CM2LoD3: Reconstructing LoD3 Building Models Using Semantic Conflict Maps
Franz Hanke, Antonia Bieringer, Olaf Wysocki, Boris Jutzi
arxiv.org/abs/2508.15672

@arXiv_csSE_bot@mastoxiv.page
2025-07-23 08:08:32

Dr. Boot: Bootstrapping Program Synthesis Language Models to Perform Repairing
Noah van der Vleuten
arxiv.org/abs/2507.15889

@arXiv_csDC_bot@mastoxiv.page
2025-07-23 08:27:32

Cooling Matters: Benchmarking Large Language Models and Vision-Language Models on Liquid-Cooled Versus Air-Cooled H100 GPU Systems
Imran Latif, Muhammad Ali Shafique, Hayat Ullah, Alex C. Newkirk, Xi Yu, Arslan Munir
arxiv.org/abs/2507.16781

@arXiv_csCL_bot@mastoxiv.page
2025-06-23 12:12:00

CLEAR-3K: Assessing Causal Explanatory Capabilities in Language Models
Naiming Liu, Richard Baraniuk, Shashank Sonkar
arxiv.org/abs/2506.17180

@arXiv_csSE_bot@mastoxiv.page
2025-07-22 11:46:40

SustainDiffusion: Optimising the Social and Environmental Sustainability of Stable Diffusion Models
Giordano d'Aloisio, Tosin Fadahunsi, Jay Choy, Rebecca Moussa, Federica Sarro
arxiv.org/abs/2507.15663

@arXiv_csAI_bot@mastoxiv.page
2025-08-22 10:00:31

DeepThink3D: Enhancing Large Language Models with Programmatic Reasoning in Complex 3D Situated Reasoning Tasks
Jiayi Song, Rui Wan, Lipeng Ma, Weidong Yang, Qingyuan Zhou, Yixuan Li, Ben Fei
arxiv.org/abs/2508.15548

@Techmeme@techhub.social
2025-06-20 20:05:50

Anthropic's test of 16 top AI models from OpenAI and others found that, in some cases, they resorted to malicious behavior to avoid replacement or achieve goals (Ina Fried/Axios)
axios.com/2025/06/20/ai-models