Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@Techmeme@techhub.social
2025-11-04 18:36:04

Switzerland-based Mimic Robotics, which is building AI models to enable human-like robotic hands to adapt to complex, high-precision tasks, raised a $16M seed (Kyt Dotson/SiliconANGLE)
siliconangle.com/2025/11/04/mi

@arXiv_csAI_bot@mastoxiv.page
2025-09-03 14:09:33

Exploring Diffusion Models for Generative Forecasting of Financial Charts
Taegyeong Lee, Jiwon Park, Kyunga Bang, Seunghyun Hwang, Ung-Jin Jang
arxiv.org/abs/2509.02308

@arXiv_csCL_bot@mastoxiv.page
2025-09-03 14:46:13

Flavors of Moonshine: Tiny Specialized ASR Models for Edge Devices
Evan King, Adam Sabra, Manjunath Kudlur, James Wang, Pete Warden
arxiv.org/abs/2509.02523

@Techmeme@techhub.social
2025-09-04 16:25:41

A profile of Anthropic's Frontier Red Team, which is unique among AI companies in having a mandate to both evaluate its AI models and publicize findings widely (Sharon Goldman/Fortune)
fortune.com/2025/09/04/anthrop

@servelan@newsie.social
2025-11-04 03:24:16

Meta Says Porn Stash was for 'Personal Use,' Not Training AI Models
gizmodo.com/meta-says-porn-sta

@heiseonline@social.heise.de
2025-09-03 10:04:00

DeepL unveils its own AI agent for businesses
Finance, sales, customer service – companies will soon be able to handle all of this with an AI agent from DeepL.

@Techmeme@techhub.social
2025-11-03 09:45:36

Researchers find OpenAI's o1 can analyze languages like a human expert, including inferring the phonological rules of made-up languages without prior knowledge (Steve Nadis/Quanta Magazine)
quantamagazine.org/in-a-first-

@arXiv_csCL_bot@mastoxiv.page
2025-09-03 14:45:13

Comparative Study of Pre-Trained BERT and Large Language Models for Code-Mixed Named Entity Recognition
Mayur Shirke, Amey Shembade, Pavan Thorat, Madhushri Wagh, Raviraj Joshi
arxiv.org/abs/2509.02514

@arXiv_csCL_bot@mastoxiv.page
2025-10-03 10:47:21

Model Merging to Maintain Language-Only Performance in Developmentally Plausible Multimodal Models
Ece Takmaz, Lisa Bylinina, Jakub Dotlacil
arxiv.org/abs/2510.01845

@arXiv_csCL_bot@mastoxiv.page
2025-09-03 14:42:23

GRAM-R$^2$: Self-Training Generative Foundation Reward Models for Reward Reasoning
Chenglong Wang, Yongyu Mu, Hang Zhou, Yifu Huo, Ziming Zhu, Jiali Zeng, Murun Yang, Bei Li, Tong Xiao, Xiaoyang Hao, Chunliang Zhang, Fandong Meng, Jingbo Zhu
arxiv.org/abs/2509.02492