Switzerland-based Mimic Robotics, which is building AI models to enable human-like robotic hands to adapt to complex, high-precision tasks, raised a $16M seed (Kyt Dotson/SiliconANGLE)
https://siliconangle.com/2025/11/04/mimic-raises-16m-build-a…
Exploring Diffusion Models for Generative Forecasting of Financial Charts
Taegyeong Lee, Jiwon Park, Kyunga Bang, Seunghyun Hwang, Ung-Jin Jang
https://arxiv.org/abs/2509.02308
Flavors of Moonshine: Tiny Specialized ASR Models for Edge Devices
Evan King, Adam Sabra, Manjunath Kudlur, James Wang, Pete Warden
https://arxiv.org/abs/2509.02523
A profile of Anthropic's Frontier Red Team, which is unique among AI companies in having a mandate to both evaluate its AI models and publicize findings widely (Sharon Goldman/Fortune)
https://fortune.com/2025/09/04/anthrop
Meta Says Porn Stash was for 'Personal Use,' Not Training AI Models
https://gizmodo.com/meta-says-porn-stash-was-for-personal-use-not-training-ai-models-2000679672
DeepL unveils its own AI agent for businesses
Finance, sales, customer service: companies will soon be able to handle all of this with an AI agent from DeepL.
https://www.…
Researchers find OpenAI's o1 can analyze languages like a human expert, including inferring the phonological rules of made-up languages without prior knowledge (Steve Nadis/Quanta Magazine)
https://www.quantamagazine.org/in-a-first-
Comparative Study of Pre-Trained BERT and Large Language Models for Code-Mixed Named Entity Recognition
Mayur Shirke, Amey Shembade, Pavan Thorat, Madhushri Wagh, Raviraj Joshi
https://arxiv.org/abs/2509.02514
Model Merging to Maintain Language-Only Performance in Developmentally Plausible Multimodal Models
Ece Takmaz, Lisa Bylinina, Jakub Dotlacil
https://arxiv.org/abs/2510.01845
GRAM-R$^2$: Self-Training Generative Foundation Reward Models for Reward Reasoning
Chenglong Wang, Yongyu Mu, Hang Zhou, Yifu Huo, Ziming Zhu, Jiali Zeng, Murun Yang, Bei Li, Tong Xiao, Xiaoyang Hao, Chunliang Zhang, Fandong Meng, Jingbo Zhu
https://arxiv.org/abs/2509.02492