Meter, which sells and maintains packages of custom data center networking equipment, raised $170M led by General Catalyst at a $1B valuation (Michael J. de la Merced/New York Times)
https://www.nytimes.com/2025/06/12/business/dealbook/meter-n…
Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence
Oliver Guest, Kevin Wei
https://arxiv.org/abs/2506.03497
GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay
Yunan Zhang, Shuoran Jiang, Mengchen Zhao, Yuefeng Li, Yang Fan, Xiangping Wu, Qingcai Chen
https://arxiv.org/abs/2508.04676
A Survey: Learning Embodied Intelligence from Physical Simulators and World Models
Xiaoxiao Long, Qingrui Zhao, Kaiwen Zhang, Zihao Zhang, Dingrui Wang, Yumeng Liu, Zhengjie Shu, Yi Lu, Shouzheng Wang, Xinzhe Wei, Wei Li, Wei Yin, Yao Yao, Jia Pan, Qiu Shen, Ruigang Yang, Xun Cao, Qionghai Dai
https://arxiv.org/abs/2507.00917
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[5/6]:
- Towards General Continuous Memory for Vision-Language Models
Wenyi Wu, Zixuan Song, Kun Zhou, Yifei Shao, Zhiting Hu, Biwei Huang
Adaptive Termination for Multi-round Parallel Reasoning: A Universal Semantic Entropy-Guided Framework
Zenan Xu, Zexuan Qiu, Guanhua Huang, Kun Li, Siheng Li, Chenchen Zhang, Kejiao Li, Qi Yi, Yuhao Jiang, Bo Zhou, Fengzong Lian, Zhanhui Kang
https://arxiv.org/abs/2507.06829
I coined a new phrase!
Basilisk puppet (n). A person who relentlessly shills for artificial intelligence. Makes strong claims that general AI is happening any day now or that existing AIs can make everyone more efficient and happier. See also Roko's Basilisk.
An Empirical Study on Embodied Artificial Intelligence Robot (EAIR) Software Bugs
Zeqin Liao, Zibin Zheng, Peifan Reng, Henglong Liang, Zixu Gao, Zhixiang Chen, Wei Li, Yuhong Nan
https://arxiv.org/abs/2507.18267
#AI is a marketing term; before we discuss "AI is fake" vs "AI is real" we need to unfold what we *mean* by AI.
For example, "artificial general intelligence" is fake and can't hurt you. Layoffs excused by "AI efficiency" are real and can hurt you.
LinkedIn discourse is fake - but it CAN hurt you.
Reconstructing Biological Pathways by Applying Selective Incremental Learning to (Very) Small Language Models
Pranta Saha, Joyce Reimer, Brook Byrns, Connor Burbridge, Neeraj Dhar, Jeffrey Chen, Steven Rayan, Gordon Broderick
https://arxiv.org/abs/2507.04432
This just occurred to me (too much sun and gin lemonade could be a factor): English is a funny language, and when they say Artificial they mean Automated, and when they say Intelligence they don't mean smarts, they mean covertly gathering intel from prospective enemies!
Hence #ArtificialIntelligence, often promoted to General.
The purpose of any system is what it does, not what it consistently fails to do.
Adaptive Framework for Ambient Intelligence in Rehabilitation Assistance
Gábor Baranyi, Zsolt Csibi, Kristian Fenech, Áron Fóthi, Zsófia Gaál, Joul Skaf, András Lőrincz
https://arxiv.org/abs/2507.08624
AGI Enabled Solutions For IoX Layers Bottlenecks In Cyber-Physical-Social-Thinking Space
Amar Khelloufi, Huansheng Ning, Sahraoui Dhelim, Jianguo Ding
https://arxiv.org/abs/2506.22487
Bridging Brains and Machines: A Unified Frontier in Neuroscience, Artificial Intelligence, and Neuromorphic Systems
Sohan Shankar, Yi Pan, Hanqi Jiang, Zhengliang Liu, Mohammad R. Darbandi, Agustin Lorenzo, Junhao Chen, Md Mehedi Hasan, Arif Hassan Zidan, Eliana Gelman, Joshua A. Konfrst, Jillian Y. Russell, Katelyn Fernandes, Tianze Yang, Yiwei Li, Huaqin Zhao, Afrar Jahin, Triparna Ganguly, Shair Dinesha, Yifan Zhou, Zihao Wu, Xinliang Li, Lokesh Adusumilli, Aziza Hussein, Sagar Nook…
Q&A with Hugging Face Chief Ethics Scientist Margaret Mitchell on aligning AI development with human needs, the "illusion of consensus" around AGI, and more (Melissa Heikkilä/Financial Times)
https://www.ft.com/content/7089bff2-25fc-4a25-98bf-8828ab24…
Evaluating Gemini in an arena for learning
LearnLM Team, Abhinit Modi, Aditya Srikanth Veerubhotla, Aliya Rysbek, Andrea Huber, Ankit Anand, Avishkar Bhoopchand, Brett Wiltshire, Daniel Gillick, Daniel Kasenberg, Eleni Sgouritsa, Gal Elidan, Hengrui Liu, Holger Winnemoeller, Irina Jurenka, James Cohan, Jennifer She, Julia Wilkowski, Kaiz Alarakyia, Kevin R. McKee, Komal Singh, Lisa Wang, Markus Kunesch, Miruna Pîslar, Niv Efron, Parsa Mahmoudieh, Pierre-Alexandre Kamienny, Sara Wiltb…
Every time I see AI news like this, I think: here we learn about these miseries, share them, comment on them... but what about outside these circles? For example, are people in general even aware of what it means for their privacy when an AI records and "transcribes" a confidential conversation with their doctor?
Artificial Intelligence for Quantum Matter: Finding a Needle in a Haystack
Khachatur Nazaryan, Filippo Gaggioli, Yi Teng, Liang Fu
https://arxiv.org/abs/2507.13322
A New Perspective On AI Safety Through Control Theory Methodologies
Lars Ullrich, Walter Zimmer, Ross Greer, Knut Graichen, Alois C. Knoll, Mohan Trivedi
https://arxiv.org/abs/2506.23703
Generative Representational Learning of Foundation Models for Recommendation
Zheli Zhou, Chenxu Zhu, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
https://arxiv.org/abs/2506.11999
A look at the lack of consensus in the tech industry on what AGI is, whether LLMs are the best path to it, and what AGI might look like if or when it arrives (Melissa Heikkilä/Financial Times)
MMReason: An Open-Ended Multi-Modal Multi-Step Reasoning Benchmark for MLLMs Toward AGI
Huanjin Yao, Jiaxing Huang, Yawen Qiu, Michael K. Chen, Wenzheng Liu, Wei Zhang, Wenjie Zeng, Xikun Zhang, Jingyi Zhang, Yuxin Song, Wenhao Wu, Dacheng Tao
https://arxiv.org/abs/2506.23563
AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak and understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI
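The back-of-the-envelope arithmetic above can be checked in a few lines. The child-exposure figures are the post's own (deliberately generous) estimates; the LLM corpus size is an illustrative assumption on my part, consistent with the post's "multiple billions of tokens":

```python
# Upper-bound estimate of a child's language exposure by age 4,
# using the post's deliberately generous figures.
words_per_minute = 100   # very high estimate of speech heard
minutes_per_hour = 60
hours_per_day = 12       # generous waking-hours exposure
days_per_year = 365
years = 4

child_words = words_per_minute * minutes_per_hour * hours_per_day * days_per_year * years
print(f"{child_words:,}")  # 105,120,000 -- matches the post's figure

# Illustrative LLM pretraining budget (assumption: 10 billion tokens,
# at the low end of "multiple billions"; modern corpora are far larger).
llm_tokens = 10_000_000_000
print(f"ratio: {llm_tokens / child_words:.0f}x")
```

Even with this low-end assumption for the model and a high-end estimate for the child, the gap is roughly two orders of magnitude, and it only widens if the child's real exposure is 10–100x smaller, as the post argues.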
Replaced article(s) found for econ.GN. https://arxiv.org/list/econ.GN/new
[1/1]:
- Measuring artificial intelligence: a systematic assessment and implications for governance
Kerstin Hötte, Taheya Tarannum, Vilhelm Verendel, Lauren Bennett
Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability
Shova Kuikel, Aritran Piplai, Palvi Aggarwal
https://arxiv.org/abs/2506.13746
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[1/7]:
- A philosophical and ontological perspective on Artificial General Intelligence and the Metaverse