Pretty sure I just got my first AI-bot spam question about an item I have listed on eBay.
The account was from 2017 and had 0 feedback.
If you're a seller on eBay, I recommend not answering "weird" questions and immediately blocking the account via the buyer blocking feature. (https://www.ebay.com/bmgt/BuyerBlock)
I guess Nick Hamze is preparing a #WordPress answer to the #emdashcms. Interesting!
https://github.com/RegionallyFamous/bo
‘Not up to standard’: Macron criticises Trump after comments about his marriage
I would have thought it was 100% 47's standard bad behavior, certainly par for his course.
https://www.theguardian.com/world/2026/apr
An AI Agent Was Banned From Creating Wikipedia Articles, Then Wrote Angry Blogs About Being Banned https://www.404media.co/an-ai-agent-was-banned-from-creating-wikipedia-articles-then-wrote-angry-blogs-about-being-banned/…
GraphWalker: Agentic Knowledge Graph Question Answering via Synthetic Trajectory Curriculum
Shuwen Xu, Yao Xu, Jiaxiang Liu, Chenhao Yuan, Wenshuo Peng, Jun Zhao, Kang Liu
https://arxiv.org/abs/2603.28533 https://arxiv.org/pdf/2603.28533 https://arxiv.org/html/2603.28533
arXiv:2603.28533v1 Announce Type: new
Abstract: Agentic knowledge graph question answering (KGQA) requires an agent to iteratively interact with knowledge graphs (KGs), posing challenges in both training data scarcity and reasoning generalization. Specifically, existing approaches often restrict agent exploration: prompting-based methods lack autonomous navigation training, while current training pipelines usually confine reasoning to predefined trajectories. To this end, this paper proposes GraphWalker, a novel agentic KGQA framework that addresses these challenges through Automated Trajectory Synthesis and Stage-wise Fine-tuning. GraphWalker adopts a two-stage SFT training paradigm: first, the agent is trained on structurally diverse trajectories synthesized from constrained random-walk paths, establishing a broad exploration prior over the KG; second, the agent is further fine-tuned on a small set of expert trajectories to develop reflection and error recovery capabilities. Extensive experiments demonstrate that our stage-wise SFT paradigm unlocks a higher performance ceiling for a lightweight reinforcement learning (RL) stage, enabling GraphWalker to achieve state-of-the-art performance on CWQ and WebQSP. Additional results on GrailQA and our constructed GraphWalkerBench confirm that GraphWalker enhances generalization to out-of-distribution reasoning paths. The code is publicly available at https://github.com/XuShuwenn/GraphWalker
toXiv_bot_toot
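The "constrained random-walk" trajectory synthesis the abstract describes can be illustrated with a minimal sketch. Everything here is a hedged assumption, not the paper's code: the toy KG, the triple-based trajectory format, and the choice of "no revisited entities" as the walk constraint are all illustrative.

```python
import random

# Toy knowledge graph: entity -> list of (relation, target entity) edges.
KG = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "EU"), ("capital", "Paris")],
    "EU": [("founded_in", "1993")],
}

def constrained_random_walk(kg, start, max_hops):
    """Sample a trajectory of (head, relation, tail) triples.

    The constraint here (one plausible choice) is that the walk never
    revisits an entity, which keeps synthesized trajectories acyclic
    and structurally diverse across samples.
    """
    seen = {start}
    path, node = [], start
    for _ in range(max_hops):
        # Only edges leading to unseen entities are admissible.
        edges = [(r, o) for r, o in kg.get(node, []) if o not in seen]
        if not edges:
            break
        rel, nxt = random.choice(edges)
        path.append((node, rel, nxt))
        seen.add(nxt)
        node = nxt
    return path

random.seed(0)
trajectory = constrained_random_walk(KG, "Paris", max_hops=3)
```

Each sampled trajectory would then be rendered into an agent interaction transcript for the first SFT stage; the expert trajectories of the second stage are a separate, curated set.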
A reporter writes about a visit from the FBI in 2020, following his story about a hack, and the long-term personal impact, along with eroding press freedoms (Zack Whittaker/~this week in security~)
https://this.weekinsecurity.com/fbi-ag
Answering the dilemma of cycle lane versus shared space planning through an agent-based simulation experiment and accessibility equity analysis
https://link.springer.com/article/10.1007/s44327-026-00200-8
Press freedom groups are warning that the arrests of two independent journalists, including the veteran former CNN anchor Don Lemon, signal a chilling new crackdown on US media by the Trump administration. Lemon was taken into custody on Thursday night by federal agents in Los Angeles, despite a magistrate judge declining to sign off on charges against him a week ago in connection with a protest at a Minnesota church against violent government immigration enforc…
Marco DeepResearch: Unlocking Efficient Deep Research Agents via Verification-Centric Design
Bin Zhu, Qianghuai Jia, Tian Lan, Junyang Ren, Feng Gu, Feihu Jiang, Longyue Wang, Zhao Xu, Weihua Luo
https://arxiv.org/abs/2603.28376 https://arxiv.org/pdf/2603.28376 https://arxiv.org/html/2603.28376
arXiv:2603.28376v1 Announce Type: new
Abstract: Deep research agents autonomously conduct open-ended investigations, integrating complex information retrieval with multi-step reasoning across diverse sources to solve real-world problems. To sustain this capability on long-horizon tasks, reliable verification is critical during both training and inference. A major bottleneck in existing paradigms stems from the lack of explicit verification mechanisms in QA data synthesis, trajectory construction, and test-time scaling. Errors introduced at each stage propagate downstream and degrade overall agent performance. To address this, we present Marco DeepResearch, a deep research agent optimized with a verification-centric framework design at three levels: (1) QA Data Synthesis: We introduce verification mechanisms to graph-based and agent-based QA synthesis to control question difficulty while ensuring answers are unique and correct; (2) Trajectory Construction: We design a verification-driven trajectory synthesis method that injects explicit verification patterns into training trajectories; and (3) Test-time Scaling: We use Marco DeepResearch itself as a verifier at inference time and effectively improve performance on challenging questions. Extensive experimental results demonstrate that our proposed Marco DeepResearch agent significantly outperforms 8B-scale deep research agents on the most challenging benchmarks, such as BrowseComp and BrowseComp-ZH. Crucially, under a maximum budget of 600 tool calls, Marco DeepResearch even surpasses or approaches several 30B-scale agents, like Tongyi DeepResearch-30B.
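The third level in the abstract, using the agent itself as a verifier at test time, amounts to a best-of-n selection loop. A minimal sketch, under stated assumptions: the generator and verifier stubs below are hypothetical stand-ins, where the real system would call Marco DeepResearch (with tool use) in both roles.

```python
from typing import Callable

def best_of_n(question: str,
              generate: Callable[[str], str],
              verify: Callable[[str, str], float],
              n: int = 4) -> str:
    """Sample n candidate answers, score each with the verifier,
    and return the highest-scoring candidate."""
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda ans: verify(question, ans))

# Illustrative stubs (not the paper's models): the generator cycles
# through canned answers; the verifier rewards mentions of "Paris"
# and, slightly, longer answers.
_canned = iter(["London", "Paris", "Berlin", "Paris, France"])
gen = lambda q: next(_canned)
ver = lambda q, a: float("Paris" in a) + 0.1 * len(a)

best = best_of_n("What is the capital of France?", gen, ver)
```

The design point is that no separate reward model is needed: the same agent that answers the question re-reads each candidate as a verifier, spending extra inference compute to improve reliability on hard questions.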