2026-02-26 12:16:00
Perplexity AI: Agentic AI in a sandbox with 19 models
Perplexity AI introduces "Perplexity Computer", a new agentic AI platform that orchestrates AI models in a secure cloud sandbox.
https…
Perplexity launches Perplexity Computer, "a general-purpose digital worker" that can route work across 19 AI models, available initially for Max subscribers (Jason Hiner/The Deep View)
https://www.thedeepview.com/articles/perplexity-may-have-built-a…
Perplexity AI: "Musk has been careful to set realistic expectations. Initial vision would resemble "early Nintendo graphics" or "Atari graphics"—pixelated and low-resolution—but would improve over time as the brain adapts to the neural signals." https://www.perp…
Perplexity signs a deal with Microsoft; sources say the $750M, three-year commitment will let Perplexity deploy AI models through Microsoft's Foundry service (Bloomberg)
https://www.bloomberg.com/news/articles/2026-01-29…
Perplexity's retreat from ads signals a strategic shift as it recognizes its product is not for a mass audience and expects growth to come from enterprise sales (Maxwell Zeff/Wired)
https://www.wired.com/story/perplexity-ads-shift-search-google/
Samsung will add Perplexity to Galaxy AI on the upcoming S26 series; users can launch the Perplexity agent by saying "Hey Plex" or with a physical helper button (Cheyenne MacDonald/Engadget)
https://www.engadget.com/ai/samsung-is-add
To Perplexity AI: Short-term, the money may be in implantable visual prostheses for the blind. Longer-term, the money may be in training and support for noninvasive visual prostheses for the blind. Microsoft Windows vs Linux, Neuralink Blindsight vs The vOICe. https://www.perplexity.ai/s…
I mostly use Perplexity or ChatGPT instead of classic search engines. Fast, compact, convenient. But also error-prone. A look at the opportunities, the limits, and the question of how much we really can and should leave to answer machines. #LLM #KIAgenten
The Diffusion Duality, Chapter II: Ψ-Samplers and Efficient Curriculum
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
https://arxiv.org/abs/2602.21185 https://arxiv.org/pdf/2602.21185 https://arxiv.org/html/2602.21185
arXiv:2602.21185v1 Announce Type: new
Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or Masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that Masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video-tutorial on: https://s-sahoo.com/duo-ch2
toXiv_bot_toot
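The abstract's core idea (a predictor step followed by a corrector step that re-noises and re-denoises, enabling self-correction) can be illustrated with a toy sketch. Everything below is invented for illustration, assuming a uniform-state noise process: `toy_denoiser`, the linear schedule, and the `flip_rate` are placeholders, not the paper's actual Ψ-sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 4    # toy vocabulary size
SEQ_LEN = 8  # toy sequence length

def toy_denoiser(x_t, t):
    """Stand-in for a learned denoiser: per-token probabilities over
    the clean vocabulary. Mass shifts toward token 0 as noise t -> 0."""
    probs = np.full((SEQ_LEN, VOCAB), t / VOCAB)
    probs[:, 0] += 1.0 - t  # each row sums to 1
    return probs

def pc_sample(denoiser, steps=16, corrector_steps=1, flip_rate=0.1):
    # start from pure uniform noise, as in uniform-state diffusion
    x = rng.integers(0, VOCAB, size=SEQ_LEN)
    for i in range(steps, 0, -1):
        t = i / steps
        # predictor: ancestral-style step, sampling each token from
        # the denoiser's posterior at the current noise level
        x = np.array([rng.choice(VOCAB, p=p) for p in denoiser(x, t)])
        # corrector: re-inject a little uniform noise, then denoise
        # again -- the self-correction the abstract credits for
        # continued improvement with more sampling steps
        for _ in range(corrector_steps):
            flip = rng.random(SEQ_LEN) < flip_rate * t
            x = np.where(flip, rng.integers(0, VOCAB, size=SEQ_LEN), x)
            x = np.array([rng.choice(VOCAB, p=p) for p in denoiser(x, t)])
    return x

sample = pc_sample(toy_denoiser)
```

With more corrector sweeps per step, tokens that the predictor got wrong early on get further chances to be resampled, which is the qualitative behavior the abstract contrasts with plateauing ancestral samplers.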
The Wikimedia Foundation says Microsoft, Meta, Amazon, Perplexity, and Mistral joined Wikimedia Enterprise to get "tuned" API access; Google is already a member (Emma Roth/The Verge)
https://www.theverge.com/news/862109/wikip
Instead of Google: Perplexity or ChatGPT.
One clear answer instead of ten links. Convenient? Yes. Reliable? Not guaranteed.
The AI told me that ZDF had left X. It sounded plausible; it was wrong.
AI gathers information. But it does not assess journalistic quality.
Why speed is no substitute for verification, in #60Sekunden.
👉 More on my blog at:
From a pragmatic standpoint I get Wikimedia making deals with AI companies: they will scrape anyway; this way you might get some money.
But it still _feels_ off. Telling all volunteers "you are working for Microsoft/Perplexity/etc for free now" _feels_ wrong.
Run your own local chat on your laptop, plus add web search like Perplexity. I do this on my Mac to save a few AI DC BTUs and to learn things from real web pages rather than LLM hallucinations.
• Run your own chat: https://carlosvaz.com/posts/running-ll
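The recipe in the post above (local model plus web search grounding) can be sketched as below. Both the search snippets and `local_llm` are placeholders invented for this sketch, not any specific tool's API; in practice you would swap in a real search client and a call to your local runtime (llama.cpp, Ollama, etc.).

```python
def build_grounded_prompt(question, snippets):
    """Perplexity-style grounding: number the fetched page snippets
    and ask the model to answer only from them, citing [n]."""
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the sources below; cite them as [n].\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

def local_llm(prompt):
    # placeholder: replace with a request to your local model server,
    # e.g. an HTTP call to a llama.cpp or Ollama instance on localhost
    return "(local model output)"

snippets = [
    "Fetched text from a real web page would go here.",
    "A second search-result snippet.",
]
prompt = build_grounded_prompt("What do the pages say?", snippets)
answer = local_llm(prompt)
```

The numbered-source prompt is the piece that makes answers checkable: the model is steered toward the fetched pages, and citations let you verify each claim against its source.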
The NYT sues Perplexity, claiming the AI startup violated its copyrights and failed to stop using its content despite repeated demands over the past 18 months (New York Times)
https://www.nytimes.com/2025/12/05/technology/new-york-times-perplexity-ai-l…
AI models: New York Times sues Perplexity, Meta signs licensing deals
The dispute over media content in AI models is escalating. OpenAI must hand over chat logs, the NYT is suing Perplexity, and Meta is betting on licensing agreements.
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
https://arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
https://arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
https://arxiv.org/abs/2408.07016 https://mastoxiv.page/@arXiv_csLG_bot/112959235461894530
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
https://arxiv.org/abs/2411.00759 https://mastoxiv.page/@arXiv_csLG_bot/113423933393275133
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
https://arxiv.org/abs/2412.06871 https://mastoxiv.page/@arXiv_csLG_bot/113632934357523592
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
https://arxiv.org/abs/2501.08219 https://mastoxiv.page/@arXiv_csLG_bot/113831081884570770
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
https://arxiv.org/abs/2501.10538 https://mastoxiv.page/@arXiv_csLG_bot/113872351652969955
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
https://arxiv.org/abs/2501.16613 https://mastoxiv.page/@arXiv_csLG_bot/113910356206562660
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
https://arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
https://arxiv.org/abs/2502.03652 https://mastoxiv.page/@arXiv_csLG_bot/113961314098841096
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
https://arxiv.org/abs/2502.12108 https://mastoxiv.page/@arXiv_csLG_bot/114023706252106865
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
https://arxiv.org/abs/2502.17028 https://mastoxiv.page/@arXiv_csLG_bot/114063477202397951
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
https://arxiv.org/abs/2503.00229 https://mastoxiv.page/@arXiv_csLG_bot/114103018985567633
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
https://arxiv.org/abs/2503.04398 https://mastoxiv.page/@arXiv_csLG_bot/114120014622063602
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
https://arxiv.org/abs/2503.12016 https://mastoxiv.page/@arXiv_csLG_bot/114182234054681647
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
https://arxiv.org/abs/2503.23434 https://mastoxiv.page/@arXiv_csLG_bot/114263024618476521
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
https://arxiv.org/abs/2504.13961 https://mastoxiv.page/@arXiv_csLG_bot/114380404041503229
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
https://arxiv.org/abs/2505.11602 https://mastoxiv.page/@arXiv_csLG_bot/114538965060456498
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
https://arxiv.org/abs/2505.13289 https://mastoxiv.page/@arXiv_csLG_bot/114539124884913788
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
https://arxiv.org/abs/2505.18877 https://mastoxiv.page/@arXiv_csLG_bot/114578778213033886
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
https://arxiv.org/abs/2505.19193 https://mastoxiv.page/@arXiv_csLG_bot/114578790124778172
toXiv_bot_toot
OpenAI and Perplexity, armed with significant AI capabilities, have targeted the shopping domain, and have run into many problems around data access and semantics.
This raises three thoughts:
First, companies that are bullish about AI "changing everything" are stumbling in perhaps the most traditional, mundane domain: shopping. This doesn't inspire confidence in their technology.
1/5
#ai
AI Overviews: media regulators take aim at Google and Perplexity
The rise of AI answer engines is drawing regulators' attention. Two German state media authorities have opened proceedings against Google and Perplexity.
Wikipedia: contracts with Mistral, Perplexity & Co. for AI training access
For a long time, AI companies accessed Wikipedia content for AI training, driving up server load there. Now more and more of them are using an alternative.
NYT reporter John Carreyrou and five other writers sue xAI, Anthropic, Google, OpenAI, Meta, and Perplexity, accusing them of pirating their books to train AI (Blake Brittain/Reuters)
https://www.reuters.com/legal/government/n
As startups flood the market with AI shopping agents, Amazon is playing defense by blocking agents' access to its site and investing heavily in its own tools (Annie Palmer/CNBC)
https://www.cnbc.com/2025/12/24/amazon-faces-a-dilemma-fight…
People who say the current generation of generative AI is nothing / pure snake oil and hypegrift are wrong. True, ChatGPT is pretty much slop, but instances like Grok Expert and Perplexity Pro consistently produce meaningful, correct results in less time than it would take to manually comb the Internet for the information. Claude Code and Cursor can radically accelerate certain software development, especially in the hands of someone who can already write code and redirect the model when it…
The Chicago Tribune sues Perplexity for copyright infringement, alleging Perplexity's platforms lift Tribune content verbatim and divert traffic from the paper (Robert Channick/Chicago Tribune)
https://www.chicagotribune.com/2025/12/04/chic…
Perplexity updates its iPad app to improve multitasking and focus on research tools, part of a push to add business customers, after launching Comet on Android (Natalie Lung/Bloomberg)
https://www.bloomberg.com/news/articles/20
Perplexity "hallucinates like a world champion", Le Chat is "obediently European", and CapCut makes 60-second reels that remix more than they tell a story. I use AI tools intensively, but not as an oracle. My AI tools of 2025: a year in review with a wink. 🔗 https://stefanpfeiffer.blog/2026/01/07
Perplexity says it has no plans to further pursue advertising, which it introduced in 2024, phasing out ads late last year over fears it would erode user trust (Cristina Criddle/Financial Times)
https://www.ft.com/content/6eec07a5-34a8-4f78-a9ed-93ab4263d43c
heise | Comet for Android: how Perplexity's AI browser aims to replace Google Search
Perplexity's Comet is now also available for Android: the AI browser clicks its way "agentically" through the web, answering questions and emails. That calls for trust.