2026-02-23 07:37:00
Samsung will add Perplexity to Galaxy AI on the upcoming S26 series; users can launch the Perplexity agent by saying "Hey Plex" or with a physical helper button (Cheyenne MacDonald/Engadget)
https://www.engadget.com/ai/samsung-is-add
After OpenAI and Microsoft: Perplexity unveils a health AI
With "Perplexity Health", Perplexity AI introduces a health service intended to consolidate data from various sources.
https://w…
Why do headlines phrase "can be given access to" as "has access" when they *know* there's a lot of anxiety about both AI and third-party access to personal data?
> Perplexity Can Now Access Your Apple Health Data to Answer Medical Questions
https://www.macrumors.com/…
Instead of Google: Perplexity or ChatGPT.
One clear answer instead of ten links. Convenient? Yes. Reliable? Not guaranteed.
The AI told me that ZDF had left X. Sounded plausible – it was wrong.
AI collects information. But it does not assess journalistic quality.
Why speed is no substitute for verification – in #60Sekunden.
👉 More on my blog at:
Perplexity's retreat from ads signals a strategic shift as it recognizes its product is not for a mass audience and expects growth to come from enterprise sales (Maxwell Zeff/Wired)
https://www.wired.com/story/perplexity-ads-shift-search-google/
Perplexity builds a "Personal Computer" on a Mac mini basis
Perplexity Computer is the name of Perplexity's personal assistant, which is now coming to the Mac in OpenClaw style. There is currently a waiting list.
https://www.
Perplexity releases its Comet browser app for iOS and iPadOS with a built-in AI assistant, four months after launching on Android (Laurent Giret/Thurrott)
https://www.thurrott.com/a-i/333936/perplexity-launches-its-comet-browser-on-ios-and-ipados…
I usually use Perplexity or ChatGPT instead of classic search engines. Fast, compact, convenient. But also error-prone. A look at the opportunities, the limits, and the question of how much we really can and should leave to answer machines. #LLM #KIAgenten
AI update compact: AI Act, Groundsource, NemoClaw, Perplexity Computer
The "KI-Update" delivers a summary of the most important AI developments three times a week.
https://www.
As soon as users log into Perplexity’s home page, trackers are downloaded onto their devices, giving Meta and Google full access to the conversations between them and Perplexity’s AI Machine search engine.
https://www.bloomberg.com/news/articles/20
Perplexity announces Personal Computer, an OpenClaw-like AI agent that can run on a Mac, and an enterprise version of Perplexity Computer (Ina Fried/Axios)
https://www.axios.com/2026/03/11/perplexity-personal-computer-mac
Hm. Google Gemini has humans reading sampled chats for "quality assurance." And you can't opt out. How nice...
I wonder, how ethical would it be to flood Gemini with chats that say mean and nasty things about the human reviewers, for them to see? Or to encourage them to find a better job?
Not that I recommend being cruel to anyone, ever. But... guerrilla tactics can be effective.
Perplexity asks a US judge to force Dow Jones and the New York Post to hand over the queries they made to "fish" for a basis to sue for copyright infringement (Charlotte Tobitt/Press Gazette)
https://pressgazette.co.uk/media_law/p
Sources: a Snap-Perplexity AI search deal calling for Perplexity to pay Snap $400M has fallen apart; Snap is set to announce significant layoffs on Wednesday (Alex Heath/Sources)
https://sources.news/p/snap-crucible-moment
Run your own local chat on your laptop, and add web search like Perplexity. I do this on my Mac to save a few AI DC BTUs and to learn things from real web pages rather than from LLM hallucinations.
• Run your own chat: https://carlosvaz.com/posts/running-ll
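The "local chat plus web search" pattern above can be sketched in a few lines. This is an illustrative sketch, not the linked post's setup: it assumes the model sits behind a local OpenAI-compatible endpoint (e.g. Ollama or llama.cpp on localhost), and the `snippets` input stands in for whatever search backend you wire up. Only the prompt-building step is shown concretely.

```python
# Minimal sketch: stuff web-search snippets into the prompt so the local
# model answers from real pages instead of from memory alone.
# Assumptions (not from the original post): a local OpenAI-compatible
# endpoint, and snippets supplied by some search backend.

def build_prompt(question: str, snippets: list[dict]) -> str:
    """Format search snippets as numbered sources and append the question."""
    sources = "\n".join(
        f"[{i}] {s['title']} ({s['url']})\n{s['text']}"
        for i, s in enumerate(snippets, start=1)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [n].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "When was the Comet browser released on iOS?",
    [{"title": "Thurrott", "url": "https://thurrott.com/...",
      "text": "Perplexity released Comet for iOS and iPadOS..."}],
)

# The prompt would then be sent to the local model, e.g. (hypothetical
# model name; Ollama's default generate endpoint):
# requests.post("http://localhost:11434/api/generate",
#               json={"model": "llama3", "prompt": prompt, "stream": False})
```

Grounding the model in fetched pages is what makes the answers checkable: every claim can be traced back to a numbered source.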
Perplexity AI: agentic AI in a sandbox with 19 models
With "Perplexity Computer", Perplexity AI introduces a new agentic AI platform that orchestrates AI models in a secure cloud sandbox.
https…
Perplexity's answer to *Claw agents. Mac only. They position it as something you run on a dedicated Mac so it's always available, but it's all driven by the desktop. Probably because no one in their right mind should run it on their daily driver.
https://www.youtube.com/watch?v=f9mjOnznkN
A US appeals court puts on hold an earlier ruling that had blocked Perplexity from using its agentic shopping tool to shop on Amazon's marketplace (Blake Brittain/Reuters)
https://www.reuters.com/legal/litigation/court-te…
Perplexity AI: "Musk has been careful to set realistic expectations. Initial vision would resemble "early Nintendo graphics" or "Atari graphics"—pixelated and low-resolution—but would improve over time as the brain adapts to the neural signals." https://www.perp…
Perplexity says it has no plans to further pursue advertising, which it introduced in 2024, phasing out ads late last year over fears it would erode user trust (Cristina Criddle/Financial Times)
https://www.ft.com/content/6eec07a5-34a8-4f78-a9ed-93ab4263d43c
AI overviews: media regulators take aim at Google and Perplexity
The rise of AI answer engines has regulators stepping in. Two German state media authorities have opened proceedings against Google and Perplexity.
Perplexity signs a multiyear deal with CoreWeave to use dedicated clusters powered by Nvidia Grace Blackwell chips for AI inference; CRWV jumps 5% pre-market (Ina Fried/Axios)
https://www.axios.com/2026/03/04/perplexity-coreweave-data-center-nvidia
Perplexity launches Perplexity Computer, "a general-purpose digital worker" that can route work across 19 AI models, available initially for Max subscribers (Jason Hiner/The Deep View)
https://www.thedeepview.com/articles/perplexity-may-have-built-a…
Perplexity.ai:
"Where at Hannover Hauptbahnhof can you pick up a rental car that you can then return in either Hamburg or Lübeck?"
Answer: Sixt (Raschplatz 1) and Europcar (Ernst-August-Platz 1, 30159 Hannover) both offer "one-way rental".
People who say the current generation of generative AI is nothing / pure snake oil and hypegrift are wrong. True, ChatGPT is pretty much slop, but instances like Grok Expert and Perplexity Pro consistently produce meaningful, correct results in less time than it would take to manually comb the Internet for the information. Claude Code and Cursor can radically accelerate certain software development, especially in the hands of someone who can already write code and redirect the model when it…
The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum
Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo
https://arxiv.org/abs/2602.21185 https://arxiv.org/pdf/2602.21185 https://arxiv.org/html/2602.21185
arXiv:2602.21185v1 Announce Type: new
Abstract: Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct, making them preferred over autoregressive or Masked diffusion models in these settings. However, their sampling quality plateaus with ancestral samplers as the number of steps increases. We introduce a family of Predictor-Corrector (PC) samplers for discrete diffusion that generalize prior methods and apply to arbitrary noise processes. When paired with uniform-state diffusion, our samplers outperform ancestral sampling on both language and image modeling, achieving lower generative perplexity at matched unigram entropy on OpenWebText and better FID/IS scores on CIFAR10. Crucially, unlike conventional samplers, our PC methods continue to improve with more sampling steps. Taken together, these findings call into question the assumption that Masked diffusion is the inevitable future of diffusion-based language modeling. Beyond sampling, we develop a memory-efficient curriculum for the Gaussian relaxation training phase, reducing training time by 25% and memory by 33% compared to Duo while maintaining comparable perplexity on OpenWebText and LM1B and strong downstream performance. We release code, checkpoints, and a video-tutorial on: https://s-sahoo.com/duo-ch2
toXiv_bot_toot
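The abstract's predictor-corrector idea can be written down generically. The update below is an illustrative sketch in generic notation, not the paper's exact $\Psi$-samplers: a predictor step draws from the learned reverse kernel, and a corrector step re-noises and denoises so the model can revisit (self-correct) tokens it has already committed.

```latex
% One PC step from time $t$ to $s < t$ (illustrative, generic notation):
% predictor: sample from the learned reverse kernel
x_s \sim p_\theta(x_s \mid x_t)
% corrector: re-noise at level $s$ with the forward kernel $q$, then denoise again
\tilde{x}_s \sim q(\tilde{x}_s \mid x_s), \qquad
x_s' \sim p_\theta(x_s' \mid \tilde{x}_s)
```

The corrector is what lets sampling keep improving with more steps, since errors made by the predictor are not frozen in.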
In a preliminary ruling, a US judge orders Perplexity to stop using Comet browser to make purchases on behalf of users from password-protected Amazon accounts (Bloomberg)
https://www.bloomberg.com/news/articles/2026-03-10/amaz…
Le Monde says it has seen a "significant amount of new revenue", including via subscriptions, after licensing content to OpenAI, Perplexity, and Meta (Alice Brooker/Press Gazette)
https://pressgazette.co.uk/publishers/
On the #BSR website, the collection dates for the various rubbish bins are so hard to find that I have been failing at it for years.
I just asked Perplexity where the dates are listed; that was faster than hunting around the site or using its own search, which only finds things if you type exactly the right wording.
I don't think a usability team has ever looked at that site.
Perplexity signs a deal with Microsoft; sources say the $750M, three-year commitment will let Perplexity deploy AI models through Microsoft's Foundry service (Bloomberg)
https://www.bloomberg.com/news/articles/2026-01-29…
Sources: Perplexity's estimated ARR rose to over $450M in March, jumping 50% in a month after the launch of a new agent tool and a shift to usage-based pricing (Cristina Criddle/Financial Times)
https://www.ft.com/content/e9c28d31-a962-4684-8b58-c9e6bc68401f
Samsung's consumer device chief TM Roh says it was "open to strategic co-operation" with more AI groups, having recently added Perplexity to its mobile OS (Michael Acton/Financial Times)
https://www.ft.com/content/3752d058-d3ee-41a4-b702-d49ae7f61b5c
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/6]:
- Towards Attributions of Input Variables in a Coalition
Xinhao Zheng, Huiqi Deng, Quanshi Zhang
https://arxiv.org/abs/2309.13411
- Knee or ROC
Veronica Wendt, Jacob Steiner, Byunggu Yu, Caleb Kelly, Justin Kim
https://arxiv.org/abs/2401.07390
- Rethinking Disentanglement under Dependent Factors of Variation
Antonio Almudévar, Alfonso Ortega
https://arxiv.org/abs/2408.07016 https://mastoxiv.page/@arXiv_csLG_bot/112959235461894530
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
Etrit Haxholli, Yeti Z. Gurbuz, Ogul Can, Eli Waxman
https://arxiv.org/abs/2411.00759 https://mastoxiv.page/@arXiv_csLG_bot/113423933393275133
- Predicting Subway Passenger Flows under Incident Situation with Causality
Xiannan Huang, Shuhan Qiu, Quan Yuan, Chao Yang
https://arxiv.org/abs/2412.06871 https://mastoxiv.page/@arXiv_csLG_bot/113632934357523592
- Characterizing LLM Inference Energy-Performance Tradeoffs across Workloads and GPU Scaling
Paul Joe Maliakel, Shashikant Ilager, Ivona Brandic
https://arxiv.org/abs/2501.08219 https://mastoxiv.page/@arXiv_csLG_bot/113831081884570770
- Universality of Benign Overfitting in Binary Linear Classification
Ichiro Hashimoto, Stanislav Volgushev, Piotr Zwiernik
https://arxiv.org/abs/2501.10538 https://mastoxiv.page/@arXiv_csLG_bot/113872351652969955
- Safe Reinforcement Learning for Real-World Engine Control
Julian Bedei, Lucas Koch, Kevin Badalian, Alexander Winkler, Patrick Schaber, Jakob Andert
https://arxiv.org/abs/2501.16613 https://mastoxiv.page/@arXiv_csLG_bot/113910356206562660
- A Statistical Learning Perspective on Semi-dual Adversarial Neural Optimal Transport Solvers
Roman Tarasov, Petr Mokrov, Milena Gazdieva, Evgeny Burnaev, Alexander Korotin
https://arxiv.org/abs/2502.01310
- Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi
https://arxiv.org/abs/2502.03652 https://mastoxiv.page/@arXiv_csLG_bot/113961314098841096
- Using the Path of Least Resistance to Explain Deep Networks
Sina Salek, Joseph Enguehard
https://arxiv.org/abs/2502.12108 https://mastoxiv.page/@arXiv_csLG_bot/114023706252106865
- Distributional Vision-Language Alignment by Cauchy-Schwarz Divergence
Wenzhe Yin, Zehao Xiao, Pan Zhou, Shujian Yu, Jiayi Shen, Jan-Jakob Sonke, Efstratios Gavves
https://arxiv.org/abs/2502.17028 https://mastoxiv.page/@arXiv_csLG_bot/114063477202397951
- Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster
Sharan Vaswani, Reza Babanezhad
https://arxiv.org/abs/2503.00229 https://mastoxiv.page/@arXiv_csLG_bot/114103018985567633
- Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling
Yan Li, Zhenyu Zhang, Zhengang Wang, Pengfei Chen, Pengfei Zheng
https://arxiv.org/abs/2503.04398 https://mastoxiv.page/@arXiv_csLG_bot/114120014622063602
- A Survey on Federated Fine-tuning of Large Language Models
Wu, Tian, Li, Sun, Tam, Zhou, Liao, Xiong, Guo, Li, Xu
https://arxiv.org/abs/2503.12016 https://mastoxiv.page/@arXiv_csLG_bot/114182234054681647
- Towards Trustworthy GUI Agents: A Survey
Yucheng Shi, Wenhao Yu, Jingyuan Huang, Wenlin Yao, Wenhu Chen, Ninghao Liu
https://arxiv.org/abs/2503.23434 https://mastoxiv.page/@arXiv_csLG_bot/114263024618476521
- CONTINA: Confidence Interval for Traffic Demand Prediction with Coverage Guarantee
Chao Yang, Xiannan Huang, Shuhan Qiu, Yan Cheng
https://arxiv.org/abs/2504.13961 https://mastoxiv.page/@arXiv_csLG_bot/114380404041503229
- Regularity and Stability Properties of Selective SSMs with Discontinuous Gating
Nikola Zubić, Davide Scaramuzza
https://arxiv.org/abs/2505.11602 https://mastoxiv.page/@arXiv_csLG_bot/114538965060456498
- RECON: Robust symmetry discovery via Explicit Canonical Orientation Normalization
Alonso Urbano, David W. Romero, Max Zimmer, Sebastian Pokutta
https://arxiv.org/abs/2505.13289 https://mastoxiv.page/@arXiv_csLG_bot/114539124884913788
- RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models
Yilang Zhang, Bingcong Li, Georgios B. Giannakis
https://arxiv.org/abs/2505.18877 https://mastoxiv.page/@arXiv_csLG_bot/114578778213033886
- SuperMAN: Interpretable and Expressive Networks over Temporally Sparse Heterogeneous Data
Bechler-Speicher, Zerio, Huri, Vestergaard, Gilad-Bachrach, Jess, Bhatt, Sazonovs
https://arxiv.org/abs/2505.19193 https://mastoxiv.page/@arXiv_csLG_bot/114578790124778172