Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csDC_bot@mastoxiv.page
2025-08-11 07:34:29

Snowpark: Performant, Secure, User-Friendly Data Engineering and AI/ML Next To Your Data
Brandon Baker, Elliott Brossard, Chenwei Xie, Zihao Ye, Deen Liu, Yijun Xie, Arthur Zwiegincew, Nitya Kumar Sharma, Gaurav Jain, Eugene Retunsky, Mike Halcrow, Derek Denny-Brown, Istvan Cseri, Tyler Akidau, Yuxiong He
arxiv.org/abs/2508.05904…

@arXiv_csCR_bot@mastoxiv.page
2025-08-11 07:54:09

DINA: A Dual Defense Framework Against Internal Noise and External Attacks in Natural Language Processing
Ko-Wei Chuang, Hen-Hsen Huang, Tsai-Yen Li
arxiv.org/abs/2508.05671

@Techmeme@techhub.social
2025-08-02 00:36:44

Cerebras announces the $50/month Code Pro and the $200/month Code Max plans, offering users access to Qwen3-Coder at speeds of up to 2,000 tokens per second (Daniel Kim/Cerebras)
cerebras.ai/blog/introducing-c

Cloudflare will now block AI crawlers by default
The internet architecture provider will also let some publishers make known AI scrapers pay to crawl their sites
theverge.com/news/695501/cloud

@aral@mastodon.ar.al
2025-06-26 14:45:00

Hey @…, how’s this for a CC signal for AI, you clowns?
🖕
creativecommons.org/2025/06/25

@ErikJonker@mastodon.social
2025-06-27 12:59:19

Google is being quite aggressive with Gemini CLI, offering 1,000 free Gemini 2.5 Pro requests per day for individual users. Essentially trying to push Anthropic and OpenAI away.
blog.google/technology/develop

@UP8@mastodon.social
2025-07-27 07:43:07

🤑 Introducing pay per crawl: enabling content owners to charge AI crawlers for access
#ai

@me@mastodon.peterjanes.ca
2025-08-05 20:18:08

Canada's "AI Minister" won't be far behind. (Or maybe he's already ahead... it _would_ explain a lot....) bsky.app/profile/did:plc:35jwg

@gedankenstuecke@scholar.social
2025-06-26 11:59:41

The writing that Creative Commons is also "AI"-pilled has been on the wall for a while, and here we are, wasting time and money on some useless signalling to "AI" scrapers that already don't care for any existing limitations…
creativecommons.org/2025/06/25

@Techmeme@techhub.social
2025-07-01 17:10:49

Meta expands voice calls on WhatsApp to large businesses and explores AI-powered product recommendations for merchants' sites (Ivan Mehta/TechCrunch)
techcrunch.com/2025/07/01/meta

@frankel@mastodon.top
2025-05-25 08:24:00

#Devstral, introducing the best #OpenSource model for #codingagents.

@gadgetboy@gadgetboy.social
2025-07-02 11:49:35

👀
theverge.com/news/695501/cloud

@jtk@infosec.exchange
2025-07-15 11:38:07

syslog-ng statement on #AI shortly after rsyslog AI announcement: fosstodon.org/@PCzanik/1148562

@michabbb@social.vivaldi.net
2025-08-01 19:58:12

🏢 Code Max: 5,000 messages/day for full-time development and multi-agent systems
⚡ Instant code generation with 131k-token context window and no weekly limits
cerebras.ai/blog/introducing-c

@dnddeutsch@pnpde.social
2025-06-26 10:24:22

Kontext: creativecommons.org/2025/06/25

@arXiv_csCY_bot@mastoxiv.page
2025-05-30 09:52:10

This arxiv.org/abs/2502.16644 has been replaced.
link: scholar.google.com/scholar?q=a

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. 
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@arXiv_csSE_bot@mastoxiv.page
2025-06-30 08:07:30

Using Generative AI in Software Design Education: An Experience Report
Victoria Jackson, Susannah Liu, Andre van der Hoek
arxiv.org/abs/2506.21703

@timbray@cosocial.ca
2025-07-29 16:53:50

1/2: Eeek!
blog.google/products/chrome/st
Just me, or is this a horrifying prospect? The SEO wars and the Amazon homepage have taught us that businesses will do anything - ANYTHING, ethics be damned, to game their online reputations. This …

@aral@mastodon.ar.al
2025-06-25 20:25:56

Creative Commons jumps the shark.
*smdh*
#AI #CreativeCommons mastodon.social/@creativeco…

@arXiv_csMA_bot@mastoxiv.page
2025-06-04 07:22:46

MAEBE: Multi-Agent Emergent Behavior Framework
Sinem Erisken (Independent Researcher), Timothy Gothard (Independent Researcher), Martin Leitgab (Independent Researcher), Ram Potham (Independent Researcher)
arxiv.org/abs/2506.03053

@arXiv_csAR_bot@mastoxiv.page
2025-07-29 07:47:41

Demystifying the 7-D Convolution Loop Nest for Data and Instruction Streaming in Reconfigurable AI Accelerators
Md Rownak Hossain Chowdhury, Mostafizur Rahman
arxiv.org/abs/2507.20420

@penguin42@mastodon.org.uk
2025-07-16 23:18:29

This looks like great fun; big big construction machines driven by AI - that's what we want our AI to be doing, driving huge machines with shovels on.
That must be great fun to debug.
bedrockrobotics.com/news/intro

@ErikJonker@mastodon.social
2025-07-16 18:18:34

Nice new models from Mistral
mistral.ai/news/voxtral?utm_so

@arXiv_statML_bot@mastoxiv.page
2025-06-30 08:38:10

Critically-Damped Higher-Order Langevin Dynamics
Benjamin Sterling, Chad Gueli, Mónica F. Bugallo
arxiv.org/abs/2506.21741

@michabbb@social.vivaldi.net
2025-07-31 12:27:38

Introducing Horizon Alpha, a new stealth #LLM 🌅
currently #FREE 👀
openrouter.ai/openrouter/horiz

@lysander07@sigmoid.social
2025-05-21 16:04:40

In the #ISE2025 lecture today we were introducing our students to the concept of distributional semantics as the foundation of modern large language models. Historically, Wittgenstein was one of the important figures in the Philosophy of Language, stating that "The meaning of a word is its use in the language."

An AI-generated image of Ludwig Wittgenstein as a comic strip character. A speech bubble shows his famous quote "The meaning of a word is its use in the language."
Bibliographical Reference: Wittgenstein, Ludwig. Philosophical Investigations, Blackwell Publishing (1953).
Ludwig Wittgenstein (1889–1951)

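(A minimal sketch of the distributional idea, not from the lecture itself: words that occur in similar contexts get similar vectors, here via raw co-occurrence counts over a toy corpus and cosine similarity. Corpus and window size are illustrative assumptions.)

```python
from collections import Counter
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on the news",
]

# Build a co-occurrence vector for each word from a +/-2 word window.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        ctx = vectors.setdefault(w, Counter())
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                ctx[words[j]] += 1

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "cat" and "dog" share their contexts, so they end up closer
# to each other than "cat" is to "stocks".
print(cosine(vectors["cat"], vectors["dog"]) > cosine(vectors["cat"], vectors["stocks"]))  # prints True
```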
@Techmeme@techhub.social
2025-06-28 21:30:54

[Thread] Cluely unveils a desktop AI assistant that it says can help users cheat on meetings, sales, lectures, interviews, learning new software, and more (Roy/@im_roy_lee)
x.com/im_roy_lee/status/193871

@pbloem@sigmoid.social
2025-05-15 07:51:13

I'm not shocked that Musk would try to manipulate Grok to vent his opinions. We've seen his manipulation before in the Twitter recommender system.
I am kind of fascinated by how difficult it is. I think that broadly trained LLMs tend to converge to the same "worldview", which is mostly left-libertarian.
You can't really force them away from this on one issue without introducing inconsistencies, and breaking the guardrails.

@arXiv_csNI_bot@mastoxiv.page
2025-07-22 10:49:10

Agentic Satellite-Augmented Low-Altitude Economy and Terrestrial Networks: A Survey on Generative Approaches
Xiaozheng Gao, Yichen Wang, Bosen Liu, Xiao Zhou, Ruichen Zhang, Jiacheng Wang, Dusit Niyato, Dong In Kim, Abbas Jamalipour, Chau Yuen, Jianping An, Kai Yang
arxiv.org/abs/2507.14633

@arXiv_csCR_bot@mastoxiv.page
2025-07-17 09:07:30

Exploiting Jailbreaking Vulnerabilities in Generative AI to Bypass Ethical Safeguards for Facilitating Phishing Attacks
Rina Mishra, Gaurav Varshney
arxiv.org/abs/2507.12185

@arXiv_csAR_bot@mastoxiv.page
2025-06-18 08:01:55

Tensor Manipulation Unit (TMU): Reconfigurable, Near-Memory Tensor Manipulation for High-Throughput AI SoC
Weiyu Zhou, Zheng Wang, Chao Chen, Yike Li, Yongkui Yang, Zhuoyu Wu, Anupam Chattopadhyay
arxiv.org/abs/2506.14364

@Techmeme@techhub.social
2025-07-23 18:11:00

Google DeepMind unveils Aeneas, an AI model for contextualizing ancient Latin inscriptions, to help historians interpret and restore fragmentary texts (Google DeepMind)
deepmind.google/discover/blog/

@arXiv_qbiobm_bot@mastoxiv.page
2025-07-14 07:59:42

Unraveling the Potential of Diffusion Models in Small Molecule Generation
Peining Zhang, Daniel Baker, Minghu Song, Jinbo Bi
arxiv.org/abs/2507.08005

@arXiv_csMA_bot@mastoxiv.page
2025-06-17 09:41:44

IndoorWorld: Integrating Physical Task Solving and Social Simulation in A Heterogeneous Multi-Agent Environment
Dekun Wu, Frederik Brudy, Bang Liu, Yi Wang
arxiv.org/abs/2506.12331