Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mlippert@vmst.io
2025-07-21 15:04:32

#Wordle 1,493 6/6*
⬜⬜⬜⬜⬜ <1% of 216,876 (969)
⬜⬜⬜⬜⬜ 1% of 77 (110)
⬜🟨⬜🟨⬜ 0 of 2 (10)
⬜🟩⬜⬜🟨 0 of 0 (3)
⬜🟩⬜🟨🟩 0 of 0 (1)
🟩🟩🟩🟩🟩
WordleBot
Skill 86/99
Luck 15/99
After guess 4, I'd eliminated 16 letters and knew 2 of the remaining 10 were in the word. I thought of 2 words that fit; the 1st didn't work as my 5th guess, and the 2nd was the answer as my 6th guess.
Wow, that's a pretty low luck score for 6 guesses. I see the luck of the 6 guesses was 15, 6, 4, 16, 33, -
I'm quite happy with the progression of my guesses today, not sure what the bot will think...
Well I missed a word, but my 5th guess did eliminate it.

@heiseonline@social.heise.de
2025-09-22 12:23:00

EU cybersecurity agency confirms ransomware attack on airport software
The EU agency for cybersecurity, Enisa, confirmed on Monday that the attacks on Collins software are an extortion attempt.

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to it ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me.

They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time wanting to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me.

The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous.
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I believe strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@ukraine_live_tagesschau@mastodon.social
2025-08-21 05:50:16

Poland protects its own airspace amid Russian attacks on Ukraine
Poland is scrambling aircraft to protect its own airspace in response to Russian attacks on western Ukraine. In view of the activity of Russian long-range aircraft said to be flying strikes against Ukrainian territory, aircraft of the Polish Air Force and of allied states are operating in Polish airspace, …

@arXiv_csCL_bot@mastoxiv.page
2025-08-22 10:10:51

HebID: Detecting Social Identities in Hebrew-language Political Text
Guy Mor-Lan, Naama Rivlin-Angert, Yael R. Kaplan, Tamir Sheafer, Shaul R. Shenhav
arxiv.org/abs/2508.15483

@chris@mstdn.chrisalemany.ca
2025-07-20 13:56:12

Interesting and fulsome interview on exactly how NORAD reacted to the plane hijacking in Victoria/Vancouver on Tuesday. They speak to the commander of NORAD, currently a Canadian.
I appreciated the last section most though:
“It's the only bi-national command in the world; it's been a strong bi-national command since 1958, and nothing has changed.
We don't ever talk politics at work. It's not something that we do, nor does it affect what we do.
I would say that we are as tight, and probably tighter than we've ever been. As the world around us gets to be more dangerous, I would say that NORAD is even closer than it's ever been.
But one last thing — we have the watch. That's the slogan here for NORAD.
To give you a great example, all of the assessors, we all live on-base in homes that actually have a safe, we call it the SCIF. It's basically a classified room that has all of our systems. The days that you're on duty, you're either at work or in your house. Because the timelines are so small for answering the phone, you don't walk the dog; you don't do all these other things, and someone covers for you when you're going between work and home.
That's how important this mission is to us down here. It's really important for everybody in Canada to know that at NORAD, we have the watch”
#canpoli #norad #cf18
cbc.ca/news/canada/british-col

@PaulWermer@sfba.social
2025-08-22 13:45:38

With an important reminder from @…

Today's Non Sequitur cartoon, of God, behind a golden podium, with an open laptop and a stack of brown books next to the laptop. To the left, at the top of some stairs, is a man in a blue suit. The podium and the man are standing on clouds, with the sky above the clouds pale blue. A sign, black text on a pale yellow background, reads "shortcut in the final C.E.O. performance review" and on the podium, in front of the man, is a sign that reads "FAQ Answer: Legal does not equal moral" (Do…
@arXiv_csHC_bot@mastoxiv.page
2025-09-22 09:48:11

AnchoredAI: Contextual Anchoring of AI Comments Improves Writer Agency and Ownership
Martin Lou, Jackie Crowley, Samuel Dodson, Dongwook Yoon
arxiv.org/abs/2509.16128

@arXiv_csCL_bot@mastoxiv.page
2025-08-21 08:07:29

Confidence Estimation for Text-to-SQL in Large Language Models
Sepideh Entezari Maleki, Mohammadreza Pourreza, Davood Rafiei
arxiv.org/abs/2508.14056