Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@floheinstein@chaos.social
2025-07-02 05:59:43

Ransomware group Anubis has leaked what it extracted from Disneyland Paris
#ransomware

Blurred screenshot of blueprints of Disneyland Paris attractions; the text above it says:
"Attractions

In the data archive there are detailed drawings of many of the park's attractions. Rather than consider them all separately, we offer a detailed study of the flagship attraction, Frozen.

The main PDF document with the plan contains 395 pages and includes general drawings, a more detailed division by zones, and electrical and water-supply plans."

@benb@osintua.eu
2025-07-25 00:04:52

Ukrainian MP and businessman Yaroslav Rushchyshyn dies in motorcycle accident: benborges.xyz/2025/07/25/ukrai

@arXiv_quantph_bot@mastoxiv.page
2025-07-31 10:02:21

Solitons, chaos, and quantum phenomena: a deterministic approach to the Schrödinger equation
Damià Gomila
arxiv.org/abs/2507.22868

@tiotasram@kolektiva.social
2025-07-28 13:55:54

How popular media gets love wrong
Okay, my attempt at (hopefully widely applicable) advice about relationships, based on my mental "engineering" model and how it differs from the popular "fire" and "appeal" models:
1. If you're looking for a partner, don't focus too much on external qualities, but instead ask: "Do they respect me?" "Are they interested in active consent in all aspects of our relationship?" "Are they willing to commit a little now, and open to respectfully negotiating deeper commitment?" "Are they trustworthy, and willing to trust me?" Finding your partner attractive can come *from* trusting/appreciating/respecting them, rather than vice versa.
2. If you're looking for a partner, don't wait for infatuation to start before you try building a relationship. Don't wait to "fall in love;" if you "fall" into love you could just as easily "fall" out, but if you build up love, it won't be so easy to destroy. If you're feeling lonely and want a relationship, pick someone who seems interesting and receptive in your social circles and ask if they'd like to do something with you (it doesn't have to be a date at first). *Pursue active consent* at each stage (if they're not interested, ask someone else; this will be easier if you're not already infatuated). If they're judging you by the standards in point 1, this is doubly important.
3. When building a relationship, try to synchronize your levels of commitment & trust even as you're trying to deepen them, or at least try to be honest and accepting when they need to be out of step. Say and do things that show your partner what's important in your relationship (trust, commitment, affection, etc.), and ask them to do the same (ideally you won't have to ask if they're conscious of this too). Do these things not as a chore or a transaction when your partner does them, but because they're the work of building the relationship that you value for its own sake (and because you value your partner for themselves too).
4. When facing big external challenges to your commitment to a relationship, like a move, make sure your partner has an appropriate level of commitment too, but then don't undervalue the relationship relative to other things in life. Everyone is different, but *to me*, my committed relationship has been far more rewarding than, e.g., a more "successful" career would have been. Of course, it's worth noting here that non-men are taught by our society to undervalue their careers & other aspects of their lives and to sacrifice everything for their partners, which is toxic. I'm not saying "don't value other things," but, especially for men: *do* value romantic relationships and be prepared to make decisions that prioritize them over other things, assuming a partner who is comfortable with that commitment and willing to reciprocate.
Okay, this thread is complete for now, until I think of something else that I've missed. I hope this advice is helpful in some way (or at least not harmful). Feel free to chime in if you've got different ideas...
#relationships #love

@metacurity@infosec.exchange
2025-08-27 18:26:21

The DOGE team at SSA might have violated FISMA and other laws by not following the security controls spelled out in NIST SP 800-53, which are mandatory for federal agencies.
It's no surprise, then, that a whistleblower is warning that we have lost the ability to see who is accessing 300 million Americans' most sensitive information after DOGE moved SSA data to its own Amazon cloud instance.
Thanks to John Skinner, former project lead for 18F, for his expert i…

@PaulWermer@sfba.social
2025-06-27 14:12:09

"the operations at the two new Laguna Street sites will be contracted out which means they will no longer be eligible to work with their clients."
Somehow, I get the feeling that the great minds behind this project fail to understand the importance of community and trust.

@randy_@social.linux.pizza
2025-07-28 08:09:15

I haven't been very active in the last month. I haven't watched many movies or done much photography. All of this is because I finally found a new flat away from city life, close to nature. One thing that has changed is that I'm back to being very active on Couchsurfing. I love meeting new people from around the world, fellow travelers with great stories, and maybe even friends for life. This is much more interesting and important than following every footstep of what's happening in the new…

@tgpo@social.linux.pizza
2025-06-19 19:36:53

Y'all that have been following me for a bit know that I'm slowly adding customizable colors to #Jellyfin for #Roku.
This has been a HUGE undertaking!
So will someone explain to me why on top of that I'm now adding the ability to import the colors from the custom CSS setti…

Demo video showing me importing color values from the Jellyfin server and using the colors in the Jellyfin for Roku client.
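
The post is truncated, but the demo gives the gist: read color values out of the Jellyfin server's custom CSS so the client theme can reuse them. As a rough sketch of that idea only, here is a minimal Python illustration; the real client is a Roku app (BrightScript), and the CSS variable names below are invented for the example:

```python
import re

# Hypothetical sketch, not the actual Jellyfin for Roku code: pull hex color
# values out of a server's custom CSS so a client theme could reuse them.
CSS_SAMPLE = """
:root {
  --accent: #00a4dc;      /* invented variable names, for illustration only */
  --background: #101010;
}
.button { color: #fff; }
"""

# Match 3- or 6-digit hex colors such as #fff or #00a4dc.
HEX_COLOR = re.compile(r"#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b")

def extract_colors(css: str) -> list[str]:
    """Return the distinct hex colors found in a CSS string, in order."""
    seen: dict[str, None] = {}
    for color in HEX_COLOR.findall(css):
        seen.setdefault(color.lower())
    return list(seen)

print(extract_colors(CSS_SAMPLE))  # ['#00a4dc', '#101010', '#fff']
```

In a real setup the CSS would come from the server's branding settings rather than a hardcoded string, and the parsed colors would be mapped onto the client's theme fields.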

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think that, combined with an interaction pattern where I'd assume their stance on something and respond critically to it, ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I responded by asking some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time wanting to feel as if they were someone who values public critique and open discussion of tricky issues. (If anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here, and it would be useful to hear that if so.)
In any case, it feels odd that at the end of the entire discussion I realize I still don't actually know their position on whether they think the AI use case in question is worthwhile. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant they thought this use case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system and, despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe, which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing the ethics of AI and helping students explore it, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite the limitations of our perspectives. If we never commit to an ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is), then we have to accept every technology without objection, limiting ourselves to trying to improve its impacts without opposing it. Given who currently controls most of the resources that go into exploring new technologies, that stance is too permissive. Perhaps if our objection to a technology were absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development and use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I'm confident enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here, I guess, is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" stance really bothers me.