Tootfinder

Opt-in global Mastodon full text search. Join the index!

@davidaugust@mastodon.online
2025-07-19 13:14:26

Well done Alexandra Petri!
Pretty sure if you wanted human rights, you don't go getting rounded up by ICE or DHS or CBP or the Bureau of Prisons, who will take away your human rights. /s
Reminds me of the old "they were killed by the drone strike, therefore they were a combatant."

@pbloem@sigmoid.social
2025-07-18 09:25:22

Now out in #TMLR:
🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇
There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.
We introduce an RL-based and a GFlowNet-based sampler and show that the approach perf…

A diagram of the GRAPES pipeline. It shows a subgraph being sampled in two steps and being fed to a GNN, with a blue line showing the learning signal. The caption reads Figure 1: Overview of GRAPES. First, GRAPES processes a target node (green) by computing node inclusion probabilities on its 1-hop neighbors (shown by node color shade) with a sampling GNN. Given these probabilities, GRAPES samples k nodes. Then, GRAPES repeats this process over nodes in the 2-hop neighborhood. We pass the sampl…
A results table for node classification on heterophilous graphs. Table 2: F1-scores (%) for different sampling methods trained on heterophilous graphs for a batch size of 256 and a sample size of 256 per layer. We report the mean and standard deviation over 10 runs. The best values among the sampling baselines (all except GAS) are in bold, and the second best are underlined. MC stands for multi-class and ML stands for multi-label classification. OOM indicates out of memory.
Performance of samplers vs sample size, showing that GRAPES generally performs well across sample sizes, while other samplers often show more variance. The caption reads Figure 4: Comparative analysis of classification accuracy across different sampling sizes for sampling baselines and GRAPES. We repeated each experiment five times; the shaded regions show the 95% confidence intervals.
A diagrammatic illustration of a graph classification task used in one of the theorems. The caption reads Figure 9: An example of a graph for Theorem 1 with eight nodes. Red edges belong to E1; features x_i and labels y_i are shown beside every node. For nodes v1 and v2 we show the edge e12 as an example. As shown, the label of each node is the second feature of its neighbor, where a red edge connects them. The edge homophily ratio is h = 12/28 ≈ 0.43.
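As a rough illustration of the two-step loop the Figure 1 alt text describes, here is a minimal Python sketch. Everything in it is an assumption for illustration: the toy graph, the uniform `inclusion_probs` stand-in for the learned sampling GNN, and all the names. GRAPES itself trains that sampler with RL or a GFlowNet, which this sketch does not attempt.

```python
import random

# Toy adjacency list standing in for a real graph (assumption).
ADJ = {
    0: [1, 2, 3], 1: [0, 4], 2: [0, 5, 6],
    3: [0], 4: [1], 5: [2], 6: [2, 7], 7: [6],
}

def inclusion_probs(nodes):
    """Stand-in for the learned sampling GNN: uniform scores here."""
    return {v: 1.0 / len(nodes) for v in nodes}

def sample_k(nodes, k):
    """Sample up to k distinct nodes according to inclusion probabilities."""
    nodes = list(nodes)
    if not nodes:
        return set()
    probs = inclusion_probs(nodes)
    sampled = set()
    while len(sampled) < min(k, len(nodes)):
        sampled.add(random.choices(nodes, [probs[v] for v in nodes])[0])
    return sampled

def grapes_like_sample(target, k=2):
    """Step 1: sample from 1-hop neighbors; step 2: from the 2-hop frontier."""
    hop1 = sample_k(ADJ[target], k)
    frontier = {u for v in hop1 for u in ADJ[v]} - hop1 - {target}
    hop2 = sample_k(frontier, k)
    return {target} | hop1 | hop2  # nodes of the subgraph fed to the GNN

print(grapes_like_sample(0))
```
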
@hex@kolektiva.social
2025-06-14 10:21:24

I have my share of issues with Parkrose Permaculture, but she has a lot of things I do strongly agree with. I can't stress enough that you never dehumanize your enemies. You can respond appropriately to violence. You can defend yourself from them by any means necessary. But you do not dehumanize them. You always limit your response to the minimum necessary to defend yourself.
There are a number of former Nazi skins who became antifascists after realizing they were wrong. Those folks tend to be some of the most dedicated because they feel a debt, and some of the most knowledgeable because they were there. Coming out of these types of cults, police included, is hard and takes time. A lot of us don't have the ability to work with them. But some do.
By repeatedly humanizing your opponent, you can break some of them. The #Seattle Police Department was not defunded but saw a massive reduction in numbers because their morale was destroyed. Some people will never change. Some people are broken and feel like they need the power. But if you change one person's mind, even give them something to think about, it's a crack. If even one cop quits, that's one less trained gun pointed at you in the future.
The 18-year-old marines and federalized national guard troops out there are literally kids. A lot of them came from poor communities. They are being used in a way they haven't been trained for, doing things they (should) have been told are not legal. They joined to get out of poverty, to go to college, or to "defend the American people" (regardless of how misguided that is). Few, if any, of them joined to abuse people. They will be especially open to persuasion.
Remind those troops that they are carrying out illegal orders, that they are being called on to violate their oath to protect the constitution, that they are suppressing the free speech of the fellow Americans they swore to defend. Remind them that the people they could be illegally arresting now are just like their parents, their neighbors, their families, the friends who didn't join. Remind them that this is the first step. They will be called on to kill Americans if they let this keep going.
Remind them ICE sleeps in hotels while they sleep on the ground. Remind them that their drunk and incompetent leadership thinks of them as disposable tools. Remind them that some of these people are out protesting *for them* against cuts to the VA and other services. Remind them that the people they're defending refuse to make college free so they can recruit from poor schools. Remind them that they will always be welcome when they're ready to join the side of freedom and justice.
When you dehumanize your enemies, you unify them. When you humanize your enemies, you can divide them. There is no weapon available to us right now so powerful as compassion.
youtu.be/YtWOYUDMsBw

@crell@phpc.social
2025-06-20 13:51:23

Any time someone says "I asked ChatGPT and it said the answer was..." I just assume it's wrong and lower my opinion of the person saying it.
I wonder if this is how people in the late 90s felt about Yahoo, or Google, or in the early 2000s about Wikipedia. At least those were, or pointed to, primary sources, though. ChatGPT is not.

@adamhotep@infosec.exchange
2025-05-20 03:53:52

I made a helper for Proximity (a word association game like #Semantle). It lets you poke around the database to see what's close to what:
github.com/adamhotep/userscrip

Screen shot of a game of Proximity in progress. There's a text box at the top where you enter your guesses, a "Guess" button beside it, then the list of guesses so far, with a colored bar indicating how close it is; the top guess is the most recent (264 away) while later guesses are ranked from closest (44 away) to farthest (tepid). The content is blurred so today's game isn't spoiled for you. 

Below the guesses is a panel of buttons including "Hint" and "Nearby...", which is circled by hand…
Another screenshot, this time of the "Nearby words" view, normally shown after completing a puzzle. A text box with its "Nearby" button is again circled by hand. Below that, it says "Nearby words" and it lists the nearest words to "proximity", including their similarity metric (proximity 1 is "nearness" with a similarity of 67.19, proximity 10 is "located" with a similarity of 45.07).
@tiotasram@kolektiva.social
2025-05-15 17:02:17

The full formula for the probability of "success" is:
p = {
  1/(2^(-n+1))      if n is negative, or
  1 - (1/(2^(n+1))) if n is zero or positive
}
(Both branches have the same value when n is 0, so the behavior is smooth around the origin.)
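A minimal Python sketch of that formula (the function name `success_prob` is mine); it reproduces the 25% / 50% / 75% progression around the origin mentioned further down:

```python
def success_prob(n: int) -> float:
    """Probability of success at advantage level n (negative = disadvantage)."""
    if n < 0:
        return 1 / (2 ** (-n + 1))
    return 1 - 1 / (2 ** (n + 1))

# Both branches agree at n = 0, and each level halves the remaining gap:
print([round(success_prob(n), 3) for n in (-2, -1, 0, 1, 2)])
# -> [0.125, 0.25, 0.5, 0.75, 0.875]
```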
How can we tweak this?
First, we can introduce fixed success and/or failure chances unaffected by level, with this formula only taking effect if those don't apply. For example, you could do 10% failure, 80% by formula, and 10% success to keep things from being too sure either way even when levels are very high or low. On the other hand, this flattening makes the benefit of extra advantage levels even less exciting.
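A sketch of that first tweak, assuming the 10/80/10 split from the example; the single pre-roll and the helper names are my choices:

```python
import random

def success_prob(n: int) -> float:
    if n < 0:
        return 1 / (2 ** (-n + 1))
    return 1 - 1 / (2 ** (n + 1))

def flattened_roll(n: int) -> bool:
    """10% auto-fail, 10% auto-success, 80% decided by the level formula."""
    r = random.random()
    if r < 0.10:
        return False
    if r >= 0.90:
        return True
    return random.random() < success_prob(n)
```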
Second, we could allow for gradations of success/failure, and treat the coin pools I used to explain that math a bit like dice pools. An in-between scheme could require linearly more successful flips to reach each next higher grade of success. For example, simple success on a crit roll might mean dealing 1.5x damage, but if you succeed on 2 of your flips, you get 9/4 damage, or on 4 flips 27/8, or on 7 flips 81/16. In this world, stacking crit levels might be a viable build, and just giving up on armor would be super dangerous. In the particular case I was using this for just now, I can't easily do gradations of success (that's the reason I turned to probabilities in the first place), but I think I'd favor this approach when feasible.
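A sketch of the grading scheme those numbers imply (thresholds of 1, 2, 4, 7, ... successful flips for grades 1 through 4, each grade multiplying damage by 1.5; the function names are mine):

```python
def grade(successes: int) -> int:
    """Highest grade reached; thresholds are 1, 2, 4, 7, 11, ... successes."""
    g, threshold = 0, 1
    while successes >= threshold:
        g += 1
        threshold += g  # each grade needs g more successes than the last
    return g

def crit_multiplier(successes: int) -> float:
    """Each grade multiplies damage by 1.5: 1 flip -> 1.5x, 2 -> 9/4,
    4 -> 27/8, 7 -> 81/16, matching the example above."""
    return 1.5 ** grade(successes)
```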
The main innovation here over simple dice pools is how to handle situations where the number of dice should be negative. I'm almost certain it's not a truly novel innovation though, and some RPG fan can point out which system already does this (please actually do this, I'm an RPG nerd too at heart).
I'll leave this with one more tweak we could do: what if the number 2 in the probability equation were 3, or 2/3? I think this has a similar effect to just scaling all the modifiers a bit, but the algebra escapes me in this moment and I'm a bit lazy. In any case, reducing the base of the probability exponent should let you get a few more gradations near 50%, which is probably a good thing, since the default goes from 25% straight to 50% and then to 75% with no integer stops in between.
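One way to pin down that algebra: reparameterize so the 50% anchor at n = 0 survives a base change, writing p = 1 - b^(-n)/2 for n >= 0 and p = b^(n)/2 for n < 0, which is exactly the original formula when b = 2. Since b^(-n) = 2^(-n·log2(b)), changing the base to b is precisely scaling every modifier by log2(b), confirming the hunch above. The base has to stay above 1 to keep higher levels better, so the sketch below uses 3/2 for the "reduced base" case; the reparameterization is mine, not the post's.

```python
def success_prob(n: int, base: float = 2.0) -> float:
    """p(n) = 1 - base**(-n)/2 for n >= 0, mirrored below zero; this is
    exactly the original formula when base == 2 (my reparameterization)."""
    if n < 0:
        return (base ** n) / 2
    return 1 - (base ** -n) / 2

# A smaller base gives finer gradations near 50%; a larger one, coarser:
for b in (2.0, 3.0, 1.5):
    print(b, [round(success_prob(n, b), 3) for n in (-2, -1, 0, 1, 2)])
# 2.0 [0.125, 0.25, 0.5, 0.75, 0.875]
# 3.0 [0.056, 0.167, 0.5, 0.833, 0.944]
# 1.5 [0.222, 0.333, 0.5, 0.667, 0.778]
```
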

@mgorny@social.treehouse.systems
2025-07-14 16:39:18

About morbid thriftiness (Autism Spectrum Condition)
As you may have noticed, I am morbidly thrifty. Usually I don't buy stuff that I don't need — and if I decide that I actually need something, I am going to ponder it for a while, look for value products, and hunt for the best price. And with some luck, I'm going to decide I don't need it that bad after all.
One reason for that is probably how I was raised. My parents taught me to be thrifty, so I have to be. It doesn't matter that, in retrospect, I see that their thriftiness was applied rather arbitrarily to some purchases and not others, or that perhaps they were greedy — spending less on individual things so that they could buy more. Well, I can't delude myself like that, so I have to be thrifty for real. And when I fail, when I pay too much, when I get cheated — I feel quite bad about it.
The other reason is that I keep worrying about my future. It doesn't matter how rich I may end up — I'll keep worrying that I'll run out of money. Perhaps I'll lose my job and won't be able to find anything for a long time. Perhaps something terrible will happen and I'll suddenly need to pay a lot.
Another thing is that I easily get attached to objects. Well, it's easier to be thrifty when you really don't want to replace stuff. Over time you also learn to avoid getting new stuff at all, since the more stuff you have, the more stuff may break and need to be thrown away.
Finally, there's my environmental responsibility. I admit that I don't do enough — but at least the things I can do, I do.
[EDIT: and yes, I feel bad about how expensive my new phone was, even though it's of much higher quality than the last one. Also, I got a worse deal because I waited too long.]
#ActuallyAutistic

@arXiv_csNE_bot@mastoxiv.page
2025-06-19 08:27:34

Estimate Hitting Time by Hitting Probability for Elitist Evolutionary Algorithms
Jun He, Siang Yew Chong, Xin Yao
arxiv.org/abs/2506.15602

@tiotasram@kolektiva.social
2025-05-16 10:45:55

To dig slightly deeper here, I think that there's a feedback loop between "fall in love/wait for your perfect match (and by the way, girls, the only career you should aspire to is the literally unattainable 'princess')" Disney stuff and this "my characters are complex, I'm so sophisticated; they suffer but it's not intolerable and their lives are good enough despite the imperfections" crap that gets praised as so evocative of the human condition. In fact, I think it merely evokes the condition of its authors & fans who were poisoned by the Disney in their youth and who have remained bad at relationships ever since, though this is not exactly their fault. In any case, their white middle-aged wisdom-shaped-but-quite-bitter and intricately-constructed-so-it's-hard-to-see-the-really-untrue-character-facets work ends up keeping their audience within the "romance is luck" cult by way of reassuring them that a middling romance with lots of doubt and complications is "just life", even though the author doesn't actually have any broader perspective on what life is than anyone else.
This has turned into a bit of a rant, but I think I'll just add that reading Mama by Nikkya Hargrove just before Dream State helped immensely to see how the distant & awkward parent-child relationships of the latter are not a product of human nature but instead of white western culture & capitalism.
(The defeatism about climate change is a whole nother dimension of wrong about Dream State, but that's a separate rant.)

@hex@kolektiva.social
2025-06-12 13:13:31

I'm pretty sure all the white folks (and anyone else who didn't learn the underlying lessons first hand) were assigned to learn about Red Summer, the Chinese Exclusion Acts, Wilmington 1898, and more than a few other things that came up in cultural conversation during the last Trump presidency. This is all on the test, and you're taking it now.
But in case anyone missed the assignment, I'll give you the TL;DR: ethnic cleansing has been central to American politics basically forever, which shouldn't be surprising given it's a nation founded on genocide and the belief in the right to commit it without constraint.
If you haven't done the math yet, I'll help you out. The "Haitian immigrants" rhetoric lets them grab black folks, they've been grabbing folks from Mexico south and lumping in indigenous Americans (just so they don't skip out on the oldest American genocide), and the Muslim ban/Hamas rhetoric lets them grab anyone else they see fit.
The lack of due process lets them grab anyone, and they don't have to prove anything. They're talking about deporting "one million" and possibly "millions" of people. So how do they get those numbers?
There are already reports that they're just grabbing random brown folks, trying to take 3k people per day. They fly to blue cities and grab as many black and brown people as they can, then send them to death camps in foreign countries and pretend they have no way to get them back. That's it. That's the game.
This isn't new. The big difference now is that the cops aren't hiding their uniforms under white hoods this time. Do you get it yet?