Tootfinder

Opt-in global Mastodon full text search. Join the index!

@aardrian@toot.cafe
2025-08-19 10:45:35

karlgroves.com/how-much-should
“… a good rule of thumb is to treat accessibility as a core part of your compliance strategy. Aiming for 5%–10% of your compliance budget is a solid starting point. For some, that may mean 0.1%–0…

@Techmeme@techhub.social
2025-07-17 17:08:42

OpenAI debuts ChatGPT Agent, which can control an entire computer and perform multi-step tasks, powered by a new dedicated model, rolling out to paid users (Hayden Field/The Verge)
theverge.com/ai-artificial-int

@Tupp_ed@mastodon.ie
2025-06-16 07:31:05

Hello!
I have a happy favour to ask.
Last year, I had a US intern working with us and she now is looking for help that you- good reader of this account- could give.
(She needs academic survey participants)
Check out her flyer below, click the link, and please RT

PARTICIPANTS NEEDED
Research Study on Data Privacy and Collective Redress
A researcher at University College Dublin wants to
conduct anonymized interviews to learn about why
you would or would not participate in collective legal
action to rectify a data protection violation. This
information can be used to help nonprofit
organizations better represent you when your data
protection rights are violated.
Requirements:
• 18 + years of age
• Residing in Ireland or the EU
• English speaking
FOR …
@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel," although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@thomasfuchs@hachyderm.io
2025-07-07 01:38:13

Even if “AI” worked (it doesn’t), there’s many reasons why you shouldn’t use it:
1. It’s destroying Internet sites that you love as you use chat bots instead of actually going to sources of information—this will cause them to be less active and eventually shut down.
2. Pollution and water use from server farms cause immediate harm; often—just like other heavy industry—these are built in underprivileged communities and harm poor people, without any benefits: the big tech companies get tax breaks and don't pay for power, while workers aren't from the community but commute in.
3. The basic underlying models of any LLM rely on stolen data, even when specific extra data is obtained legally. Chatbots can’t learn to speak English just by reading open source code.
4. You’re fueling a speculation bubble that is costing many people their jobs—because the illusion of “efficiency” is kept up by firing people and counting that as profit.
5. Whenever you use the great cheat machine in the cloud you’re robbing yourself from doing real research, writing or coding—literally atrophying your brain and making you stupider.
It’s a grift, through and through.

@muz4now@mastodon.world
2025-08-03 23:07:33

Your Breath Controls Your Vision: New Research Reveals Surprising Connection techfixated.com/your-breath-co

@gwire@mastodon.social
2025-06-16 12:39:13

Openreach (who operates a big chunk of internet access in the UK) collects a lot of data about end-user internet use and can generate press releases based on analysis of it.
This is like an advert for VPNs.
openreach.com/news/how-do-you-

@tiotasram@kolektiva.social
2025-08-19 13:29:37

If you've been paying attention, this is a *very* strong signal that OpenAI is hitting the limits of improved capability from more compute/data, and they're (predictably) all out of other ideas. The quiet "exponential model capabilities" lie is what Altman promised and is now unable to deliver, even in cherry-picked demo terms.
cnbc.com/2025/08/11/sam-altman
The "agentic" turn was never going to pan out, because it exposes the unreliability of LLMs too directly, and it turns out that no amount of yelling at your text vending machine to "Be smarter! Think harder!" will actually get you anything more than vended text.
I'm *praying* that we get into this crash sooner rather than later, since the faster it comes, the less painful it will be.
My recent reading in actual research papers corroborates this, for example, asking LLMs to play games exposes their utter lack of anything that can be termed "reasoning":
arxiv.org/pdf/2508.08501v1

@azonenberg@ioc.exchange
2025-07-04 07:52:35

Seriously, linkedin spammers?
You say you came across my REcon talk. About semiconductor reverse engineering.
And this somehow makes me a good candidate for a role involving cheating at online poker???

Hey Andrew,

I was researching top reverse engineers and came across your Recon2025 profile.

I'm looking for an experienced C++ developer specializing in Windows reverse engineering/ security research. We need someone proficient in DLL injection, function hooking, disassembly analysis, and Windows internals.

The role involves injecting DLLs into poker clients, hooking Win32 functions, intercepting packets and UI events, and developing automation tools. We've built a poker solver and are …
@pbloem@sigmoid.social
2025-06-09 21:41:44

If you want to help people in #academia who are maybe less fortunate than you, who have less famous supervisors, or work at less prestigious universities, here's one simple thing you can do:
Do proper literature research.
That means complete forward and backward snowballing from a decent seed set. Find everything that is relevant to your paper and cite it. Budget a couple of full…

@tiotasram@kolektiva.social
2025-07-17 13:31:49

To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential for legal liability, makes mistakes, etc. Upsides: popular among some peers, seems easy to use.
- Pick a better library (not always possible).
- Build internal functions to centralize boilerplate code, then use those (benefits: you get a better understanding of the external API, and a more-unit-testable internal code surface; probably less amortized effort).
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
Obviously LLM use in coding goes beyond this single issue, but there are similar analyses for each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases just seem to but are less-effort when amortized).
2. Near-term researchable solutions that directly address the problem and which would be much more desirable in the long term.
Thus in addition to disastrous LLM effects on the climate, on data laborers, and on the digital commons, they tend to suck us into cheap-seeming but ultimately costly design practices while also crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.
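The "build internal functions to centralize boilerplate" option above might look like this minimal Python sketch (my illustration, not from the original post), with the standard-library csv module standing in for a verbose external API:

```python
# Hypothetical sketch: rather than repeating an external API's setup
# boilerplate at every call site, centralize it in one internal helper.
# Here Python's csv module plays the role of the external API whose
# dialect/encoding arguments would otherwise be copied everywhere.
import csv
import io


def read_records(text: str, delimiter: str = ";") -> list[dict]:
    """Internal wrapper: one place owns the csv boilerplate, and the
    function is easy to unit-test with plain strings."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [dict(row) for row in reader]


rows = read_records("name;qty\nwidget;3\ngadget;5")
# rows == [{"name": "widget", "qty": "3"}, {"name": "gadget", "qty": "5"}]
```

Call sites now depend on a small, unit-testable internal surface instead of the external API's full argument list, which is the amortized-effort benefit the post describes.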

@arXiv_csHC_bot@mastoxiv.page
2025-08-04 08:51:41

Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models
Zhanna Kaufman, Madeline Endres, Cindy Xiong Bearfield, Yuriy Brun
arxiv.org/abs/2508.00140

@cellfourteen@social.petertoushkov.eu
2025-06-30 07:35:29

Went to Threads to check if we're there yet with the federation of the eurotrash portion of the userbase. Found Zuck's currently busy no-commenting to a bunch of angry writers whose books he used to train his AI.

https://www.threads.com/@zuck/post/DHbau1vvgLl

vanbadham
24/03/2025
You stole five of my books and I demand compensation
jamesrbenn
25/03/2025
You have stolen my intellectual property for your AI research.
stevezettlerauthor
27/03/2025
You and everyone else in print.
isabelle_broom
22/03/2025
I’m into NOT having every word I’ve ever written stolen by you to train AI. You’re just a guy with a llama and no morals.
dwadamsauthor
23/03/2025
I'm into billionaire puppets not stealing my work to trai…
@thomasfuchs@hachyderm.io
2025-08-10 13:30:14

One for the scientists:
You should only use tools and processes on your data that you completely understand.
LLMs are—by design—black boxes being trained to create algorithms that are so complex that it is strictly impossible to understand them.
Therefore they are incompatible with handling data (such as transforming data or generating synthetic data) for scientific research.

@arXiv_csSE_bot@mastoxiv.page
2025-06-26 08:29:20

Ten simple rules for PIs to integrate Research Software Engineering into their research group
Stuart M. Allen, Neil Chue Hong, Stephan Druskat, Toby Hodges, Daniel S. Katz, Jan Linxweiler, Frank Löffler, Lars Grunske, Heidi Seibold, Jan Philipp Thiele, Samantha Wittke
arxiv.org/abs/2506.20217

@arXiv_csRO_bot@mastoxiv.page
2025-08-05 11:49:21

Would you let a humanoid play storytelling with your child? A usability study on LLM-powered narrative Humanoid-Robot Interaction
Maria Lombardi, Carmela Calabrese, Davide Ghiglino, Caterina Foglino, Davide De Tommaso, Giulia Da Lisca, Lorenzo Natale, Agnieszka Wykowska
arxiv.org/abs/2508.02505

@SmartmanApps@dotnet.social
2025-07-28 23:33:43

"AI companies, which are depending more and more on synthetic data as they rapidly run out of material that was human-made and not polluted by AI drivel" - so, you knew there were problems, and released it to the public anyway (slow clap). And news-flash, because you released it to the public, even some of your "human-made" data is now polluted by AI drivel in the first place.

@paulomalley@c.im
2025-07-30 07:33:25

What if you could skip the most boring parts of your research? 🤔
I spent the last week testing the SciSpace AI Agent, and it's honestly wild. This feels like the future for students and researchers. I documented the whole thing so you can see it in action.
🎥 youtu.be/5hS28-f2Vgk
✨ I …

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@keen456@infosec.exchange
2025-07-26 19:10:58

@… Thought you might like this
oldbytes.space/@gloriouscow/11

@NathanALV@social.linux.pizza
2025-07-31 19:52:32

I feel that I might have some decent experience and research around IEMs, enough to at least make a simple article about them. It's been a bit as I have been busy with work, but I now have the time to do so. You may see it on your feed in August :blobcatrainbow: