Tootfinder

Opt-in global Mastodon full text search. Join the index!

@muz4now@mastodon.world
2025-08-04 14:18:03

Brian Eno Announces Together for Palestine Benefit Concert in London
pitchfork.com/news/brian-eno-a

@raiders@darktundra.xyz
2025-09-05 19:04:53

Raiders mailbag: Does WR’s surprise retirement benefit these rookies? reviewjournal.com/sports/raide

@Techmeme@techhub.social
2025-08-05 00:01:26

SEO is being supplanted by generative-engine optimization, or GEO, which focuses on AI chatbots and does not benefit from longstanding SEO tricks (John Herrman/New York Magazine)
nymag.com/intelligencer/articl

@davidaugust@mastodon.online
2025-07-04 22:39:16

"We started this country by grabbing rifles we had laying around and starting to shoot at the soldiers of the one of the most powerful empires on earth, all because they taxed our breakfast beverage."
medium.com/the-geopolitical-ec

@crell@phpc.social
2025-09-05 20:24:01

Today I would very much benefit from extension functions in #PHP...

@gedankenstuecke@scholar.social
2025-09-05 01:55:20

Oh, fun: Someone did some personal science to evaluate if _they personally_ benefit from "AI", and this is what they found:
«Yes, it’s a limited sample and could be chance, but also so far AI appears to slow me down by a median of 21%, exactly in line with the METR study. I can say definitively that I’m not seeing any massive increase in speed»
Where's the Shovelware? Why AI Coding Claims Don't Add Up
#quantifiedself

@Mediagazer@mstdn.social
2025-08-05 07:10:46

SEO is being supplanted by generative-engine optimization, or GEO, which focuses on AI chatbots and does not benefit from longstanding SEO tricks (John Herrman/New York Magazine)
nymag.com/intelligencer/articl

Tax Cuts Now, Benefit Cuts Later:
The Timeline in the Republican Megabill
👉 Republicans deferred some of their most painful spending cuts until after the midterm election
To pay for the tax policies,
which confer their greatest benefits on the wealthy,
Republican lawmakers have looked to slash programs that are both popular and widely used,
discomfiting even some within their own ranks.
💥The savings from the safety net cuts still are not enough to of…

@primonatura@mstdn.social
2025-08-04 15:00:20

"Climate change: new method can more accurately attribute environmental harm to individual polluter"
#Climate #ClimateChange

@mxp@mastodon.acm.org
2025-08-05 17:15:50

“Assuming we steward it safely and responsibly into the world…”—Right, bro, as if.
theguardian.com/technology/202

@smurthys@hachyderm.io
2025-07-05 12:42:44

"You happen to have the benefit of knowing what you're talking about."
An actual "compliment" I received on a conference talk, only to realize a few days later how loaded it was. It still hurts when I remember it, and it happened in 2006. 😖
#AlmostACompliment
#HashTagGames

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance"?
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down] (arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care, which those students don't know how to exercise, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt, or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
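For illustration only (a hypothetical task, not from the post): the kind of compact, "idiomatic" Python an LLM might emit for a simple assignment, leaning on exactly the constructs listed above (a generator `yield`, `try`/`finally`, the walrus operator) that many 2nd- and 3rd-year students have never studied:

```python
# Hypothetical LLM-style solution to "parse scores from text lines".
# It works, but stacks three constructs intro students rarely know.

def read_scores(lines):
    """Yield numeric scores from raw text lines, skipping blanks."""
    for raw in lines:
        if (line := raw.strip()):   # walrus operator: assign and test at once
            try:
                yield float(line)   # generator: values are produced lazily
            finally:
                pass                # try/finally: runs even if float() raises

scores = list(read_scores(["10", "", " 7.5 ", "3"]))
print(scores)  # [10.0, 7.5, 3.0]
```

A student who has only seen plain loops and lists has little chance of debugging this if it misbehaves.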
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@arXiv_csSE_bot@mastoxiv.page
2025-07-04 09:33:41

Human-Machine Collaboration and Ethical Considerations in Adaptive Cyber-Physical Systems
Zoe Pfister
arxiv.org/abs/2507.02578

@rasterweb@mastodon.social
2025-09-03 12:26:19

Sadly this is the only way we're getting our trains to go twice as fast...
("Trains operate at a maximum of 15 mph in the corridor... the report called for the trains to operate at up to 40 mph.")
urbanmilwaukee.com/2025/09/02/

@memeorandum@universeodon.com
2025-07-02 22:05:57

Tax Cuts Now, Benefit Cuts Later: The Timeline in the Republican Megabill (New York Times)
nytimes.com/2025/07/02/us/poli
memeorandum.com/250702/p117#a2

@arXiv_csDC_bot@mastoxiv.page
2025-08-05 09:51:00

PUSHtap: PIM-based In-Memory HTAP with Unified Data Storage Format
Yilong Zhao, Mingyu Gao, Huanchen Zhang, Fangxin Liu, Gongye Chen, He Xian, Haibing Guan, Li Jiang
arxiv.org/abs/2508.02309

@mszll@datasci.social
2025-07-03 09:08:01

Probably a good day 🥵 to remind of our CoolWalks study: nature.com/articles/s41598-025
- Above all, stop burning fossil fuels
- Buildings (and trees) cast a lot of shade in cities. We systematically quantify the benefits for 🚶🚴
- Make shade plans, b…

Example of three different links connecting two nodes in the street network, from shortest and least shaded (1, blue) to longest and most shaded (3, green).
@sean@scoat.es
2025-07-03 17:11:16

It’s annoyingly-difficult to advocate for users from a technology standpoint in 2025. Defaults are usually slanted for the benefit of service providers.
As an example: it took me WAY too long to figure out how to turn off link rewriting in emails we’re sending with AWS Simple Email Service. I had to find it in two different places before we could send URLs we actually asked it to send (instead of bouncing them through their tracking service). The default was invasive and hard to change…

@ubuntourist@mastodon.social
2025-07-03 20:15:30

...but MAGAts will somehow blame Biden when it kicks them in the teeth...
‘It’s harsh. It’s mean, brutal’: Trump bill to cause most harm to America’s poorest.
theguardian.com/us-news/2025/j

@arXiv_astrophIM_bot@mastoxiv.page
2025-07-04 09:05:31

Image Marker
Ryan Walker, Andi Kisare, Lindsey Bleem
arxiv.org/abs/2507.02153 arxiv.org/pdf/2507.02153

@Techmeme@techhub.social
2025-09-02 15:30:48

Amazon plans to end the ability for Prime members to share free shipping benefits with individuals outside their household, starting on October 1 (Emma Roth/The Verge)
theverge.com/news/769051/amazo

@arXiv_csIR_bot@mastoxiv.page
2025-09-03 11:53:13

Towards Multi-Aspect Diversification of News Recommendations Using Neuro-Symbolic AI for Individual and Societal Benefit
Markus Reiter-Haas, Elisabeth Lex
arxiv.org/abs/2509.02220

@scott@carfree.city
2025-07-02 05:08:38

hope this will lead to lots of garages and driveways being eliminated!
missionlocal.org/2025/07/sf-ad

@NFL@darktundra.xyz
2025-09-03 22:16:23

Lamar Jackson contract: Ravens QB sidesteps question about extension, says he's 'not worried about that'

cbssports.com/nfl/news/lamar-j…

@arXiv_eessSY_bot@mastoxiv.page
2025-09-03 12:52:13

Grid congestion stymies climate benefit from U.S. vehicle electrification
Chao Duan, Adilson E. Motter
arxiv.org/abs/2509.01662 arxiv.org/p…

@cowboys@darktundra.xyz
2025-06-30 17:20:44

3 players who will benefit the most from adding George Pickens insidethestar.com/3-players-wh

@arXiv_econGN_bot@mastoxiv.page
2025-07-04 08:05:01

Tertiary Education Completion and Financial Aid Assistance: Evidence from an Information Experiment
Luca Bonacini, Giuseppe Pignataro, Veronica Rattini
arxiv.org/abs/2507.02560

@arXiv_eessIV_bot@mastoxiv.page
2025-07-03 09:33:00

Structure and Smoothness Constrained Dual Networks for MR Bias Field Correction
Dong Liang, Xingyu Qiu, Yuzhen Li, Wei Wang, Kuanquan Wang, Suyu Dong, Gongning Luo
arxiv.org/abs/2507.01326

@arXiv_csDC_bot@mastoxiv.page
2025-07-04 09:07:51

FlowSpec: Continuous Pipelined Speculative Decoding for Efficient Distributed LLM Inference
Xing Liu, Lizhuo Luo, Ming Tang, Chao Huang
arxiv.org/abs/2507.02620

@arXiv_csIT_bot@mastoxiv.page
2025-09-03 11:04:43

Learning to Ask: Decision Transformers for Adaptive Quantitative Group Testing
Mahdi Soleymani, Tara Javidi
arxiv.org/abs/2509.01723 arxiv.…

@lschiff@mastodon.sdf.org
2025-08-01 03:25:20

Standing up to Trump's assault on #research requires solidarity. ARL, ACRL, SSP, STM, and AUPresses issued a joint statement & request for feedback on next steps. Share your ideas!

@arXiv_csDB_bot@mastoxiv.page
2025-09-03 18:49:44

Replaced article(s) found for cs.DB. arxiv.org/list/cs.DB/new
[1/1]:
- Can Uncertainty Quantification Improve Learned Index Benefit Estimation?
Tao Yu, Zhaonian Zou, Hao Xiong

@benb@osintua.eu
2025-06-30 19:59:36

The Kyiv Independent launches 'How to help Ukraine' newsletter: benborges.xyz/2025/06/30/the-k

@mgorny@social.treehouse.systems
2025-07-01 16:20:14

One of the goals I've set for further development of #Python eclasses in #Gentoo was to avoid needless complexity. Unfortunately, the subject matter sometimes requires them. However, many of the functions added lately were already manually done in ebuilds for years.
We started disabling plugin autoloading years ago. First we just did that for individual packages that caused issues. Then, for those where tests ended up being really slow. Finally, pretty much anywhere `python_test()` was declared. Doing it all manually was particularly cumbersome; all I needed for `EPYTEST_PLUGINS` was a good idea of how to generalize it.
Similarly, `EPYTEST_XDIST` was added after we had been manually adding `epytest -p xdist -n "$(makeopts_jobs)" --dist=worksteal`, and while at it, I've added `EPYTEST_JOBS` to override the job count.
Perhaps `EPYTEST_TIMEOUT` wasn't that common. However, it was meant to help CI systems that could otherwise get stuck on a hanging test.
Similarly, "standard library" version matching (like `3.9`) in `python_gen_cond_dep` was added after a long period of explicitly stating `python3_9 pypy3`. As an extra benefit, this also resolved the problem that at the time `pypy3` could mean different Python versions.

@aardrian@toot.cafe
2025-08-01 02:28:26

If you are part of a book club, like maybe an accessibility book club, remember that there are far better choices than Amazon for your book.
For example, this book can come from your local independent seller or benefit a support fund:
bookshop.org/p/books/…

@arXiv_quantph_bot@mastoxiv.page
2025-07-02 10:15:00

Harnessing Patterns to Support the Development of Hybrid Quantum Applications
Daniel Vietz, Martin Beisel, Johanna Barzen, Frank Leymann, Lavinia Stiliadou, Benjamin Weder
arxiv.org/abs/2507.00696

@memeorandum@universeodon.com
2025-08-01 11:06:04

Harm or Help? Why Companies Are Battling Tariffs Meant to Benefit Them. (Ana Swanson/New York Times)
nytimes.com/2025/08/01/busines
memeorandum.com/250801/p13#a25

@arXiv_hepex_bot@mastoxiv.page
2025-09-03 10:30:03

Double Descent and Overparameterization in Particle Physics Data
Matthias Vigl, Lukas Heinrich
arxiv.org/abs/2509.01397 arxiv.org/pdf/2509.…

@arXiv_csRO_bot@mastoxiv.page
2025-07-02 09:42:50

Edge Computing and its Application in Robotics: A Survey
Nazish Tahir, Ramviyas Parasuraman
arxiv.org/abs/2507.00523

@ruth_mottram@fediscience.org
2025-08-30 07:57:59

This morning's #veganWeek post: breakfast was porridge (made with water and soya milk) for the kids, toast with peanut butter, strawberries and banana for me #VeganWeek
Am I an influencer now?
#vegan food I'm preparing for my family for the next few days. It's really not hard. Today Nasi Goreng (or at least my version of it, with organic pea protein pieces, kecap manis, lots of spices and fresh vegetables...
fediscience.org/@Ruth_Mottram/
@… - "Veganism wasn’t meant to be like other food fads: it was intended to benefit not just the individual but society as a whole. Eating less meat would reduce greenhouse gas emissions and animal suffering. The evidence is clear. But right now we can’t be bothered "
Why the vegans lost on.ft.com/4mzHgFz

@raiders@darktundra.xyz
2025-07-03 10:47:53

How an Underrated Offseason Move Will Benefit the Raiders si.com/nfl/raiders/las-vegas-d

@simon_lucy@mastodon.social
2025-07-31 14:36:37

The Beacon Theatre Robert Plant gig for the benefit of Arthur Lee with Ian Hunter is now published as a legitimate album, "Beacon Theatre"
It's been around as a bootleg (an excellent one with a few tweaks and bonks), for years and they're two comparable releases.
The official one doesn't credit Ian Hunter, it was his band that backed Robert Plant, along with Love's guitarist Johnny Ecchols. He also sang on "When Will I Be Loved".

@aral@mastodon.ar.al
2025-06-27 09:44:00

At its core, #CCSignals is an attempt by Creative Commons, a Silicon Valley-based organisation, to legitimise the AI grifts of its donors – Google, Microsoft, and Meta (Zuckerberg).
Creative Commons was always a thinly-veiled attempt at enabling Big Tech data farmers to get more data (that’s why the whole “open data” realm is so well funded/popular – open as in “open for business” not fre…

Donald Trump, along with his two eldest sons, Eric and Donald Jr., formed a series of partnerships, with investors who were willing to bank on his victory, especially in cryptocurrency
Once Mr. Trump won the presidency in November, that approach kicked into overdrive.
His family business announced numerous new deals that would financially benefit Mr. Trump directly,
-- even as he made policy decisions that affected those industries or that involved countries in which the Uni…

@arXiv_physicsplasmph_bot@mastoxiv.page
2025-09-03 10:13:53

Real-Time Applicability of Emulated Virtual Circuits for Tokamak Plasma Shape Control
Pedro Cavestany (STFC Hartree Centre), Alasdair Ross (STFC Hartree Centre), Adriano Agnello (STFC Hartree Centre), Aran Garrod (STFC Hartree Centre), Nicola C. Amorisco (UK Atomic Energy Authority), George K. Holt (STFC Hartree Centre), Kamran Pentland (UK Atomic Energy Authority), James Buchanan (UK Atomic Energy Authority)

@arXiv_csLG_bot@mastoxiv.page
2025-09-01 09:58:42

UniMLR: Modeling Implicit Class Significance for Multi-Label Ranking
V. Bugra Yesilkaynak, Emine Dari, Alican Mertan, Gozde Unal
arxiv.org/abs/2508.21772

@fgraver@hcommons.social
2025-07-31 08:20:46

A Society Governed by Whiny Rich People Throwing Tantrums jacobin.com/2025/07/ultrarich-
💯🔥: “Capital flight is an important political and economic question for anyone thinking about legislating as a lefti…

@pixelcode@social.tchncs.de
2025-07-30 12:49:08

It would benefit all #Firefox-based browsers if #TorBrowser implemented privacy-preserving #VerticalTabs.
(Size-adjustable sidebars inevitably resize the viewport, resulting in odd wid…

@arXiv_physicschemph_bot@mastoxiv.page
2025-07-03 09:12:10

Attosecond Control and Measurement of Chiral Photoionisation Dynamics
Meng Han, Jia-Bao Ji, Alexander Blech, R. Esteban Goetz, Corbin Allison, Loren Greenman, Christiane P. Koch, Hans Jakob W\"orner
arxiv.org/abs/2507.01906

@arXiv_csCL_bot@mastoxiv.page
2025-08-01 10:20:11

DiffLoRA: Differential Low-Rank Adapters for Large Language Models
Alexandre Misrahi, Nadezhda Chirkova, Maxime Louis, Vassilina Nikoulina
arxiv.org/abs/2507.23588

@NFL@darktundra.xyz
2025-07-02 15:44:37

Added touch: Opportunity knocks for these fantasy football players in 2025 espn.com/fantasy/football/stor

@Techmeme@techhub.social
2025-08-25 13:15:40

Docs: xAI terminated its status as a public benefit corporation as of May 9, 2024; XAI Holdings, which houses xAI and X, also doesn't have a PBC designation (Lora Kolodny/CNBC)
cnbc.com/2025/08/25/elon-musk-

@arXiv_csGT_bot@mastoxiv.page
2025-07-02 07:49:20

Horus: A Protocol for Trustless Delegation Under Uncertainty
David Shi, Kevin Joo
arxiv.org/abs/2507.00631 arxiv.org/…

@threeofus@mstdn.social
2025-06-30 09:31:01

1/ I’m #depressed. Turns out dropping my antidepressant dose was not a good idea. I went back up from 50 to 75mg / day of Sertraline last week. Will take a few weeks to feel the benefit. The max dose I take is 100. Maybe I’ll increase to that in 4 weeks’ time. It’s crazy the difference a little pill makes. I’m very lethargic. I slept for 3 hours yesterday afternoon, but could have stayed in bed…

@arXiv_astrophCO_bot@mastoxiv.page
2025-07-02 08:52:10

Lya2pcf: an efficient pipeline to estimate two- and three-point correlation functions of the Lyman-$\alpha$ forest
Josue De-Santiago, Rafael Guti\'errez-Balboa, Gustavo Niz, Alma X. Gonz\'alez-Morales
arxiv.org/abs/2507.00129

@timbray@cosocial.ca
2025-08-26 16:10:23

On several input channels I’m seeing rage about the new Android Phone and Contacts apps, big redesigns with no warning. If there are people for whom you provide tech support, e.g. elderly relatives, you'd better get in touch and teach them how to answer a phone call, because tapping the green button doesn’t work any more. Also it’s hard to find call history, and more.
Some Product Manager needed a promotion. As usual, when doing cost/benefit analysis, the pain inflicted on users w…

@arXiv_eessIV_bot@mastoxiv.page
2025-09-03 08:28:03

Promptable Longitudinal Lesion Segmentation in Whole-Body CT
Yannick Kirchhoff, Maximilian Rokuss, Fabian Isensee, Klaus H. Maier-Hein
arxiv.org/abs/2509.00613

@arXiv_csPL_bot@mastoxiv.page
2025-07-31 07:37:31

A Compute-Matched Re-Evaluation of TroVE on MATH
Tobias Sesterhenn, Ian Berlot-Attwell, Janis Zenkner, Christian Bartelt
arxiv.org/abs/2507.22069

@bourgwick@heads.social
2025-08-27 14:09:08

the grateful dead's springfield creamery benefit, today in 1972. come for the galaxy-bending "dark star" & stay for organic yogurt (& huey lewis, on driving the truck), naked pole guy (his name was gary jensen!), & al strobel, the 1-armed man from "twin peaks." also LSD.

flyer for Grateful Dead in veneta
Jerry Garcia with Naked Pole Guy
Al Strobel smokes cigarette
Al Strobel wields one armed chainsaw
@primonatura@mstdn.social
2025-08-01 16:00:22

"Revolutionary city-scanning satellite from UK-France partnership set to transform climate monitoring"
#UK #UnitedKingdom #France

@lightweight@mastodon.nzoss.nz
2025-08-27 21:03:23
Content warning: NZ Tourism, Cruise ships

Seems to me that rnz.co.nz/news/thedetail/57126 is good news for pretty much everyone. Precious few people in Aotearoa benefit from cruise ships visiting. Many, however, …

@azonenberg@ioc.exchange
2025-08-27 16:08:59

Curious: do banks in other countries completely shut down automated processing operations on weekends/holidays, or is that just a US thing? Why is the US still like this?
Like, if I want to move money from one bank to another on a weekend it normally won't execute until the following Monday (plus additional clearance delays etc).
There's no logical reason for this, it's not like there's a social benefit to the process and I doubt the banks shut down their datacent…

@tarah@infosec.exchange
2025-08-08 11:29:00

New Post: Players! Welcome to the 4th Annual EFF Benefit Poker Tournament at Defcon 33! tarah.org/2025/08/08/players-w

@zachleat@zachleat.com
2025-07-25 17:48:10

@… @… since I have the benefit of comparing test output against other libraries, I can just generate random input and see what happens right 😅

@andres4ny@social.ridetrans.it
2025-07-27 05:07:03

Between the "supply" charges, "delivery" charges, sales taxes, and system benefit charge, I calculate that my electricity (from Con Ed, in NYC) costs about 33.5 cents per kWh (plus roughly $30/mo basic service fee & tax surcharges).
Which is.. a lot. I calculate that my two (classic) CR boxes at 50W each cost about $25/mo to run 24/7, so switching to more efficient fans (PC fans or efficient hepa filter units) would pay for themselves reasonably quickly.
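The arithmetic in this post can be sanity-checked in a few lines (assumed inputs, taken from the post: two 50 W boxes running 24/7, 33.5 ¢/kWh all-in, and a 30-day month):

```python
# Back-of-the-envelope check of the post's figures (assumptions: two 50 W
# boxes running 24/7, 33.5 cents/kWh all-in, 30-day month).
watts = 2 * 50
kwh_per_month = watts * 24 * 30 / 1000   # 72 kWh per month
cost = kwh_per_month * 0.335             # monthly running cost in dollars
print(f"~${cost:.2f}/month")             # ~$24.12, i.e. "about $25/mo"
```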

@seeingwithsound@mas.to
2025-06-27 11:31:46

Parents of young blind children face ethical dilemmas both when applying or withholding sensory substitution options: children have greater brain plasticity, but long-term benefit of sensory substitution in daily life has not yet been firmly established, while mastering sensory substitution as an adult likely becomes harder and with more limited results. #ethics

@lysander07@sigmoid.social
2025-08-26 09:55:22

The benefit of a multidisciplinary community as in #CORDI2025 is that you also have chemists on site assisting you to start up the conference with a loud booom! :)
nfdi.de/cordi-2025/?lang=en

York Sure-Vetter, director of NFDI, together with chemistry professor Sonja Herres-Pawlis, about to start up the conference with a small hydrogen-oxygen explosion....
Formulas illustrating the two experiments presented at the start of CORDI. The upper one is the hydrogen-oxygen reaction producing water. The second one is supposed to create smoke. Probably some trained chemists might assist us to explain it :)
@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-09-01 08:06:53

Simulation of Radiation Damage on [M(COD)Cl]$_2$ using Density Functional Theory
Nathalie K. Fernando, Nayera Ahmed, Katherine Milton, Claire A. Murray, Anna Regoutz, Laura E. Ratcliff
arxiv.org/abs/2508.21170

@cheryanne@aus.social
2025-07-23 22:45:06

The ACT government provides a wide range of grants, funding and other support to individuals, groups, and organisations that benefit Canberrans and the Canberra community.
act.gov.au/money-and-tax/grant

@pbloem@sigmoid.social
2025-06-26 10:56:22

After training, we finetune on real-world data. We observe that the models that have been pre-trained with noise converge very quickly compared to a baseline which is trained from scratch.
Moreover, on the other datasets, the UP models retain their zero-shot performance during finetuning. This suggests that there may be a generalization benefit to using a UP model.
All this is at the expense of much longer training, but that cost can be amortized over many tasks.

The results for the finetuning experiment. Six datasets (linux, code, dyck, wp, german and ndfa) and the performance of four models: the baseline and UP trained models and two finetuning datasets. 

The results show that the UP models converge quicker, and that they retain most of their zero-shot performance on the other datasets.
@wyri@toot-toot.wyrihaxim.us
2025-06-26 20:30:08

And then look at any other
potentional @… package that would benefit from this, which will probably mean github.com/friends-of-reactphp

@ruth_mottram@fediscience.org
2025-08-29 19:08:17

"Veganism wasn’t meant to be like other food fads: it was intended to benefit not just the individual but society as a whole. Eating less meat would reduce greenhouse gas emissions and animal suffering. The evidence is clear. But right now we can’t be bothered "
Why the vegans lost on.ft.com/4mzHgFz

@steve@s.yelvington.com
2025-07-25 13:09:37

Meanwhile, all food prices are going up.
economist.com/united-states/20

@jtk@infosec.exchange
2025-07-13 16:44:29

I'm frequently despondent when I see organizations remove or hide once public Internet operational status and trending data.
At one time, lots of networks used to publish a variety of status and traffic summaries. The non-profit Internet2 for example was exemplary in this regard. Cloudflare, a commercial organization, now provides far more public benefit data than they do.
A big loss earlier this year was the once public Equinix network status page -

@gwire@mastodon.social
2025-08-28 06:55:54

There was a piece on morning TV that started “Parents are being urged to apply [for a new childcare benefit]” and as usual I said (to myself) “can we stop using administrative burden to control demand?”
But then the first question the presenter asked the minister was “why is this not automatic?”

@detondev@social.linux.pizza
2025-06-23 19:13:42

If we must grind up human flesh and bones in the industrial machine that we call modern America, then, before God, I assert that those who consume the coal, and you and I who benefit from that service because we live in comfort, owe cutesy little corporate memphis instagram shoutouts to those men first, and we owe nothing to their families if they die

@benb@osintua.eu
2025-07-11 16:57:40

Europe will also benefit from US-Ukraine minerals investment fund, Americans say: benborges.xyz/2025/07/11/europ

@arXiv_csSE_bot@mastoxiv.page
2025-07-31 09:53:01

AutoCodeSherpa: Symbolic Explanations in AI Coding Agents
Sungmin Kang, Haifeng Ruan, Abhik Roychoudhury
arxiv.org/abs/2507.22414

@tiotasram@kolektiva.social
2025-07-29 11:17:44

#ContemporaryContradictions #HashTagGames
Rules: include as many contradictions as you'd like. Can be profound or trivial. Each contradiction is stated via exactly 1 or 2 questions, no statements and not more than 2 questions. Try to group yours into a single post, rather than one post per contradiction, so that it's easier to see more voices when scrolling the hash tag.
Why does "race" work according to the "one drop rule" if you have Black ancestors, but according to "blood quantum" if you have Indigenous ancestors? Who benefits from this arrangement?
Why do we think of seeds as merely a reproduction mechanism for trees, instead of thinking of trees as merely a reproduction mechanism for seeds, especially since some plants can spend millennia as seeds but can survive for only part of a year after sprouting? Are metabolic activity or structural complexity really so important?
If Columbus discovered America, did Batu Khan discover Europe? What is an "Age of Discovery?"
Why don't corporations in the US try to lobby the government for a single-payer healthcare system where the government foots the bill for healthcare instead of companies paying to deeply subsidize their employees' healthcare? What benefit do they gain that's worth that cost, which in other countries is paid for via taxes?
Why is the cost of renting (which gets you zero equity) anywhere close to the cost of a mortgage (which eventually gets you ownership)? If the costs are similar but the benefits are so different, why does anyone ever rent?
Why do we obsess over the fruit/vegetable classification of tomatoes, but not corn, okra, cucumbers, zucchini, etc.?

@aral@mastodon.ar.al
2025-07-23 18:33:19

Does it even have to be said that Microsoft should not have anything to do with an EU sovereign tech fund or benefit from it financially in any way?
Probably, yes.
Over and over again. social.ayco.io/@ayo/1149039402

@ruth_mottram@fediscience.org
2025-08-29 18:18:31

I was so annoyed about the implications of this article I decided to post photos of the delicious nutritious #vegan food I'm preparing for my family for the next few days. It's really not hard. Today Nasi Goreng (or at least my version of it, with organic pea protein pieces, katsup manis, lots of spices and fresh vegetables...
fediscience.org/@Ruth_Mottram/
@… - "Veganism wasn’t meant to be like other food fads: it was intended to benefit not just the individual but society as a whole. Eating less meat would reduce greenhouse gas emissions and animal suffering. The evidence is clear. But right now we can’t be bothered "
Why the vegans lost on.ft.com/4mzHgFz

@arXiv_econGN_bot@mastoxiv.page
2025-09-01 08:56:23

Non-Take-Up of Unemployment Benefit II in Germany: A Longitudinal Perspective Using Administrative Data
Jürgen Wiemers
arxiv.org/abs/2508.21535

@seeingwithsound@mas.to
2025-06-27 11:13:57

Parents of young blind children face ethical dilemmas both when applying or withholding sensory substitution options: children have greater brain plasticity, but long-term benefit of sensory substitution in daily life has not yet been firmly established. #ethics #NeuroEthics

@Techmeme@techhub.social
2025-06-20 15:55:49

Silicon Valley is pushing senators to follow the House in reviving a favorable tax benefit that disappeared because of a US tax law Section 174 change in 2017 (Axios)
axios.com/2025/06/20/silicon-v

@tarah@infosec.exchange
2025-08-08 06:26:37

NOTE THE VENUE CHANGE #DEFCON Folks! the @… 4th annual poker benefit tournament starts at high noon Friday August 8th at the Planet Hollywood

@arXiv_csRO_bot@mastoxiv.page
2025-07-30 10:00:31

A Systematic Robot Design Optimization Methodology with Application to Redundant Dual-Arm Manipulators
Dominic Guri, George Kantor
arxiv.org/abs/2507.21896

@cowboys@darktundra.xyz
2025-06-19 11:03:45

Cowboy Roundup: Most important offensive players, Roster rule can benefit Dallas si.com/nfl/cowboys/news/dallas

@raiders@darktundra.xyz
2025-06-29 19:37:36

Critical Change in Approach Will Benefit the Raiders si.com/nfl/raiders/las-vegas-p

@ruth_mottram@fediscience.org
2025-08-28 20:40:22

"Veganism wasn’t meant to be like other food fads: it was intended to benefit not just the individual but society as a whole. Eating less meat would reduce greenhouse gas emissions and animal suffering. The evidence is clear. But right now we can’t be bothered "
Why the vegans lost on.ft.com/4mzHgFz

Federal health officials are seeking to launch a “bold, edgy” public service campaign to warn Americans of the dangers of ultra-processed foods in social media, transit ads, billboards and even text messages.
-- And they potentially stand to profit off the results.
Calley Means, a key adviser to Bobby Kennedy, could directly benefit from one of the campaign’s stated aims:
popularizing “technology like wearables as cool, modern tools for measuring diet impact and taking contr…

@memeorandum@universeodon.com
2025-07-21 23:55:42

Engineer Pleads Guilty to Stealing for Chinese Government's Benefit Trade Secret Technology Designed for Missile Launch and Detection (US Department of Justice)
justice.gov/opa/pr/engineer-pl
memeorandum.com/250721/p130#a2

@aral@mastodon.ar.al
2025-07-22 15:37:20

“Solidarity activities at the global level should be strategic and impactful. They should focus on disrupting all components of the supply chain that benefit the Israeli occupation in general and settler colonialism in particular. This means citizens around the world in different sectors of society can contribute to the struggle for Palestine as both producers and consumers by heeding the call to boycott and divest from Israel.
Direct actions from the working class are crucial. Workers…

@tiotasram@kolektiva.social
2025-07-28 10:41:42

How popular media gets love wrong
Had some thoughts in response to a post about loneliness on here. As the author emphasized, reassurances from people who got lucky are not terribly comforting to those who didn't, especially when the person who was lucky had structural factors in their favor that made their chances of success much higher than those in their audience. So: these are just my thoughts, and may not have any bearing on your life. I share them because my experience challenged a lot of the things I was taught to believe about love, and I think my current beliefs are both truer and would benefit others seeking companionship.
We're taught in many modern societies from an absurdly young age that love is not something under our control, and that dating should be a process of trying to kindle love with different people until we meet "the one" with whom it takes off. In the slightly-less-fairytale corners of modern popular media, we might find an admission that it's possible to influence love, feeding & tending the fire in better or worse ways. But it's still modeled as an uncontrollable force of nature, to be occasionally influenced but never tamed. I'll call this the "fire" model of love.
We're also taught (and non-boys are taught more stringently) a second contradictory model of love: that in a relationship, we need to both do things and be things in order to make our partner love us, and that if we don't, our partner's love for us will wither, and (especially if you're not a boy) it will be our fault. I'll call this the "appeal" model of love.
Now obviously both of these cannot be totally true at once, and plenty of popular media centers this contradiction, but there are really very few competing models on offer.
In my experience, however, it's possible to have "pre-meditated" love. In other words, to decide you want to love someone (or at least, try loving them), commit to that idea, and then actually wind up in love with them (and them with you, although obviously this second part is not directly under your control). I'll call this the "engineered" model of love.
Now, I don't think that the "fire" and "appeal" models of love are totally wrong, but I do feel their shortcomings often suggest poor & self-destructive relationship strategies. I do think the "fire" model is a decent model for *infatuation*, which is something a lot of popular media blur into love, and which drives many (but not all) of the feelings we normally associate with love (even as those feelings have other possible drivers too). I definitely experienced strong infatuation early on in my engineered relationship (ugh that sounds terrible but I'll stick with it; I promise no deception was involved). I continue to experience mild infatuation years later that waxes and wanes. It's not a stable foundation for a relationship but it can be a useful component of one (this at least popular media depicts often).
I'll continue these thoughts in a reply, but it might take a bit to get to it.
#relationships

@cowboys@darktundra.xyz
2025-08-18 21:24:18

Patriots positioned to benefit most if Cowboys trade Micah Parsons sportingnews.com/us/nfl/new-en

@raiders@darktundra.xyz
2025-07-31 17:42:42

The Biggest Beneficiary of the Raiders' Offseason Moves si.com/nfl/raiders/las-vegas-t

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back onto one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@Techmeme@techhub.social
2025-08-22 21:22:04

Apple sues ex-staffer Chen Shi for allegedly stealing trade secrets to benefit Oppo before leaving Apple in June, claims Oppo knew of and encouraged his actions (Newley Purnell/Bloomberg)
bloomberg.com/news/articles/20

@raiders@darktundra.xyz
2025-08-30 20:11:11

Trade Radar: The Raiders’ Pursuit of Pass Rush Upgrade raiderramble.com/2025/08/27/tr

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. 
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect the overall outcomes of objection to be positive enough that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@Techmeme@techhub.social
2025-07-22 01:35:46

Leaked memo: CEO Dario Amodei told staff Anthropic plans to seek UAE and Qatar funding, likely enriching "dictators", says a "no bad person" rule is impractical (Kylie Robison/Wired)
wired.com/story/anthropic-dari