Tootfinder

Opt-in global Mastodon full text search. Join the index!

@simon_lucy@mastodon.social
2025-08-15 09:47:09

As Putin is more aware than most, flattery and threats are his chief tactics against Trump, along with trapping Trump in private into saying yes to whatever he wants.
But they're also tactics Ukraine and its Allies can use against the TACO President.
Trump wants out from under; an empty promise to share Ukraine's minerals won't cut it with Putin, as he'll say they're his anyway.
A preemptive missile strike from Provideniyan Separatists on the t…

@arXiv_csCL_bot@mastoxiv.page
2025-07-14 09:58:42

DocPolarBERT: A Pre-trained Model for Document Understanding with Relative Polar Coordinate Encoding of Layout Structures
Benno Uthayasooriyar, Antoine Ly, Franck Vermet, Caio Corro
arxiv.org/abs/2507.08606

@arXiv_csMM_bot@mastoxiv.page
2025-07-16 07:36:31

MultiVox: Benchmarking Voice Assistants for Multimodal Interactions
Ramaneswaran Selvakumar, Ashish Seth, Nishit Anand, Utkarsh Tyagi, Sonal Kumar, Sreyan Ghosh, Dinesh Manocha
arxiv.org/abs/2507.10859

@midtsveen@social.linux.pizza
2025-07-13 23:26:20

I have recently become aware that I tend to change my profile picture based on my mood.
As someone who is #ActuallyAutistic, I have only just discovered this particular habit of mine.
The more you know!
#Autistic

@arXiv_csGR_bot@mastoxiv.page
2025-07-14 07:40:12

FlowDrag: 3D-aware Drag-based Image Editing with Mesh-guided Deformation Vector Flow Fields
Gwanhyeong Koo, Sunjae Yoon, Younghwan Lee, Ji Woo Hong, Chang D. Yoo
arxiv.org/abs/2507.08285

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them: "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@mgorny@social.treehouse.systems
2025-08-11 11:36:26

"""
All of which was of the utmost importance for subsequent developments in the medicine of the mind. In its positivist incarnation, this was little more than the combination of the two experiences that classicism had juxtaposed without ever joining them together: a social, normative and dichotomous experience of madness that revolved entirely around the imperative of confinement, formulated in a style as simple as ‘yes or no’, ‘dangerous or harmless’, and ‘good or not good for confinement’, and a finely differentiated, qualitative, juridical experience, well aware of limits and degrees, which looked into all the aspects of the behaviour of the subject for the polymorphous incarnations that insanity might assume. The psychopathology of the nineteenth century (and perhaps our own too, even now) believes that it orients itself and takes its bearings in relation to a homo natura, or a normal man pre-existing all experience of mental illness. Such a man is in fact an invention, and if he is to be situated, it is not in a natural space, but in a system that identifies the socius to the subject of the law. Consequently a madman is not recognised as such because an illness has pushed him to the margins of normality, but because our culture situates him at the meeting point between the social decree of confinement and the juridical knowledge that evaluates the responsibility of individuals before the law. The ‘positive’ science of mental illness and the humanitarian sentiments that brought the mad back into the realm of the human were only possible once that synthesis had been solidly established. They could be said to form the concrete a priori of any psychopathology with scientific pretensions.
"""
(Michel Foucault, History of Madness)

@arXiv_quantph_bot@mastoxiv.page
2025-08-12 11:57:03

Characterization of syndrome-dependent logical noise in detector regions
Matthew Girling, Ben Criger, Cristina Cirstoiu
arxiv.org/abs/2508.08188

@arXiv_csSE_bot@mastoxiv.page
2025-08-11 08:47:40

A Survey on Task Scheduling in Carbon-Aware Container Orchestration
Jialin Yang, Zainab Saad, Jiajun Wu, Xiaoguang Niu, Henry Leung, Steve Drew
arxiv.org/abs/2508.05949

@metacurity@infosec.exchange
2025-08-05 13:37:18

As we head into a blizzard of infosec news, stay ahead of the curve by checking out today's Metacurity for the latest developments, including
--Ukraine claims major hack of Russian nuclear submarine,
--SonicWall is aware of flaw exploitation,
--Perplexity is stealthily evading robots.txt,
--FinCEN warns of crypto ATM crimes,
--Vietnamese hackers are targeting thousands,
--Informants' data stolen in a Louisiana sheriff's office ransomware attack, …

@arXiv_csIR_bot@mastoxiv.page
2025-08-11 09:41:59

G-UBS: Towards Robust Understanding of Implicit Feedback via Group-Aware User Behavior Simulation
Boyu Chen, Siran Chen, Zhengrong Yue, Kainan Yan, Chenyun Yu, Beibei Kong, Cheng Lei, Chengxiang Zhuo, Zang Li, Yali Wang
arxiv.org/abs/2508.05709

@arXiv_eessSP_bot@mastoxiv.page
2025-07-08 08:23:50

Specific Absorption Rate-Aware Multiuser MIMO Assisted by Fluid Antenna System
Yuqi Ye, Li You, Hao Xu, Ahmed Elzanaty, Kai-Kit Wong, Xiqi Gao
arxiv.org/abs/2507.03351

@arXiv_csGT_bot@mastoxiv.page
2025-08-08 08:15:22

Toward Energy and Location-Aware Resource Allocation in Next Generation Networks
Mandar Datar (CEA-LETI), Mattia Merluzzi (CEA-LETI)
arxiv.org/abs/2508.05109

@arXiv_csHC_bot@mastoxiv.page
2025-08-08 09:50:12

Discrepancy-Aware Contrastive Adaptation in Medical Time Series Analysis
Yifan Wang, Hongfeng Ai, Ruiqi Li, Maowei Jiang, Ruiyuan Kang, Jiahua Dong, Cheng Jiang, Chenzhong Li
arxiv.org/abs/2508.05572

@arXiv_csPL_bot@mastoxiv.page
2025-08-06 07:50:30

SAGE-HLS: Syntax-Aware AST-Guided LLM for High-Level Synthesis Code Generation
M Zafir Sadik Khan, Nowfel Mashnoor, Mohammad Akyash, Kimia Azar, Hadi Kamali
arxiv.org/abs/2508.03558

@drbruced@aus.social
2025-07-01 22:29:50

It’s time to reclaim the word “Luddite”. Interesting report showing that Australians are getting more aware of the risks of AI with more exposure to the technology.
theguardian.com/australia-news

@ThatHoarder@mastodon.online
2025-06-30 18:13:16

I am still full of self-criticism. But I’m also more aware that it is counter-productive. I’m also more aware that maybe I don't always deserve to treat myself quite that badly. overcomecompulsivehoarding.co.

@arXiv_csDB_bot@mastoxiv.page
2025-07-02 07:52:50

Efficient Conformance Checking of Rich Data-Aware Declare Specifications (Extended)
Jacobo Casas-Ramos, Sarah Winkler, Alessandro Gianola, Marco Montali, Manuel Mucientes, Manuel Lama
arxiv.org/abs/2507.00094

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them. This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
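The weighted-vote mechanics in points 1 and 2 can be sketched in code. This is a toy model with hypothetical names, just to make the arithmetic of live vote-switching and the 20% recall trigger concrete, not a serious election system:

```python
from dataclasses import dataclass

@dataclass
class Representative:
    name: str
    votes: int  # N: citizens currently backing this representative

class StateDelegation:
    """Toy model of points 1-2: each state's three reps wield chamber
    power equal to their live backer counts; citizens may switch or
    withdraw their vote between elections."""

    def __init__(self, reps, eligible_voters):
        self.reps = {r.name: r for r in reps}
        self.none_of_the_above = 0
        self.eligible_voters = eligible_voters

    def chamber_votes(self, name):
        # A representative casts N votes in the chamber, where N is
        # their current backer count (not one vote per seat).
        return self.reps[name].votes

    def switch_vote(self, from_rep, to_rep=None):
        # One citizen moves their backing to another elected rep,
        # or withdraws it entirely (to_rep=None -> "none of the above").
        self.reps[from_rep].votes -= 1
        if to_rep is None:
            self.none_of_the_above += 1
        else:
            self.reps[to_rep].votes += 1

    def new_election_triggered(self):
        # Point 2: "none of the above" exceeding 20% of eligible
        # voters triggers a fresh statewide election.
        return self.none_of_the_above > 0.2 * self.eligible_voters
```

Switching votes shifts chamber power immediately rather than waiting for an election cycle, and pushing "none of the above" past the 20% threshold flips `new_election_triggered()` to true.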
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@matematico314@social.linux.pizza
2025-05-31 16:38:47

#LB A loose translation for those who don't speak English:
"During this year's LGBT pride month, straight people should focus less on 'all forms of love are beautiful' and more on 'gay and trans people are in danger.'"
What a sad thing to read, and to see the state of things in the USA. And I fear it's only a matter of time before we end up the same way; our capacity to resist the ext… madness

@arXiv_csDC_bot@mastoxiv.page
2025-07-01 09:56:13

QPART: Adaptive Model Quantization and Dynamic Workload Balancing for Accuracy-aware Edge Inference
Xiangchen Li, Saeid Ghafouri, Bo Ji, Hans Vandierendonck, Deepu John, Dimitrios S. Nikolopoulos
arxiv.org/abs/2506.23934

@axbom@axbom.me
2025-07-29 10:33:25

Something to always be aware of: Many wheelchair users can stand and move around for brief periods of time. Not all wheelchair users are paralysed. Reasons for wheelchair use are numerous and varied.

Some wheelchair users choose not to stand in public because chances are they will be chastised and harassed if they do. With more awareness and understanding this risk can hopefully diminish over time.

For example, if a wheelchair user is able to retrieve their own wheelchair from…

@arXiv_csCV_bot@mastoxiv.page
2025-06-04 15:02:07

This arxiv.org/abs/2505.24380 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCV_…

@arXiv_csLO_bot@mastoxiv.page
2025-08-06 08:34:00

Intensional FOL over Belnap's Billatice for Strong-AI Robotics
Zoran Majkic
arxiv.org/abs/2508.02774 arxiv.org/pdf/2508.02774

@arXiv_hepph_bot@mastoxiv.page
2025-06-09 09:20:02

Accelerating multijet-merged event generation with neural network matrix element surrogates
Tim Herrmann, Timo Janßen, Mathis Schenker, Steffen Schumann, Frank Siegert
arxiv.org/abs/2506.06203

@arXiv_csRO_bot@mastoxiv.page
2025-07-25 09:36:32

G2S-ICP SLAM: Geometry-aware Gaussian Splatting ICP SLAM
Gyuhyeon Pak, Hae Min Cho, Euntai Kim
arxiv.org/abs/2507.18344 arxiv.org/pdf/2507.…

@AimeeMaroux@mastodon.social
2025-05-28 20:52:35
Content warning:

For some reason I'm getting lots of clicks for this old #review I wrote about a piece of Hermes / Perseus #romance. It's a cute idea but the execution could be better as conflicts are introduced that are resolved 5 min later. Sometimes literally.

@arXiv_csCR_bot@mastoxiv.page
2025-07-25 09:29:42

Layer-Aware Representation Filtering: Purifying Finetuning Data to Preserve LLM Safety Alignment
Hao Li, Lijun Li, Zhenghao Lu, Xianyi Wei, Rui Li, Jing Shao, Lei Sha
arxiv.org/abs/2507.18631

@arXiv_csAI_bot@mastoxiv.page
2025-06-27 07:37:09

Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation
Chenkai Sun, Denghui Zhang, ChengXiang Zhai, Heng Ji
arxiv.org/abs/2506.20949

Trump on Monday began to confront the potential economic blowback from his military strikes on Iran,
which threatens to send oil and gas prices soaring at a moment when U.S. consumers are already facing significant financial strains.
The mere prospect of rising energy costs appeared to spook even Trump, who took to social media to push for more domestic drilling
-- while demanding that companies “KEEP OIL PRICES DOWN”; otherwise, they would be “PLAYING RIGHT INTO THE HANDS O…

@berlinbuzzwords@floss.social
2025-05-29 11:00:17

Modern applications require search capabilities that go beyond basic text matching. They must be fast, accurate, personalised and context-aware. At this year's Berlin Buzzwords, Saurabh Singh will demonstrate how OpenSearch’s latest AI/ML enhancements and engine improvements enable organisations to build intelligent, scalable search experiences that meet these evolving needs.
Learn more:

Session title: From Search to Insight: Leveraging OpenSearch for Scalable, AI-Driven Search Experiences
Saurabh Singh
Join us for Berlin Buzzwords on June 15-17 at Kulturbrauerei or online / berlinbuzzwords.de

@arXiv_csSD_bot@mastoxiv.page
2025-05-30 07:23:01

Semantics-Aware Human Motion Generation from Audio Instructions
Zi-An Wang, Shihao Zou, Shiyao Yu, Mingyuan Zhang, Chao Dong
arxiv.org/abs/2505.23465

@arXiv_eessSP_bot@mastoxiv.page
2025-08-07 09:52:24

Less Signals, More Understanding: Channel-Capacity Codebook Design for Digital Task-Oriented Semantic Communication
Anbang Zhang, Shuaishuai Guo, Chenyuan Feng, Hongyang Du, Haojin Li, Chen Sun, Haijun Zhang
arxiv.org/abs/2508.04291

@rperezrosario@mastodon.social
2025-05-22 17:16:22

A GitHub public repository that allows you to add your climate-friendly/aware software to a browsable directory, after a peer review process. Accepted software is sorted into one or more of the following categories:
1. Measurement
2. Carbon Efficiency
3. Carbon Awareness
4. Special Tools
"GitHub's Green Software Directory"

@arXiv_eessIV_bot@mastoxiv.page
2025-06-25 09:14:30

NAADA: A Noise-Aware Attention Denoising Autoencoder for Dental Panoramic Radiographs
Khuram Naveed, Bruna Neves de Freitas, Ruben Pauwels
arxiv.org/abs/2506.19387

@arXiv_csIR_bot@mastoxiv.page
2025-07-03 09:36:50

Enhanced Influence-aware Group Recommendation for Online Media Propagation
Chengkun He, Xiangmin Zhou, Chen Wang, Longbing Cao, Jie Shao, Xiaodong Li, Guang Xu, Carrie Jinqiu Hu, Zahir Tari
arxiv.org/abs/2507.01616

@arXiv_csAR_bot@mastoxiv.page
2025-05-27 07:17:33

Enhancing Test Efficiency through Automated ATPG-Aware Lightweight Scan Instrumentation
Sudipta Paria, Md Rezoan Ferdous, Aritra Dasgupta, Atri Chatterjee, Swarup Bhunia
arxiv.org/abs/2505.19418

@paulbusch@mstdn.ca
2025-06-22 12:08:21

Good Morning #Canada
Before our move to #Innisfil, we spent 16 years in Caledon living on top of the Niagara Escarpment. We were fortunate to take advantage of the hiking trails and scenic beauty of the area with the Bruce Trail, Forks of The Credit Road, and numerous parks within a few kilometers. More Canadians should be aware of this approximately 1,050-kilometre geological feature that today is protected in Ontario as a continuous corridor.
#CanadaIsAwesome
youtu.be/5V5DIgF2yag?si=Z4tTs4

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back into one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@arXiv_statML_bot@mastoxiv.page
2025-07-18 08:49:32

Relation-Aware Slicing in Cross-Domain Alignment
Dhruv Sarkar, Aprameyo Chakrabartty, Anish Chakrabarty, Swagatam Das
arxiv.org/abs/2507.13194

@arXiv_csMA_bot@mastoxiv.page
2025-06-06 07:20:23

From Standalone LLMs to Integrated Intelligence: A Survey of Compound AI Systems
Jiayi Chen, Junyi Ye, Guiling Wang
arxiv.org/abs/2506.04565

@arXiv_eessAS_bot@mastoxiv.page
2025-06-02 10:04:35

This arxiv.org/abs/2505.15004 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_physicsappph_bot@mastoxiv.page
2025-08-07 09:16:04

X-ray thermal diffuse scattering as a texture-robust temperature diagnostic for dynamically compressed solids
P. G. Heighway, D. J. Peake, T. Stevens, J. S. Wark, B. Albertazzi, S. J. Ali, L. Antonelli, M. R. Armstrong, C. Baehtz, O. B. Ball, S. Banerjee, A. B. Belonoshko, C. A. Bolme, V. Bouffetier, R. Briggs, K. Buakor, T. Butcher, S. Di Dio Cafiso, V. Cerantola, J. Chantel, A. Di Cicco, A. L. Coleman, J. Collier, G. Collins, A. J. Comley, F. Coppari, T. E. Cowan, G. Cristoforetti, H…

@arXiv_csRO_bot@mastoxiv.page
2025-06-03 17:29:48

This arxiv.org/abs/2501.02580 has been replaced.
initial toot: mastoxiv.page/@arXiv_csRO_…

@arXiv_csIR_bot@mastoxiv.page
2025-08-01 07:42:21

Are Recommenders Self-Aware? Label-Free Recommendation Performance Estimation via Model Uncertainty
Jiayu Li, Ziyi Ye, Guohao Jian, Zhiqiang Guo, Weizhi Ma, Qingyao Ai, Min Zhang
arxiv.org/abs/2507.23208

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violation, and environmental issues, but at least if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from first principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
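To make the effort:reward point concrete, here's a minimal illustration (a hypothetical function, not from any real codebase) of the kind of defect that type hints surface the moment you annotate:

```python
# Without hints, this bug hides until runtime: the function sometimes
# returns None instead of a list, and callers crash later with a
# TypeError far from the actual cause.
def parse_ids(raw):
    if not raw:
        return None  # inconsistent with the list return below
    return [int(x) for x in raw.split(",")]

# With hints, a checker like mypy flags the mismatch immediately, and
# fixing it usually improves the design too: one consistent return
# type, no None-checks foisted onto every caller.
def parse_ids_typed(raw: str) -> list[int]:
    if not raw:
        return []
    return [int(x) for x in raw.split(",")]
```

Annotating the first version as `-> list[int]` makes a type checker reject the `return None` branch on the spot; the typed version resolves the inconsistency so callers never need a None check.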
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@arXiv_csCL_bot@mastoxiv.page
2025-06-19 08:16:54

PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning
Yuhui Shi, Yehan Yang, Qiang Sheng, Hao Mi, Beizhe Hu, Chaoxi Xu, Juan Cao
arxiv.org/abs/2506.15683

@arXiv_csHC_bot@mastoxiv.page
2025-08-01 09:32:51

Accessibility Scout: Personalized Accessibility Scans of Built Environments
William Huang, Xia Su, Jon E. Froehlich, Yang Zhang
arxiv.org/abs/2507.23190

@arXiv_csSD_bot@mastoxiv.page
2025-07-01 09:47:03

You Sound a Little Tense: L2 Tailored Clear TTS Using Durational Vowel Properties
Paige Tuttösí, H. Henny Yeung, Yue Wang, Jean-Julien Aucouturier, Angelica Lim
arxiv.org/abs/2506.23367

@arXiv_eessSP_bot@mastoxiv.page
2025-07-22 12:01:50

A Novel Domain-Aware CNN Architecture for Faster-than-Nyquist Signaling Detection
Osman Tokluoglu, Enver Cavus, Ebrahim Bedeer, Halim Yanikomeroglu
arxiv.org/abs/2507.15291

@arXiv_csIR_bot@mastoxiv.page
2025-06-02 10:00:05

This arxiv.org/abs/2504.13703 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csCL_bot@mastoxiv.page
2025-06-23 08:16:40

Rethinking LLM Training through Information Geometry and Quantum Metrics
Riccardo Di Sipio
arxiv.org/abs/2506.15830 a…