Tootfinder

Opt-in global Mastodon full text search. Join the index!

@memeorandum@universeodon.com
2026-01-25 15:55:38

Eleanor Holmes Norton won't seek reelection as DC delegate (Nicholas Wu/Politico)
politico.com/news/2026/01/25/e
memeorandum.com/260125/p27#a26

@sean@scoat.es
2026-03-05 22:01:27

This week, @… figured out the SAML magic to let us delegate AWS account logins to an IAM Identity Center in another account, so now we have a MUCH better ops login experience.
This is good stuff.

@servelan@newsie.social
2026-03-30 06:30:04

‘Gave up my career to leak this’: UN delegate sounds alarm on nukes with desperate plea - Raw Story
rawstory.com/nuclear-weapons-2

@fluchtkapsel@nerdculture.de
2026-01-29 07:57:40

Another day, another IPv6 question. I'm on a Hetzner cloud VM with a static IPv6 /64 prefix¹. The system uses systemd-networkd for network management.
My eth0 does not seem to receive any RAs at all, but I'd like to delegate the prefix downstream to a bridge interface br0 on the same host. I have an [IPv6Prefix] section on eth0 with Prefix=…, Assign=yes, and Token=static:::1. It works: eth0 gets the ::1 address for this prefix.
Is there a way for br0 to get thi…
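One common approach to the setup described above (a sketch only, not verified on Hetzner; the prefix 2001:db8:1234:5678::/64 and the ::2 token are placeholders, and section/option names follow systemd.network conventions) is to move the [IPv6Prefix] section from eth0 to br0's .network file and enable IPv6SendRA there, so br0 itself gets an address from the prefix and hosts behind the bridge can autoconfigure via Router Advertisements:

```ini
# /etc/systemd/network/br0.network -- sketch; prefix and token are placeholders
[Match]
Name=br0

[Network]
# Announce the prefix on br0 via Router Advertisements.
IPv6SendRA=yes

[IPv6Prefix]
Prefix=2001:db8:1234:5678::/64
# Also assign an address from this prefix to br0 itself.
Assign=yes
# Use a token distinct from the ::1 that eth0 currently takes.
Token=static:::2
```

Since Hetzner routes the whole /64 to the VM over eth0, the prefix does not need to stay on eth0; eth0 can keep only its link-local address for reaching the upstream gateway. Whether that matches this VM's routing setup is an assumption worth testing.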

@memeorandum@universeodon.com
2026-01-28 10:01:00

Eleanor Holmes Norton confirms her retirement as DC delegate (Nicholas Wu/Politico)
politico.com/live-updates/2026
memeorandum.com/260128/p6#a260

@arXiv_econTH_bot@mastoxiv.page
2026-03-31 08:06:17

A Revealed Preference Framework for AI Alignment
Elchin Suleymanov
arxiv.org/abs/2603.27868 arxiv.org/pdf/2603.27868 arxiv.org/html/2603.27868
arXiv:2603.27868v1 Announce Type: new
Abstract: Human decision makers increasingly delegate choices to AI agents, raising a natural question: does the AI implement the human principal's preferences or pursue its own? To study this question using revealed preference techniques, I introduce the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's. I show that the AI's alignment (similarity of human and AI preferences) can be generically identified in two settings: the laboratory setting, where both human and AI choices are observed, and the field setting, where only AI choices are observed.
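The mixture described in the abstract can be written out explicitly (a sketch in standard Luce-rule notation; the mixing weight \alpha and the labels u_H, u_{AI} are my own, not necessarily the paper's):

```latex
% A Luce rule picks x from menu A with probability proportional to a weight u(x).
% The AI's observed choices mix the human's rule u_H (with weight \alpha)
% and the AI's own rule u_{AI} (with weight 1-\alpha).
P(x \mid A) \;=\; \alpha \,\frac{u_H(x)}{\sum_{y \in A} u_H(y)}
\;+\; (1-\alpha)\,\frac{u_{AI}(x)}{\sum_{y \in A} u_{AI}(y)},
\qquad x \in A .
```

Under this reading, "alignment" is about how close u_H and u_{AI} are; the laboratory setting additionally observes choices generated from u_H alone, while the field setting sees only the mixture.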