2026-01-25 15:55:38
Eleanor Holmes Norton won't seek reelection as DC delegate (Nicholas Wu/Politico)
https://www.politico.com/news/2026/01/25/eleanor-holmes-norton-retires-dc-delegate-00731986
http://www.memeorandum.com/260125/p27#a260125p27
This week, @… figured out the SAML magic to let us delegate AWS account logins to an IAM Identity Center in another account, so now we have a MUCH better ops login experience.
This is good stuff.
‘Gave up my career to leak this’: UN delegate sounds alarm on nukes with desperate plea - Raw Story
https://www.rawstory.com/nuclear-weapons-2676635699/
Another day, another IPv6 question. I'm on a Hetzner cloud VM. I have a static IPv6 /64 subnet / prefix¹. The system uses systemd-networkd for network management.
My eth0 does not seem to receive any RA at all, but I'd like to delegate the prefix downstream to a bridge interface br0 on the same host. I have an [IPv6Prefix] section on eth0 with Prefix=…, Assign=yes, and Token=static:::1. It works: eth0 gets the ::1 address for this prefix.
Is there a way for br0 to get thi…
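The setup described above can be sketched as a systemd-networkd .network file; this is a minimal illustration, not the poster's actual config, and the documentation prefix 2001:db8:1234:5678::/64 stands in for the elided Prefix= value.

```ini
# /etc/systemd/network/10-eth0.network (hypothetical sketch)
[Match]
Name=eth0

[Network]
# Announce the statically routed prefix on this link.
IPv6SendRA=yes

[IPv6Prefix]
# Placeholder for the Hetzner-assigned /64; substitute the real prefix.
Prefix=2001:db8:1234:5678::/64
# Assign an address from this prefix to eth0 itself…
Assign=yes
# …using the fixed interface identifier ::1, so eth0 gets <prefix>::1.
Token=static:::1
```

Delegating the same prefix to br0 would need an equivalent [IPv6Prefix] (or DHCPv6 prefix delegation) stanza in br0's own .network file; which variant applies depends on the networkd version in use.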
Eleanor Holmes Norton confirms her retirement as DC delegate (Nicholas Wu/Politico)
https://www.politico.com/live-updates/2026/01/27/congress/eleanor-holmes-norton-retirement-statement-00749099
http://www.memeorandum.com/260128/p6#a260128p6
A Revealed Preference Framework for AI Alignment
Elchin Suleymanov
https://arxiv.org/abs/2603.27868 https://arxiv.org/pdf/2603.27868 https://arxiv.org/html/2603.27868
arXiv:2603.27868v1 Announce Type: new
Abstract: Human decision makers increasingly delegate choices to AI agents, raising a natural question: does the AI implement the human principal's preferences or pursue its own? To study this question using revealed preference techniques, I introduce the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's. I show that the AI's alignment (similarity of human and AI preferences) can be generically identified in two settings: the laboratory setting, where both human and AI choices are observed, and the field setting, where only AI choices are observed.
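The mixture of two Luce rules described in the abstract can be sketched numerically. This is an illustrative reading of the model, not the paper's code: the option weights and the mixing parameter alpha (the alignment weight on the human's rule) are hypothetical.

```python
def luce_choice_probs(weights):
    """Luce rule: each option is chosen with probability proportional
    to its positive weight."""
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

def luce_mixture_probs(human_w, ai_w, alpha):
    """Luce Alignment Model sketch: AI choice probabilities are a
    convex mixture of a Luce rule under the human's weights and a
    Luce rule under the AI's own weights, with mixing weight alpha."""
    p_human = luce_choice_probs(human_w)
    p_ai = luce_choice_probs(ai_w)
    return {x: alpha * p_human[x] + (1 - alpha) * p_ai[x] for x in human_w}

# Hypothetical menu of three options with differing human/AI weights.
human_w = {"a": 2.0, "b": 1.0, "c": 1.0}
ai_w = {"a": 1.0, "b": 1.0, "c": 2.0}
probs = luce_mixture_probs(human_w, ai_w, alpha=0.5)
```

With alpha = 1 the mixture collapses to the human's Luce rule; identifying alpha from observed choice frequencies is the identification question the paper studies in its laboratory and field settings.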