Filing: FTX argues it should pay the IRS a $200M priority tax claim and a $685M subordinated claim; IRS had said FTX owed $44B in taxes then revised to $24B (MK Manoylov/The Block)
https://www.theblock.co/post/298456/ftx-wa
🔨 FBI opens criminal investigation into Baltimore bridge collapse
#fbi
Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following
Suyeon Shin, Sujin Jeon, Junghyun Kim, Gi-Cheon Kang, Byoung-Tak Zhang
https://arxiv.org/abs/2404.15190
Self-supervised learning for classifying paranasal anomalies in the maxillary sinus
Debayan Bhattacharya, Finn Behrendt, Benjamin Tobias Becker, Lennart Maack, Dirk Beyersdorff, Elina Petersen, Marvin Petersen, Bastian Cheng, Dennis Eggert, Christian Betz, Anna Sophie Hoffmann, Alexander Schlaefer
https://arxiv.org/abs/2404.18599 https://arxiv.org/pdf/2404.18599
arXiv:2404.18599v1 Announce Type: new
Abstract: Purpose: Paranasal anomalies, frequently identified in routine radiological screenings, exhibit diverse morphological characteristics. Due to this diversity, supervised learning methods require a large labelled dataset exhibiting diverse anomaly morphology. Self-supervised learning (SSL) can be used to learn representations from unlabelled data. However, there are no SSL methods designed for the downstream task of classifying paranasal anomalies in the maxillary sinus (MS).
Methods: Our approach uses a 3D Convolutional Autoencoder (CAE) trained in an unsupervised anomaly detection (UAD) framework. Initially, we train the 3D CAE to reduce reconstruction errors when reconstructing normal MS images. Then, this CAE is applied to an unlabelled dataset to generate coarse anomaly locations by creating residual MS images. Following this, a 3D Convolutional Neural Network (CNN) reconstructs these residual images, which forms our SSL task. Lastly, we fine-tune the encoder part of the 3D CNN on a labelled dataset of normal and anomalous MS images.
Results: The proposed SSL technique exhibits superior performance compared to existing generic self-supervised methods, especially in scenarios with limited annotated data. When trained on just 10% of the annotated dataset, our method achieves an Area Under the Precision-Recall Curve (AUPRC) of 0.79 for the downstream classification task. This performance surpasses other methods, with BYOL attaining an AUPRC of 0.75, SimSiam at 0.74, SimCLR at 0.73 and Masked Autoencoding using SparK at 0.75.
Conclusion: A self-supervised learning approach that inherently focuses on localizing paranasal anomalies proves to be advantageous, particularly when the subsequent task involves differentiating normal from anomalous maxillary sinuses. Access our code at https://github.com/mtec-tuhh/self-supervised-paranasal-anomaly
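The core of the pipeline above is the residual step: a CAE trained only on normal sinuses reconstructs normal tissue well, so the voxel-wise absolute difference between input and reconstruction highlights anomalies. A minimal NumPy sketch of that residual computation, using a toy volume and a trivial stand-in for the trained 3D CAE (the constant-mean "reconstruction" here is purely illustrative, not the paper's model):

```python
import numpy as np

def residual_volume(vol, reconstruct):
    # Voxel-wise absolute reconstruction error -> coarse anomaly map,
    # as used to build the residual MS images for the SSL task.
    return np.abs(vol - reconstruct(vol))

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, size=(8, 8, 8))  # toy "maxillary sinus" volume
vol[2:4, 2:4, 2:4] += 1.0                   # synthetic anomaly region

# Stand-in reconstruction: the volume mean everywhere. A trained 3D CAE
# (low error on normal tissue, high error on anomalies) would replace this.
recon = lambda v: np.full_like(v, v.mean())

res = residual_volume(vol, recon)
peak = np.unravel_index(res.argmax(), res.shape)
print(res.shape)  # (8, 8, 8)
print(peak)       # falls inside the synthetic anomaly block
```

Even with this crude reconstruction, the residual peaks inside the inserted anomaly, which is the property the paper exploits to localize anomalies without labels.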
The hopefully final version of the Barduino. I really wanted to keep it as through-hole as possible, so everything except for the ATmega328PB, the FTDI IC, and the USB connector is through-hole. Pretty happy with how it turned out, now it's time to get the boards ordered and make sure everything works...then order a bunch more...lol #electronics
👉🏽 Key Bridge Collapse Shipowner Could Legally Skirt Liability
#law
The BBC splits off its India operations to create a new independent media company, Collective Newsroom, following regulatory scrutiny over its Modi documentary (John Reed/Financial Times)
https://t.co/1OlyA0r8TZ
Transmission Channel Analysis in Dynamic Models
Enrico Wegner, Lenard Lieb, Stephan Smeekes, Ines Wilms
https://arxiv.org/abs/2405.18987