Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_astrophSR_bot@mastoxiv.page
2025-08-12 10:03:13

Differential rotation of solar α-sunspots and implications for stellar light curves
Emily Joe Lößnitz, Alexander G. M. Pietrow, Hritam Chakraborty, Meetu Verma, Ioannis Kontogiannis, Horst Balthasar, Carsten Denker, Monika Lendl
arxiv.org/abs/2508.08196

@mcdanlj@social.makerforums.info
2025-09-01 20:31:29

This is the third full build iteration of my miniature #QRP unun supporting both EFHW and "random wire" antennas. At this point, it is still fiddly to build, but I'd expect an experienced #HamRadio DIY / homebrew enthusiast to find the build not terribly difficult.
There's nothing…

Photograph of tiny QRP unun showing size of overall assembly including integrated coax. BNC connector for scale! The corners of the box have M3 countersink screws. There are three knobs on the side; a purple knob for the counterpoise connection, and red (high voltage!) knobs for "random wire" and end-fed half wave connections,
Inside view of unun autotransformer showing construction details, next to a ruler for scale, showing that the entire box is less than 3.5 cm across. There is a half-inch ferrite in the box, wrapped in kapton tape, with a 14-turn autotransformer wrapped around it. There are three M3x8 brass screws through the edges, connected to the transformer with ring terminals. The screws are held in place by captive nuts. An RG316 coax segment comes in through the side of the box to feed the autotransformer…
Side view of assembled unun, without a terminal knob, showing how thin it is. The letter "R" is visible on the side of the box, indicating that the adjacent terminal is intended for a "random wire" antenna.
A photo showing a comparison of this version of the unun design to the previous iteration, configured for deployment. The previous iteration was built on a SO-239 bulkhead connector, with a PL-259-to-BNC connector attached.
@arXiv_csRO_bot@mastoxiv.page
2025-07-04 08:45:41

CoInfra: A Large-Scale Cooperative Infrastructure Perception System and Dataset in Adverse Weather
Minghao Ning, Yufeng Yang, Keqi Shu, Shucheng Huang, Jiaming Zhong, Maryam Salehi, Mahdi Rahmani, Yukun Lu, Chen Sun, Aladdin Saleh, Ehsan Hashemi, Amir Khajepour
arxiv.org/abs/2507.02245

@pbloem@sigmoid.social
2025-07-18 09:25:22

Now out in #TMLR:
🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇
There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.
We introduce an RL-based and a GFlowNet-based sampler and show that the approach perf…

A diagram of the GRAPES pipeline. It shows a subgraph being sampled in two steps and being fed to a GNN, with a blue line showing the learning signal. The caption reads Figure 1: Overview of GRAPES. First, GRAPES processes a target node (green) by computing node inclusion probabilities on its 1-hop neighbors (shown by node color shade) with a sampling GNN. Given these probabilities, GRAPES samples k nodes. Then, GRAPES repeats this process over nodes in the 2-hop neighborhood. We pass the sampl…
A results table for node classification on heterophilous graphs. Table 2: F1-scores (%) for different sampling methods trained on heterophilous graphs for a batch size of 256, and a sample size of 256 per layer. We report the mean and standard deviation over 10 runs. The best values among the sampling baselines (all except GAS) are in bold, and the second best are underlined. MC stands for multi-class and ML stands for multi-label classification. OOM indicates out of memory.
Performance of samplers vs sampling size showing that GRAPES generally performs well across sample sizes, while other samplers often show more variance across sample sizes. The caption reads Figure 4: Comparative analysis of classification accuracy across different sampling sizes for sampling baselines and GRAPES. We repeated each experiment five times. The shaded regions show the 95% confidence intervals.
A diagrammatic illustration of a graph classification task used in one of the theorems. The caption reads Figure 9: An example of a graph for Theorem 1 with eight nodes. Red edges belong to E1, features xi and labels yi are shown beside every node. For nodes v1 and v2 we show the edge e12 as an example. As shown, the label of each node is the second feature of its neighbor, where a red edge connects them. The edge homophily ratio is h = 12/28 ≈ 0.43.
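The two-step sampling loop described in the figure caption (score 1-hop neighbors of a target node, keep k of them, then repeat over the 2-hop neighborhood) can be sketched in plain Python. This is only an illustration of the control flow, not the paper's method: GRAPES computes inclusion probabilities with a learned sampling GNN, whereas the `score` function and top-k selection below are hypothetical stand-ins.

```python
import random

def sample_neighbors(adj, frontier, score, k, rng):
    """Collect neighbors of the frontier nodes and keep the k highest-scored.

    `adj` is an adjacency dict {node: list_of_neighbors}; `score` is a
    placeholder for the learned inclusion-probability model.
    """
    candidates = set()
    for node in frontier:
        candidates.update(adj.get(node, ()))
    candidates -= set(frontier)  # don't re-sample the frontier itself
    # Rank by score; rng.random() breaks ties between equal scores.
    ranked = sorted(candidates, key=lambda n: (score(n), rng.random()),
                    reverse=True)
    return ranked[:k]

def two_hop_sample(adj, targets, score, k, seed=0):
    """Sample k nodes at hop 1, then k more from their neighborhood."""
    rng = random.Random(seed)
    hop1 = sample_neighbors(adj, targets, score, k, rng)
    hop2 = sample_neighbors(adj, hop1, score, k, rng)
    return hop1, hop2

adj = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5], 3: [0], 4: [1], 5: [2]}
hop1, hop2 = two_hop_sample(adj, targets=[0], score=lambda n: -n, k=2)
print(hop1, hop2)
```

In the real pipeline the sampled subgraph is passed to the downstream GNN, and the classification loss provides the learning signal (the blue line in Figure 1) that updates the sampler; here the score is fixed, so only the sampling structure is shown.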
@arXiv_csAI_bot@mastoxiv.page
2025-07-23 09:48:12

CHIMERA: Compressed Hybrid Intelligence for Twin-Model Enhanced Multi-Agent Deep Reinforcement Learning for Multi-Functional RIS-Assisted Space-Air-Ground Integrated Networks
Li-Hsiang Shen, Jyun-Jhe Huang
arxiv.org/abs/2507.16204

@arXiv_csAI_bot@mastoxiv.page
2025-06-24 11:54:00

Airalogy: AI-empowered universal data digitization for research automation
Zijie Yang, Qiji Zhou, Fang Guo, Sijie Zhang, Yexun Xi, Jinglei Nie, Yudian Zhu, Liping Huang, Chou Wu, Yonghe Xia, Xiaoyu Ma, Yingming Pu, Panzhong Lu, Junshu Pan, Mingtao Chen, Tiannan Guo, Yanmei Dou, Hongyu Chen, Anping Zeng, Jiaxing Huang, Tian Xu, Yue Zhang

@arXiv_physicscompph_bot@mastoxiv.page
2025-08-18 08:10:30

An efficient and robust high-order compact ALE gas-kinetic scheme for unstructured meshes
Yibo Wang, Xing Ji, Liang Pan
arxiv.org/abs/2508.11283