Long; central Massachusetts colonial history
Today on a whim I visited a site in Massachusetts marked as "Huguenot Fort Ruins" on OpenStreetMap. I drove out with my 4-year-old through increasingly rural central Massachusetts forests & fields to end up on a narrow street near the top of a hill beside a small field. The neighboring houses had huge lawns, some with tractors.
Appropriately for this day and this moment in history, the history of the site turns out to be a microcosm of America. Across the field beyond a cross-shaped stone memorial stood an info board with a few diagrams and some text. The text of the main sign (including typos/misspellings) read:
"""
Town Is Formed
Early in the 1680's, interest began to generate to develop a town in the area west of Natick in the south central part of the Commonwealth that would be suitable for a settlement. A Mr. Hugh Campbell, a Scotch merchant of Boston petitioned the court for land for a colony. At about the same time, Joseph Dudley and William Stoughton also were desirous of obtaining land for a settlement. A claim was made for all lands west of the Blackstone River to the southern land of Massachusetts to a point northerly of the Springfield Road then running southwesterly until it joined the southern line of Massachusetts.
Associated with Dudley and Stoughton was Robert Thompson of London, England, Dr. Daniel Cox and John Blackwell, both of London and Thomas Freak of Hannington, Wiltshire, as proprietors. A stipulation in the acquisition of this land being that within four years thirty families and an orthodox minister settle in the area. An extension of this stipulation was granted at the end of the four years when no group large enough seemed to be willing to take up the opportunity.
In 1686, Robert Thompson met Gabriel Bernor and learned that he was seeking an area where his countrymen, who had fled their native France because of the Edict of Nantes, were desirous of a place to live. Their main concern was to settle in a place that would allow them freedom of worship. New Oxford, as it was the so-named, at that time included the larger part of Charlton, one-fourth of Auburn, one-fifth of Dudley and several square miles of the northeast portion of Southbridge as well as the easterly ares now known as Webster.
Joseph Dudley's assessment that the area was capable of a good settlement probably was based on the idea of the meadows already established along with the plains, ponds, brooks and rivers. Meadows were a necessity as they provided hay for animal feed and other uses by the settlers. The French River tributary books and streams provided a good source for fishing and hunting. There were open areas on the plains as customarily in November of each year, the Indians burnt over areas to keep them free of underwood and brush. It appeared then that this area was ready for settling.
The first seventy-five years of the settling of the Town of Oxford originally known as Manchaug, embraced three different cultures. The Indians were known to be here about 1656 when the Missionary, John Eliott and his partner Daniel Gookin visited in the praying towns. Thirty years later, in 1686, the Huguenots walked here from Boston under the guidance of their leader Isaac Bertrand DuTuffeau. The Huguenot's that arrived were not peasants, but were acknowledged to be the best Agriculturist, Wine Growers, Merchant's, and Manufacter's in France. There were 30 families consisting of 52 people. At the time of their first departure (10 years), due to Indian insurrection, there were 80 people in the group, and near their Meetinghouse/Church was a Cemetery that held 20 bodies. In 1699, 8 to 10 familie's made a second attempt to re-settle, failing after only four years, with the village being completely abandoned in 1704.
The English colonist made their way here in 1713 and established what has become a permanent settlement.
"""
All that was left of the fort was a crumbling stone wall that would have been the base of a higher wooden wall according to a picture of a model (I didn't think to get a shot of that myself). Only trees and brush remain where the multi-story main wooden building was.
This story has so many echoes in the present:
- The rich colonialists from Boston & London agree to settle the land, buying/taking land "rights" from the colonial British court that claimed jurisdiction without actually having control of the land. Whether the sponsors ever actually visited the land themselves I don't know. They surely profited somehow, whether from selling on the land rights later or collecting taxes/rent or whatever, but they needed poor laborers to actually do the work of developing the land (& driving out the original inhabitants, who had no say in the machinations of the Boston court).
- The land deal was on the condition that the capital-holders who stood to profit would find settlers to actually do the work of colonizing. The British crown wanted more territory to be controlled in practice, not just in theory, but they weren't going to be the ones to do the hard work.
- The capital-holders actually failed to find enough poor suckers to do their dirty work for 4 years, until the Huguenots, fleeing religious persecution in France, were desperate enough to accept their terms.
- Of course, the land was only so ripe for settlement because of careful tending over centuries by the natives who were eventually driven off, and whose land management practices are abandoned today. Given the mention of praying towns (& dates), this was after King Philip's War, which resulted in at least some forced resettlement of native tribes around the area, but the descendants of those "Indians" mentioned in this sign are still around. For example, this is the site of one local band of Nipmuck, whose namesake lake is about 5 miles south of the fort site: #LandBack.
Exoplanet Atmospheric Refraction Effects in the #Kepler Sample: https://arxiv.org/abs/2507.02126 -> "We present an analysis on the detection viability of refraction effects in Kepler's exoplanet atmospheres using binning techniques for their light curves in order to compare against simulated refraction effects. We split the Kepler exoplanets into sub-populations according to orbital period and planetary radius, then search for out-of-transit changes in the relative flux associated with atmospheric refraction of starlight. The presence of refraction effects - or lack thereof - may be used to measure and set limits on the bulk properties of an atmosphere, including mean molecular weight or the presence of hazes.
In this work, we use the presence of refraction effects to test whether exoplanets above the period-radius valley have H/He atmospheres, which high levels of stellar radiation could evaporate away, in turn leaving rocky cores below the valley. We find strong observational evidence of refraction effects for exoplanets above the period-radius valley based on Kepler photometry, however those related to optically thin H/He atmospheres are not common in the observed planetary population. This result may be attributed to signal dampening caused by clouds and hazes, consistent with the optically thick and intrinsically hotter atmospheres of Kepler exoplanets caused by relatively close host star proximity."
I noticed that OpenStreetMap somehow lost Lake Michigan, so I went investigating. Apparently back in April somebody accidentally changed it from a "lake" to a "school" and it's taking months for the fix to propagate across all regions/renderers.
https:/…
I'm working with a few organisations with different meeting cultures at the moment, and I really enjoyed this post about levels of note taking. #linkTuesday Which level do you use?
So the basic idea is that we first compute a "level" for whatever interaction by adding beneficial modifiers and subtracting harmful ones. Imagine most modifiers are smallish integers like 2 or -3 (though they can be non-integers too). Each level can be thought of as making things twice as good/bad, although this only applies directly when the odds are balanced. The actual formula starts with a 50/50 chance of "success" at level 0, and then each positive level halves the chance of failure, or, if the level is negative, each negative level halves the chance of success (note that halving the chance of failure is not the same as doubling the chance of success).
The intuitive explanation is that you start with a coin flip. Then if the level is positive, you flip that many additional coins and succeed if any single coin succeeds, but if the level is negative, you have to flip that many additional coins and succeed only if *all* flips succeed.
For example, if I have a dagger with a +5 crit modifier and I attack an opponent with no armor modifiers, I'd have to win any 1 of 6 coin flips to score a crit (p = 1 - (1/(2^6)) = 63/64). Increasing my crit modifier by 1 ups my chances only slightly, to 127/128. This is obviously pretty poor return, indicating that the +5 I already have is very strong. If the opponent had armor with -3 to crits, the interaction is now level 2, so the crit chance is 7/8, which is still pretty good. We can see from these examples that the basic system rewards a small level advantage a lot, but the rewards diminish rapidly. The system has a few avenues for tweaking how it works, though, that can let us modify this. There's also a potential benefit (though sometimes a drawback) that no matter what the level gap, there's an effective limit to how much the interaction swings.
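To make the coin-flip reading concrete, here's a minimal simulation sketch (Python; the `resolve` name and the explicit 50/50 coins are my own framing, not from any particular system):

```python
import random

def resolve(level: int) -> bool:
    # Flip one base coin plus abs(level) extra coins.
    # Positive (or zero) levels succeed if ANY coin comes up heads;
    # negative levels succeed only if ALL coins come up heads.
    flips = [random.random() < 0.5 for _ in range(1 + abs(level))]
    return any(flips) if level >= 0 else all(flips)

# Rough check against the dagger example: level 5 should land near 63/64 ≈ 0.984.
trials = 100_000
print(sum(resolve(5) for _ in range(trials)) / trials)
```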
Dynamic Chunking for End-to-End Hierarchical Sequence Modeling
Sukjun Hwang, Brandon Wang, Albert Gu
https://arxiv.org/abs/2507.07955 https://arxiv.org/pdf/2507.07955 https://arxiv.org/html/2507.07955
arXiv:2507.07955v1 Announce Type: new
Abstract: Despite incredible progress in language models (LMs) in recent years, largely resulting from moving away from specialized models designed for specific tasks to general models based on powerful architectures (e.g. the Transformer) that learn everything from raw data, pre-processing steps such as tokenization remain a barrier to true end-to-end foundation models. We introduce a collection of new techniques that enable a dynamic chunking mechanism which automatically learns content- and context-dependent segmentation strategies learned jointly with the rest of the model. Incorporating this into an explicit hierarchical network (H-Net) allows replacing the (implicitly hierarchical) tokenization-LM-detokenization pipeline with a single model learned fully end-to-end. When compute- and data-matched, an H-Net with one stage of hierarchy operating at the byte level outperforms a strong Transformer language model operating over BPE tokens. Iterating the hierarchy to multiple stages further increases its performance by modeling multiple levels of abstraction, demonstrating significantly better scaling with data and matching a token-based Transformer of twice its size. H-Nets pretrained on English show significantly increased character-level robustness, and qualitatively learn meaningful data-dependent chunking strategies without any heuristics or explicit supervision. Finally, the H-Net's improvement over tokenized pipelines is further increased in languages and modalities with weaker tokenization heuristics, such as Chinese and code, or DNA sequences (nearly 4x improvement in data efficiency over baselines), showing the potential of true end-to-end models that learn and scale better from unprocessed data.
toXiv_bot_toot
The full formula for the probability of "success" is:
p = {
  1/(2^(-n + 1))        if n is negative, or
  1 - (1/(2^(n + 1)))   if n is zero or positive
}
(Both branches have the same value when n is 0, so the behavior is smooth around the origin.)
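Transcribed directly into code (a sketch; the function name is mine), with the worked examples from above as spot checks:

```python
def success_chance(n: int) -> float:
    # Each positive level halves the chance of failure;
    # each negative level halves the chance of success.
    if n < 0:
        return 1 / 2 ** (-n + 1)
    return 1 - 1 / 2 ** (n + 1)

print(success_chance(5))   # 0.984375 == 63/64 (the +5 dagger, no armor)
print(success_chance(2))   # 0.875    == 7/8   (the +5 dagger vs. -3 armor)
print(success_chance(0))   # 0.5      (both branches agree at level 0)
```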
How can we tweak this?
First, we can introduce fixed success and/or failure chances unaffected by level, with this formula only taking effect if those don't apply. For example, you could do 10% failure, 80% by formula, and 10% success to keep things from being too sure either way even when levels are very high or low. On the other hand, this flattening makes the benefit of extra advantage levels even less exciting.
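One way that tweak could look, sketched with the 10% / 80% / 10% split from the example (the name `clamped_chance` is mine):

```python
def clamped_chance(level: int) -> float:
    # 10% of outcomes auto-fail, 10% auto-succeed, and the remaining 80%
    # are resolved by the level formula.
    p_formula = 1 / 2 ** (-level + 1) if level < 0 else 1 - 1 / 2 ** (level + 1)
    return 0.10 + 0.80 * p_formula

print(clamped_chance(5))    # 0.8875 instead of ~0.984: high levels cap out below 90%
print(clamped_chance(-5))   # 0.1125 instead of ~0.016: low levels are floored above 10%
```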
Second, we could allow for gradations of success/failure, and treat the coin pools I used to explain the math a bit like dice pools. An in-between option could require linearly more successful flips to reach each next higher grade of success. For example, simple success on a crit roll might mean dealing 1.5x damage, but if you succeed on 2 of your flips, you get 9/4 damage, or on 4 flips 27/8, or on 7 flips 81/16. In this world, stacking crit levels might be a viable build, and just giving up on armor would be super dangerous. In the particular case I was using this for just now, I can't easily do gradations of success (that's the reason I turned to probabilities in the first place), but I think I'd favor this approach when feasible.
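A sketch of that graded version (the thresholds 1/2/4/7 and the 1.5x-per-grade multiplier come from the example above; the naming is mine, and negative levels are left out for simplicity):

```python
import random

def crit_multiplier(level: int) -> float:
    # Flip the pool, count successes, then climb grades: 1 success for grade 1,
    # 2 for grade 2, 4 for grade 3, 7 for grade 4, ... (each step up requires one
    # more additional success than the previous step). Each grade is another 1.5x.
    flips = 1 + max(level, 0)
    successes = sum(random.random() < 0.5 for _ in range(flips))
    grade, next_threshold, step = 0, 1, 1
    while successes >= next_threshold:
        grade += 1
        next_threshold += step   # thresholds: 1, 2, 4, 7, 11, ...
        step += 1
    return 1.5 ** grade          # 1.0, 1.5, 2.25 (9/4), 3.375 (27/8), 5.0625 (81/16), ...
```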
The main innovation here over simple dice pools is how to handle situations where the number of dice should be negative. I'm almost certain it's not a truly novel innovation though, and some RPG fan can point out which system already does this (please actually do this, I'm an RPG nerd too at heart).
I'll leave this with one more tweak we could do: what if the number 2 in the probability equation were 3, or 2/3? I think this has a similar effect to just scaling all the modifiers a bit, but the algebra escapes me in this moment and I'm a bit lazy. In any case, reducing the base of the probability exponent should let you get a few more gradations near 50%, which is probably a good thing, since the default goes from 25% straight to 50% and then to 75% with no integer stops in between.
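As a sketch of that last tweak, one way to swap out the 2 while keeping the 50/50 start at level 0 is to let each level multiply the failure (or success) chance by 1/base instead of 1/2 (the `base` knob and the function name are my own framing, not necessarily the intended reading):

```python
def success_chance_base(n: int, base: float = 2.0) -> float:
    # base=2 reproduces the original formula; smaller bases flatten the curve,
    # giving more usable gradations near 50%.
    if n < 0:
        return 0.5 * base ** n          # (1/2) * (1/base)^(-n)
    return 1 - 0.5 * base ** (-n)       # 1 - (1/2) * (1/base)^n

for n in range(-2, 3):
    print(n, round(success_chance_base(n, 1.5), 3), round(success_chance_base(n, 3), 3))
# base 1.5: 0.222, 0.333, 0.5, 0.667, 0.778 (gentler steps around 50%)
# base 3:   0.056, 0.167, 0.5, 0.833, 0.944 (steeper steps)
```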
Just finished watching series 1 and 2 of #Taskmaster.
Series 2 is definitely when they realized that contestants completing the tasks not by the spirit of the instructions, but by the letter of the instructions, IS the show.
Contestants finding loopholes, using alternate word definitions, challenging grammar, etc. makes the show more than simply who can complete the task the …
This https://arxiv.org/abs/2410.11295 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCR_…