
2025-07-31 08:52:37
for i in {00000..00496}; do wget https://virusshare.com/hashfiles/VirusShare_$i.md5; done
echo MD5 > VS_full.txt
for i in {00000..00496}; do grep -v '^#' VirusShare_$i.md5 >> VS_full.txt; done
kthxbye :)
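And then, one way to actually use the aggregated list — a sketch of mine, not part of the original post. The demo lines fake a tiny VS_full.txt so the snippet runs standalone; normally it comes from the loops above. The sample hash happens to be the MD5 of empty input:

```shell
# demo setup: a tiny stand-in for the real VS_full.txt built above
printf 'MD5\nd41d8cd98f00b204e9800998ecf8427e\n' > VS_full.txt
# hash the file you want to check (here: empty input, for the demo)
hash=$(md5sum /dev/null | cut -d' ' -f1)
# case-insensitive lookup against the list
grep -qi "^$hash" VS_full.txt && echo "known sample" || echo "not listed"
```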
from my link log —
ai.robots.txt: A list of AI agents and robots to block.
https://github.com/ai-robots-txt/ai.robots.txt
saved 2025-01-16
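For flavor, the kind of stanza that list compiles to (GPTBot, CCBot, and ClaudeBot are real crawler tokens; the full list is much longer):

```
User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
Disallow: /
```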
How about having a new /.aibots.txt (or maybe just /.ai.txt)?
All AI-focused crawlers expected to read it.
Start off with the exact same syntax as /robots.txt - then there’s scope for adding genAI-specific stuff like IP claims and optimized paths and so on.
#genAI
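A sketch of what such a file could look like - the /ai.txt path, the extension directives, and their values are all invented for illustration, not any standard:

```
# /ai.txt (hypothetical)
User-agent: *
Disallow: /drafts/

# hypothetical genAI-specific extensions
License: https://example.com/ai-license
Train: no
Summarize: yes
```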
I found this simultaneously very informative and extremely obvious in hindsight. It reminded me of when everything seemed a lot more straightforward. I think that I might try this out.
https://www.al3rez.com/todo-txt-journey
https://slate.com/technology/2014/07/a-killer-robots-txt-google-easter-egg.html
see this is wrong, the T-1000’s user agent would actually be
Mozilla/5.0 (T-1000; CPU like T-800) Skynet (KHTML, like Gecko) Terminator version 1000 (like Human)…
A previously unknown regular satellite of #Uranus has been detected in a series of images obtained by the Near-Infrared Camera (NIRCam) onboard the James Webb Space Telescope on 2025 Feb. 2: http://www.cbat.eps.harvard.edu/iau/cbet/005500/CBET005593.txt -> https://science.nasa.gov/blogs/webb/2025/08/19/new-moon-discovered-orbiting-uranus-using-nasas-webb-telescope/ - the object is located at a projected radial distance of 56,250 ± 250 km from Uranus' center in the planet's equatorial plane, initial astrometry is consistent with the moon orbiting on a nearly circular orbit with an orbital period of 0.402 days, and the observed IR flux from the object indicates a radius of 4-5 km, placing it well below the detection threshold of earlier images from Voyager and the Hubble Space Telescope.
The maga-regime has opened the floodgates to let purported AI company 'bots' steal everything they can find. (Goodbye robots.txt, goodbye terms of service, goodbye copyright.)
To me that suggests that anyone wanting to take information - even highly sensitive stuff such as medical, financial, or even classified data - can now raise a defense that they are just gathering data to feed their AI. (A smart criminal would prepare to use that defense by actually buying an Nvidia AI c…
📝🗃️ 𝗿𝗱𝗼𝗰𝗱𝘂𝗺𝗽: Dump ‘R’ Package Source, Documentation, and Vignettes into One File for use in LLMs #rstats #LLM is on CRAN https://www.ekotov.pro/rdocdum…
"DOS users will have trouble otherwise, so please keep filenames to combinations of up to eight uppercase letters, underscores, and digits (8.3 format)" -- README~1.TXT
As we head into a blizzard of infosec news, stay ahead of the curve by checking out today's Metacurity for the latest developments, including
--Ukraine claims major hack of Russian nuclear submarine,
--SonicWall is aware of flaw exploitation,
--Perplexity is stealthily evading robots.txt,
--FinCEN warns of crypto ATM crimes,
--Vietnamese hackers are targeting thousands,
--Informants' data stolen in a Louisiana sheriff's office ransomware attack, …
Been designing distributed counters for NATS. Pretty happy with this.
50k/second unoptimised and on a single counter - but we will support aggregation of regional to global etc.
Hard dist sys problems made trivial to use and operate 💪💪
https://gist.github.com/ripienaar/d95d
From the archive: how Jasiu tried to take his bike by train from Katowice to the Bełsznica area.
http://tek.org.pl/psota-ic.txt
If anyone needs it in standard ("Warsaw") Polish, there's an account below, for example:
Fsck GMail!
@ IN TXT "v=spf1 all"
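(An aside of mine: "v=spf1 all" tells receivers to pass mail from any sender for the domain; a domain that sends no mail at all would publish the opposite:)

```
@ IN TXT "v=spf1 -all"
```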
I just discovered TXT... feels all kinds of uplifting for a Monday morning :)
#TomorrowXTogether #KPop
Cloudflare says Perplexity uses stealth crawling techniques, like undeclared user agents and rotating IP addresses, to evade robots.txt rules and network blocks (Cloudflare)
https://blog.cloudflare.com/perplexity-is-using-stea…
🥱
The day SHALL start.
Regards not given,
RFC2119
https://ietf.org/rfc/rfc2119.txt
"Today, over two and a half million websites have chosen to completely disallow AI training through our managed robots.txt feature or our managed rule blocking AI Crawlers. Every Cloudflare customer is now able to selectively decide which declared AI crawlers are able to access their content in accordance with their business objectives.
We expected a change in bot and crawler behavior based on these new features, and we expect that the techniques bot operators use to evade detecti…
AI misuse
Cloudflare accuses the AI provider #Perplexity of using undeclared crawlers to gain access to blocked websites.
Despite robots.txt bans and IP blocks, Perplexity is said to be covertly scraping content with rotating user agents and IPs.
That would be a violation of established web standards and a disregard of website preferences.
Is there anything I can put in robots.txt that will stop Scrapy?
Failing that, let’s take the ship up and nuke the site from orbit. It’s the only way to be sure.
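Short of the nuke: Scrapy's default user-agent token is "Scrapy", so a stanza like this targets it by name - but it only helps when the operator has left robots.txt compliance (ROBOTSTXT_OBEY) enabled, which a hostile scraper won't:

```
User-agent: Scrapy
Disallow: /
```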
There is now also a CBET about the new interstellar #comet 3I/ATLAS: http://www.cbat.eps.harvard.edu/iau/cbet/005500/CBET005578.txt - it comes with an even more precise orbit based on astrometry back to 5 June and predicts 13th magnitude with 60° elongation after perihelion in November. The current magnitude is about 17.7.
So, quick question: a colleague sent me a .msg file and I need the addresses in it to send a mailing to everyone on the list. But I can't manage to create a new email from the file in any way. If I open it as a txt file, it has duplicates of every name, with ' ' characters around each name/address.
How on earth do I avoid copying name by name into a new email?
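One way out, assuming the dump can be saved as a plain text file (list.txt is a stand-in name, and the sample lines here just imitate the quoted-duplicate format described):

```shell
# demo input imitating the described dump -- replace with the real saved-as-txt file
printf "'Anna Ek' <anna@example.se>\n'Anna Ek' <anna@example.se>\n'Bo Ahl' <bo@example.se>\n" > list.txt
# pull out just the addresses, then collapse duplicates
grep -oE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+' list.txt | sort -u > addresses.txt
# paste the result into the To/BCC field
cat addresses.txt
```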