She should get a library card.
She tried but she couldn’t read the application.
Perfect use of the meme
time to fill site code with randomly generated garbage text that humans will never see but crawlers will gobble up?
Until you realize that you’re the one paying the network/access fees for serving all that extra traffic.
Accessibility tho
I don’t think it’s a bad idea, but it’s largely dependent on the crawler. I can’t speak for AI-based crawlers, but typical scraping either targets specific elements on a page or grabs the whole page and parses it for what you’re looking for. In both instances, your content has already been scraped and added to the pile. Overall, I have to wonder how long “poisoning the well” is going to work. You can take me with a grain of salt, though; I work on detecting bots for a living.
I work on detecting bots for a living.
You should just tell people you’re a blade runner.
I’m a blade runner. 😁
You see a turtle, upended on the hot asphalt. As you pass it, you do not stop to help. Why is that?
also that job title is cool as fuck
All this robots.txt stuff “perplexes” me.
The TikTok spider has been a real offender for me. For one site I host, it burned through 3TB of data over 2 months requesting the same 500 images over and over. It was ignoring robots.txt too; I ended up having to block its user agent.
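For anyone needing to do the same, here’s a rough sketch of a user-agent block in Nginx (assuming the site sits behind Nginx; “bytespider” is the name ByteDance’s crawler usually reports, but check your access logs for the exact string before relying on it):

# inside the relevant server { } block: drop requests from a misbehaving crawler
if ($http_user_agent ~* "bytespider") {
    return 403;   # any response works; 403 keeps it cheap
}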
Are you sure the caching headers your server is sending for those images are correct? If your server is telling the client to not cache the images, it’ll hit the URL again every time.
If the image at a particular URL will never change (for example, if your build system inserts a hash into the file name), you can use a far-future expires header to tell clients to cache it indefinitely (e.g. expires max in Nginx).

Thanks for the suggestion, turns out there are no cache headers on these images. They indeed never change, I’ll try that update. Thanks again
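In case it helps anyone else following along, a minimal sketch of that Nginx setup (assuming the images are static files served straight from disk and genuinely never change at a given URL):

# inside the relevant server { } block: far-future caching for image files
location ~* \.(png|jpe?g|gif|webp)$ {
    expires max;   # sends a far-future Expires header and a matching Cache-Control max-age
}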
TBF, pushing a site to the public while adding a “no scraping” rule is a bit of a shitty practice; and pushing it while adding a “no scraping, unless you are Google” rule is a giant shitty practice.

Rules for politely scraping the site are fine. But there will always be people who disobey them, so you must also actively enforce those rules. So I’m not sure robots.txt is really useful at all.
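For reference, the “nobody scrapes, except Google” pattern being criticized is roughly this in robots.txt:

# allow Googlebot everything, shut everyone else out
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /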
People really should be providing a sitemap.xml file
How would a site make itself accessible to the internet in general while also not allowing itself to be scraped using technology?
robots.txt does rely on being respected, just like no trespassing signs. The lack of enforcement is the problem, and keeping robots.txt to track the permissions would make it effective again.
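To make that concrete, one way a site can stay reachable for search indexing while asking AI crawlers to stay out is a robots.txt along these lines (GPTBot, CCBot and Google-Extended are the published tokens for OpenAI, Common Crawl and Google’s AI-training opt-out; whether they’re honored is exactly the enforcement problem):

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# everyone else, regular search crawlers included, is still allowed
User-agent: *
Disallow: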
I am agreeing, just with a slightly different take.
No it’s not, what a weird take. If I publish my art online for enthusiasts to see, it’s not automatically licensed to everyone to distribute. If I specifically want to forbid entities I have huge ethical issues with (such as Google, OpenAI et al.) from scraping and transforming my work, I should be able to.
Nothing in my post (or in robots.txt) has any relation to distributing your content.
What else would they scrape your data for? Sure some could be for personal use but most of the time it will be to redistribute in a new medium. Like a recipe app importing recipes.
Indexing is what “scrapers” mostly do.
That’s how search engines work. If you don’t allow any scraping don’t be surprised if you get no visitors.
Search engine scrapers index. But that’s a subset of scrapers.
There are data scrapers and content scrapers, and these are becoming more prolific as AI takes off and ppl need to feed it data.
This post is specifically about AI scrapers.
I think by shitty they mean ineffective, not poor manners.